Quick Look for Observers¶

The difficulty with high-dispersion spectral analysis is that producing even a small example spectrum takes a lot of work. In my experience, it is surprisingly tedious to quickly check whether, for example, a water line was present in an observation. AutoRT was made for that purpose. Here is an example that quickly computes a spectrum assuming water, CO, and CIA:

```python
import numpy as np
import matplotlib.pyplot as plt
from exojax.spec import AutoRT
from exojax.utils.grids import wavenumber_grid

# compute a spectrum in 23200-23300 AA
nus, wav, res = wavenumber_grid(23200, 23300, 1000, "AA")
Parr = np.logspace(-8, 2, 100)
Tarr = 1270.0 * (Parr / 1.0) ** 0.1           # T-P profile
autort = AutoRT(nus, 1.e5, 2.33, Tarr, Parr)  # g=1.e5 cm/s2, mmw=2.33
autort.addcia("H2-H2", 0.74, 0.74)            # CIA, mmr(H2)=0.74
autort.addmol("ExoMol", "CO", 0.01, crit=1.e-45)    # CO lines, mmr(CO)=0.01
autort.addmol("ExoMol", "H2O", 0.004, crit=1.e-40)  # H2O lines, mmr(H2O)=0.004
F = autort.rtrun()
F = autort.spectrum(nus, 100000.0, 18.0, 0.0)       # R=100,000 and Vsini=18 km/s

plt.plot(wav[::-1], F, label="CO+H2O emission")
plt.legend()
plt.show()
```

AutoXS was made for quick analysis of molecular cross sections:

```python
import numpy as np
import matplotlib.pyplot as plt
from exojax.spec import AutoXS
from exojax.utils.grids import wavenumber_grid

nus, wav, res = wavenumber_grid(23200, 23300, 1000, "AA")
autoxs = AutoXS(nus, "ExoMol", "CO", memory_size=30)
xsv = autoxs.xsection(1000.0, 1.0)  # T=1000 K, P=1 bar

plt.plot(wav[::-1], xsv, label="CO")
plt.yscale("log")
plt.show()
```
Over lunch the other day, my friend and colleague Agata Ciabattoni told me about her paper at this year’s LICS, “From axioms to analytic rules in nonclassical logics”. In it, she and her co-authors Nikolaos Galatos and Kazushige Terui present an intriguing and very general result: Suppose you have a logic which can be axiomatized in full Lambek calculus with exchange (that’s intuitionistic logic without weakening or contraction, but with exchange) by adding axioms. If the additional axioms aren’t too complex, there’s a systematic way of generating an analytic hypersequent calculus for your logic, i.e., a systematic way of converting the additional axioms into a structural rule for a hypersequent calculus in such a way that cut is eliminable. This procedure applies to a wide range of substructural logics, but also to superintuitionistic logics. (UPDATE: more detail in the next post.) So that got us thinking: what other general, systematic approaches to the generation of calculi are there? Agata’s approach generates calculi of a specific form (hypersequent calculi) from other calculi (Hilbert-style calculi). Then I know of two approaches that systematically generate calculi from a semantics. Arnon Avron, Beata Konikowska, and Anna Zamansky have been doing a lot of work on logics given by what they call non-deterministic matrices. I wrote a while ago about the approach I detailed in my undergraduate thesis, which goes back to work by Rousseau in the 1960s: systematic (i.e., automatic) generation of sequent, tableau, and natural deduction calculi for a logic given by finite truth tables. These are the only systematic results I know of, but that just shows my ignorance! There must be others! I’m sure there are general results in modal correspondence theory, for instance, to obtain axioms and perhaps tableau systems, etc., for modal logics from conditions on frames. Can anyone help me (and Agata) out here?
2 thoughts on “Bleg: Systematic Approaches to Generation of Logical Calculi”

Richard, on this issue I would like to recommend that you and Agata have a look at our paper “Two’s company: The humbug of many logical values”, a preprint of which can be found here. We exhibit there a constructive procedure for reducing a ‘sufficiently expressive’ finite-valued semantics into an alternative adequate (non-truth-functional) bivalued semantics, and show next how to extract 2-signed tableau systems from that. Other important papers on the issue are mentioned in the bibliography of that paper, in particular the paper by Carnielli in the JSL in 1987, where he extends the work of Rousseau that you mentioned. For an even more practical work on the automatic generation of classic-like (non-analytic) tableaux for finite-valued logics, I delivered a paper this year at IJCAR in Sydney, where we implement the algorithm mentioned in the above paper in order to generate theory files ready to use by the Isabelle proof assistant. The corresponding paper, currently submitted for publication, is called “Towards fully automated axiom extraction for finite-valued logics”, and I will send a copy directly to your email and to anyone else who expresses interest in it. You might also like to have a look at this implementation in ML. Finally, on the extraction of axioms for modal systems, there is some nice work done by Renate Schmidt; please do have a look at her webpage.

“Superdeduction” deals with transforming certain kinds of definitional axioms into sequent calculus rules. Have a look at: http://hal.archives-ouvertes.fr/docs/00/14/67/35/PDF/superdeduction.pdf
Best regards,
Bruno Woltzenlogel Paleo
http://www.logic.at/people/bruno/
I came back from TPAC (the W3C’s Technical Plenary/Advisory Committee meeting week) earlier this month, where I attended the Browser Testing and Tools Working Group’s meetings on WebDriver. Unlike previous meetings, this was the first time we had a reasonably up-to-date specification text to discuss. That clearly paid off, because we were able to make some defining decisions on long-standing, controversial topics. It shows how important it is for assigned action items to be completed in time before a specification meeting, and to have someone with time dedicated to working on the spec. The WG decided to punt the element visibility, or “displayedness”, concept to level 2 of the specification and in the meantime push for better visibility primitives in the platform. I’ve previously outlined in detail the reasons why it’s not just a bad idea—but impossible—for WebDriver to specify this concept. Instead we will provide a non-normative description of Selenium’s visibility atom in an appendix, to give some level of consistency for implementors. This does not mean we are giving up on visibility. There is general agreement in the WG that it is a desirable feature, but since it’s impossible to define naked-eye visibility using existing platform APIs, we call upon other WGs to help outline this. Visibility of elements in the viewport is not a primitive that naturally fits within the scope of WebDriver. Our decision has implications for element interactability, which is used to determine whether you can interact with an element. This previously relied on the element visibility algorithm, but as an alternative to the tree-traversal visibility algorithm we dismissed, we are experimenting with a somewhat naïve hit-testing alternative that takes the centre coordinates of the portion of the element inside the viewport and calls elementsAtPoint, ignoring elements that are opaque.
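The hit-testing idea can be sketched in a few lines: intersect the element's bounding rectangle with the viewport and take the centre of the intersection. This is a minimal Python sketch with hypothetical names, not the spec's actual algorithm:

```python
def in_view_centre(elem, viewport):
    """Centre point of the portion of an element's bounding rect that lies
    inside the viewport. Rects are (x, y, width, height) tuples.
    Hypothetical helper, sketching the approach described above."""
    ex, ey, ew, eh = elem
    vx, vy, vw, vh = viewport
    # Intersect the element rect with the viewport rect
    left = max(ex, vx)
    top = max(ey, vy)
    right = min(ex + ew, vx + vw)
    bottom = min(ey + eh, vy + vh)
    if right <= left or bottom <= top:
        return None  # element lies entirely outside the viewport
    return ((left + right) / 2, (top + bottom) / 2)
```

The returned point would then be the coordinate handed to a hit-testing call such as elementsAtPoint.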
We had previously decided to make two separate commands for getting attributes and properties. This was controversial because it deviates from the behaviour of Selenium’s getAttribute, which conflates the DOM concepts of attributes and properties. Because the WG decided to stick with David Burns’s proposal on special-casing boolean attributes, the good news is that the Selenium behaviour can be emulated using WebDriver primitives. In practice this means that when Get Element Attribute is called for an element that carries a boolean attribute, it will return the string “true” rather than the DOM attribute value, which would normally be an empty string. We return a string so that dynamically typed programming languages can evaluate it into something truthful, and because there is a belief in the WG that an empty string return value for such attributes would be confusing to users. Because we don’t know which attributes are boolean attributes from the DOM’s point of view, it’s not the cleanest approach, since it means we must maintain a hard-coded list in WebDriver. It will also arguably cause problems for custom elements, because it is not a given that they mirror the default attribute values. One of the requirements for moving to REC is writing a decent test suite. WebDriver is in the fortunate position that it’s an evolution of existing implementations, each with their own body of tests, many of which we can probably re-purpose. One of the challenges with the existing tests is that the harness does not easily allow for testing the lower-level details of the protocol. So far I have been able to make a start by merging Microsoft’s pending pull requests. Not all of the merged tests match what the specification mandates any longer, but we decided to do this before any substantial harness work is done, to eliminate the need for Microsoft to maintain their own fork of Web Platform Tests.
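The boolean-attribute special-casing described above can be sketched as follows. This is a minimal Python sketch, and the list of boolean attributes here is a hypothetical, abbreviated one; the real hard-coded list maintained in WebDriver is considerably longer:

```python
# Hypothetical, abbreviated set; WebDriver maintains the full hard-coded list.
BOOLEAN_ATTRIBUTES = {"checked", "selected", "disabled", "required", "readonly"}

def get_element_attribute(element_attrs, name):
    """Emulate the special-cased Get Element Attribute behaviour:
    a boolean attribute yields the string "true" if present (regardless of its
    literal DOM value, which is typically the empty string) and None if absent;
    any other attribute is returned verbatim."""
    if name in BOOLEAN_ATTRIBUTES:
        return "true" if name in element_attrs else None
    return element_attrs.get(name)
```

For example, an element whose DOM attributes are `{"checked": ""}` would report `"true"` for `checked`, which dynamically typed client bindings can evaluate as truthy.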
Microsoft and Mozilla are both working on implementations, so there is a pressing need for a test suite that reflects the realities of the specification. Vital chapters, such as Element Retrieval and Interactions, are either undefined or in such a poor state that they should be considered unimplementable. Despite these reservations, I’d say the WebDriver spec is in a better state than ever before. At TPAC we also had meetings about possible future extensions, including permissions and how WebDriver might help facilitate testing of WebBluetooth as well as other platform APIs. The WG is concurrently pushing for WebDriver to be used in Web Platform Tests to automate the “non-automatable” test cases that require human interaction or privileged access. In fact, there’s an ongoing Quarter of Contribution project sponsored by Mozilla to work on facilitating WebDriver in a sort of “meta-circular” fashion, directly from testharness.js tests. But more on that later. (-:
Is LibreTexts allowable as a reference for formal research? I am currently writing a paper that has the potential to be published. It is mathematics-heavy, so a lot of the sources I encountered are things like LibreTexts, Paul’s Online Notes, etc. My main sources are online textbooks and physical textbooks. Is it acceptable to cite them? The thing with a mathematics-heavy paper is it’s very hard to find “peer-reviewed” articles that are suitable for what you’re writing. The theoretical concept you need to support the paper will easily come from a textbook. Generally, if what you are relying on is textbooks, what you are doing is probably not at the frontiers of research and is already known or easily derived from what is in the literature (and hence not publishable), even if the derivation is obvious only to experts. Are you sure that your paper is actually publishable in a decent journal? There is some ambiguity in the formulation. Is it a paper on mathematics, or is it a paper in another field that, however, uses a lot of math? In the latter case, is it a field in which the generic audience tends to have insufficient mathematical training to follow the argument and needs help even finding suitable references? It is fine to cite actual textbooks that are published by reputable academic publishers. However, online mathematics lecture notes are poorly vetted sources and often contain errors. (I say this not just based on my experience reading lecture notes, but as someone who has posted various lecture notes online, some of which eventually evolved into actual textbooks. I can attest that the difference in reliability between lecture notes and textbooks is huge, as is the effort required to transform lecture notes into textbooks.) Thus, I would say the credibility of lecture notes as a resource for citing is low, and you should avoid citing them as much as possible.
If I were refereeing a paper that relied on such citations, I would very possibly give you trouble over this. By relying on such sources you risk misleading yourself as much as your readers. “The thing with a mathematics-heavy paper is it’s very hard to find ‘peer-reviewed’ articles that are suitable for what you’re writing.” This has not been my experience. I tend to agree with @AlexanderWoo: if you are only citing textbooks, then it seems quite likely that your research is not at the cutting edge of knowledge and your results may not be as publishable as you think. Moreover, Paul’s Online Notes is a website with tutorials about undergraduate-level math. I cannot imagine a research-level math paper that would need to cite anything from there, so if you are under the impression that it contains anything useful for citing, you may have serious misconceptions about what math research is and what counts as publishable work. In that case I’d recommend that you look for an experienced mentor to discuss your ideas with and get detailed feedback from. “The theoretical concept you need to support the paper will easily come from a textbook.” Good, so find a proper textbook to cite - one that was actually edited and polished by competent people and that a reputable publisher is putting their credibility behind. You can cite anything you want. But you’re asking the wrong question - if all of your sources are textbooks and online learning resources, is the work actually publishable? Why isn’t there peer-reviewed literature to support it? I have cited textbooks and other material before. I don’t think there is fundamentally anything wrong with it. I sometimes work on multidisciplinary projects and it can be nice to include background on a topic that may be simple/basic to an expert in field A but not to one in field B. But if the core of your paper can be explained or found in a free online textbook, that could be an issue. Maybe the topic isn’t really novel.
Or it might be a dead end (that others have explored already). I don't work in mathematics though, so maybe I'm off base.
What I want: The experiment that I want to build includes pressing a specific key for a certain amount of time. In detail: Participants need to press “space” for 500 ms, and only after the “space” key has been held for 500 ms can the target stimulus (i.e., a word) appear on the screen. If the participant presses “space” and releases their finger at the 400th ms (after the space keypress), I do not want the stimulus to appear. What I have tried: I have tried to add a code component (on each frame) which gets the keypresses with event.getKeys(), however as far as I understood this only saves the key presses on that specific frame. So if I want to see how long the key is pressed, I cannot get that information. I have read some related topics but could not find an exact answer. I would be really grateful if someone knows which code needs to be used.

I would probably stick all the event.getKeys() calls into a list and then query the length of that list, for example:

```python
# Begin Experiment: start with an empty list
isPressed = []

# Each Frame: append to the list if the key is pressed, clear the list if not
if 'space' in event.getKeys():  # substitute your desired key
    isPressed.append(True)
else:
    isPressed = []

# If the key has been pressed long enough, end the routine
if len(isPressed) > 30:  # substitute your desired duration, in frames
    continueRoutine = False
```

@TParsons Thank you so much for your response! In my code component, I do not have a “Start experiment” section. Is it a “Begin Experiment” or a new component of the 2020.2 version of PsychoPy? Currently, I am using v2020.1.3.

Oops, my poor wording! We did add a “Before Experiment” tab in 2020.2, but I just meant “Begin Experiment” - the only difference between Before and Begin is that Before runs before the screen is drawn, so either will work if you’re just defining an empty variable.

The event module isn’t really designed for timing the onset or duration of key presses, and should really be thought of as deprecated.
The suggested code above wouldn’t work as-is, because event.getKeys() will clear the event queue when the key is first detected, and not re-detect it on subsequent checks until it is released and pressed again. It would be much better to shift to the newer Keyboard class. The latter won’t report a duration for a keypress until it has actually been released, but, unlike the event module, each keypress is stamped with the time it was actually first pressed, so you can work out timing relative to that. You should set clear=False in the call to the keyboard’s .getKeys() so that it will keep reporting the status of a previously-pressed key.

Hi @Michael! Thank you for your message! I have worked on the keyboard component a bit but I guess I have not managed to do what I want. At the moment the code component (in each frame) of my routine has:

```python
keys = key_resp.getKeys(['space'], waitRelease=False, clear=False)
for key in keys:
    if key == 'space' and t > 0.5:
```

The keyboard component is set to: force end of Routine: False; allowed keys: ‘space’; store: all keys. The routine only needs to end when ‘space’ is pressed for 500 ms. So at the moment I can get all the keypresses, but I cannot see whether ‘space’ is pressed and also held pressed for a certain amount of time (500 ms in my case). So I was not sure how to structure my code. Is there any way to get key presses each frame repeatedly?

Don’t mix and match your custom Keyboard checking with a graphical Builder Keyboard component - that will also be checking for keypresses on every frame and hence will quite possibly conflict with your code. So delete the Builder component and make your own Keyboard object in code, in the “Begin Routine” tab:

```python
kb = keyboard.Keyboard()
```

Then in the “Each Frame” tab, you will want to check the keyboard for a key press, but not clear the queue, so you can keep checking for the space bar being held down.
The key won’t have a duration while it is being held down (it will be None), but you can check the time it was pressed compared to the current time (t), and if it is >= 0.5 s ago, end the trial. Adapting your code above, it should be something like this:

```python
keys = kb.getKeys(['space'], waitRelease=False, clear=False)
for key in keys:
    if key == 'space' and t - key.tDown >= 0.5:
        continueRoutine = False  # key held for at least 500 ms
```

Does it work for you? I also need to display a stimulus after some key (e.g., space) is pressed for some time (e.g., 500 ms), but this code does not work on my machine… How do I force the end of the routine when the key is not pressed anymore?

Hi @Francisco_Contreras, one efficient way would be to set a Stop Duration to a certain time period. It would force the end of the routine at the end of that duration, irrespective of whether any key is pressed or not. When solving this issue myself, I have tended to go for a mouse button press instead, because PsychoPy is better at checking the button status each frame. Note that event.getKeys() works differently offline and online.
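The timing logic in this thread can be distilled into a plain-Python sketch, independent of PsychoPy (the function and parameter names here are hypothetical, for illustration only): given the press time, the current time, and an optional release time, decide whether the key has been held for the required 500 ms.

```python
HOLD_THRESHOLD = 0.5  # required hold duration, in seconds

def stimulus_should_appear(t_down, t_now, t_up=None):
    """Return True once the key has been held for at least HOLD_THRESHOLD.

    t_down: time the key was first pressed (cf. key.tDown above)
    t_now:  current time (cf. the frame time t in PsychoPy)
    t_up:   time the key was released, or None if it is still held
    """
    if t_up is not None:
        # Key already released: only counts if it was held long enough
        return (t_up - t_down) >= HOLD_THRESHOLD
    # Key still held: compare press time against the current time
    return (t_now - t_down) >= HOLD_THRESHOLD
```

So a press at t = 0 that is released at 0.4 s never triggers the stimulus, while one still held at 0.6 s does, matching the behaviour the original poster asked for.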
Uzebox Joyrider in 01:40.07 by Noxxa - Emulator used: BizHawk 2.0.1 git interim (syncs on BizHawk 2.1.0) - Clear Story Mode

About the system and game

The Uzebox has a wiki, describing what it is: "The Uzebox is a retro-minimalist homebrew game console. It is based on an AVR 8-bit general purpose microcontroller made by Atmel. The particularity of the system is that it's based on an interrupt driven kernel and has no frame buffer. Functions such as video sync generation, tile rendering and music mixing is done realtime by a background task so games can easily be developed in C. The design goal was to be as simple as possible yet have good enough sound and graphics while leaving enough resources to implement interesting games. Emphasis was put on making it easy and fun to assemble and program for any hobbyists. The final design contains only two chips: an ATmega644 and an AD725 RGB-to-NTSC converter." The Uzebox wiki also has a page for this game.

Joyrider is a top-down driving game for the Uzebox, in the style of the classic Grand Theft Auto and Driver games. It was created by James Howard (jhhoward) for the Uzebox Coding Competition 2014, where it won 1st place. Like the classic GTAs and Drivers, it features a city to drive around in, with several things to do: free roaming, doing missions or participating in police chases, for instance. The game has a story mode with three missions, and arcade/multiplayer modes with other activities. This TAS goes through the story mode missions.

General driving notes

- The game is limited to 16 driving angles. Because of this, it's not always possible to make perfect beelines towards the next turn, although I still aim to go for the straightest and shortest lines possible. The car accelerates relatively quickly, making this an efficient method of driving.
- When an objective building is reached, the car automatically stops, and usually a brief cutscene plays with the player getting out of the car, entering the building, and returning to the car. (Sometimes other characters are involved as well.) To save time, it's usually best to park as close to the door as possible, so that less time is spent on the walking cutscenes.
- Car collisions generally don't do much to car velocity, but they do mess things up just enough that there's a slight deceleration/speed loss; therefore, car collisions are avoided throughout the run.
- Running over pedestrians or colliding with cars randomly gets you a wanted level. With some luck manipulation, this is pretty easily avoidable.

Mission 1 - Bank Job

- The first objective is to pick up a crew of 3 people to do the bank job with. The parking layout makes it easy to get the car right next to the door; this easily saves up to a second compared to going directly for the marked objective point.
- Getting a good parking spot for the bank was significantly harder, as there is not as much room, and the car also needs to not be turned so much that it can't easily exit the front of the building, and needs to be able to go south fast enough for the next objective.
- At the safehouse, I drive past the objective point on the right side, in order to get closer to the door. This again saves around a second (possibly even more) compared to parking on the objective point itself. Mission complete!

Mission 2 - Collector

- The building where the payment must be collected has another annoyingly placed objective point. I can't really reach the door without going into a full 180 degree spin (which would lose a lot of time for obvious reasons), so I end up driving just south of it and then coming inside from there. This is still a fair distance away from the door, but at least no horizontal walking is required, and the car can still easily get away afterwards.
- The next building is laid out similarly to the first, but because I enter from a southwards angle this time, it is a lot more viable to go around the objective point in order to park right in front of the door.
- The second collectee goes to his car and escapes, and has to be chased. This car goes on a predetermined path, and the fastest way to get it to reach its destination is just to let it do its thing and not have anything touch it or get in its way. Since I have nothing else to do, I play around a bit, going in different directions (close to a mission failure by letting the car "escape"), driving in front of the car rather than properly chasing it, driving in a circle, and so on.
- When the collectee enters his destination building, an objective point appears. Since I had time to reach it in advance, I set up such that I'm on the very top right point of the objective point, to minimize driving time for the next target and minimize the distance to the door for cutscene speed.
- Since the next mission relocates the player elsewhere, I didn't need to care about how to leave the car in front of the boss' building; just making sure to reach it and enter the door as fast as possible.

Mission 3 - Street Race

- A simple mission; just follow directions until a lap is completed, and do so before the green car does.
- I steer a bit at the beginning to avoid having a car bump into me shortly into the race.
- Since I still need to advance the "continue" menu item after the mission ends, I don't have any end-of-input shenanigans to do here; just end the race as soon as possible so the continue prompt can be pressed as soon as possible. This brings the game back to the main menu, ending the run.

Thanks to natt for screwing around with a UZEM core in BizHawk, resulting in this. Thanks for watching!

Fog: A poor man's GTA, the run was pretty boring and not really entertaining. The technical qualities are sound, as usual. Accepting for Vault.
```java
package cn.wizzer.mqttwk.mqtt.common.message;

import cn.wizzer.mqttwk.mqtt.common.utils.StringUtil;

/**
 * Created by wizzer on 2018/5/9.
 */
public class MqttMessage {
    private final MqttFixedHeader mqttFixedHeader;
    private final Object variableHeader;
    private final Object payload;
    private final DecoderResult decoderResult;

    public enum DecoderResult {
        SUCCESS, FINISHED, FAILURE
    }

    public MqttMessage(MqttFixedHeader mqttFixedHeader) {
        this(mqttFixedHeader, null, null);
    }

    public MqttMessage(MqttFixedHeader mqttFixedHeader, Object variableHeader) {
        this(mqttFixedHeader, variableHeader, null);
    }

    public MqttMessage(MqttFixedHeader mqttFixedHeader, Object variableHeader, Object payload) {
        this(mqttFixedHeader, variableHeader, payload, DecoderResult.SUCCESS);
    }

    public MqttMessage(MqttFixedHeader mqttFixedHeader,
                       Object variableHeader,
                       Object payload,
                       DecoderResult decoderResult) {
        this.mqttFixedHeader = mqttFixedHeader;
        this.variableHeader = variableHeader;
        this.payload = payload;
        this.decoderResult = decoderResult;
    }

    public MqttFixedHeader fixedHeader() {
        return mqttFixedHeader;
    }

    public Object variableHeader() {
        return variableHeader;
    }

    public Object payload() {
        return payload;
    }

    public DecoderResult decoderResult() {
        return decoderResult;
    }

    @Override
    public String toString() {
        return new StringBuilder(StringUtil.simpleClassName(this))
                .append('[')
                .append("fixedHeader=").append(fixedHeader() != null ? fixedHeader().toString() : "")
                .append(", variableHeader=").append(variableHeader() != null ? variableHeader.toString() : "")
                .append(", payload=").append(payload() != null ? payload.toString() : "")
                .append(']')
                .toString();
    }
}
```
When it comes to the Swift programming language, the Swift Compiler is one of the most influential tools contributing to the creation of robust applications. Providing the backbone for Swift’s high-performance nature, the Swift Compiler is an integral component that every Swift developer should fully comprehend.

Section 1: Unpacking the Swift Compiler

The Swift Compiler is a vital toolset for the translation of Swift source code into efficient, executable outputs. Its chief role is to transform high-level, human-readable Swift code into low-level, machine-executable code.

Subsection 1.1: Anatomy of the Swift Compiler

The Swift Compiler comprises two main components:
- The Front End: performs syntax analysis and semantic analysis, and generates an Abstract Syntax Tree (AST).
- The Back End: converts the AST into executable machine-level code.

Subsection 1.2: The Swift Compiler’s Front End

The Front End handles the first vital step of the compilation process. It begins with lexing, or lexical analysis - identifying and classifying code into distinct “tokens”, each with a distinct meaning. These tokens then go through syntax analysis, where they are evaluated against Swift’s rules of grammar. Validated tokens are represented in a parse tree. The outcome is an Abstract Syntax Tree (AST), a simplified structure that discards nonessential elements such as white space and comments.

Subsection 1.3: The Swift Compiler’s Back End

The Back End translates the AST from the Front End into machine code. It uses LLVM, an open-source compiler infrastructure known for its modularity and reusability. LLVM transforms the AST into an Intermediate Representation (IR), optimizes it, and then generates machine code.

Section 2: Navigating Swift Compiler Errors

High proficiency in understanding and rectifying Swift Compiler errors is a valuable skill.
These errors provide critical insight into our code’s quality and function, enabling us to compose more stable, efficient, and clean Swift applications.

Subsection 2.1: Syntax Errors

Syntax errors occur due to violations of the language’s rules of grammar. The Swift compiler spots these errors via lexical and syntax analysis. A mismatched parenthesis or missing brace might trigger a syntax error that prevents successful compilation.

Subsection 2.2: Semantic Errors

Semantic errors typically result from logically incorrect code. Though syntactically correct, such code does not align with the language’s semantics or an operation’s intended outcome. Examples include attempting to perform an invalid operation on a data type or referencing a non-existent variable.

Subsection 2.3: Runtime Errors

Runtime errors transpire during the program’s execution and aren’t detectable during compilation. These errors often arise from illegal operations such as division by zero, null pointer dereferencing, or an array index out of bounds.

Section 3: Swift Compiler Optimization Techniques

The Swift Compiler employs an assortment of optimization techniques to enhance the efficiency of Swift applications.

Subsection 3.1: Loop Unrolling

Loop unrolling is a common optimization technique that improves loop performance by reducing loop overhead.

Subsection 3.2: Dead Code Elimination

Dead code, i.e., code that does not affect the program’s outcome, can unnecessarily slow down the application. The Swift compiler identifies and removes these portions, raising the code’s efficiency.

Subsection 3.3: Inline Expansion

The Swift compiler uses inline expansion to replace function calls with the function’s body. This strategy can significantly improve application speed by reducing the overhead of function calls.

Understanding the dynamics and intricacies of the Swift Compiler is pivotal in becoming a proficient Swift developer.
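Dead code elimination, as described in Subsection 3.2, can be illustrated with a toy sketch. This is a language-agnostic illustration written in Python (not how the Swift compiler is implemented): given a list of side-effect-free assignments and the set of variables the program actually uses, a backward pass keeps only the assignments whose results are (transitively) needed.

```python
def eliminate_dead_code(statements, live_outputs):
    """statements: ordered list of (target, operands) assignments, all
    assumed side-effect-free; live_outputs: names the program actually uses.
    Returns the statements with dead assignments removed."""
    live = set(live_outputs)
    kept = []
    # Walk backwards: an assignment is live if its target is needed,
    # in which case its operands become needed too.
    for target, operands in reversed(statements):
        if target in live:
            kept.append((target, operands))
            live.discard(target)
            live.update(operands)
    kept.reverse()
    return kept
```

For example, in the program `a = ...; b = f(a); c = g(a)` where only `b` is used, the assignment to `c` is dead and is dropped, while `a` survives because `b` depends on it.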
The Swift Compiler may initially appear complex, but understanding how it works, troubleshooting its errors, and leveraging its optimization techniques can unlock real mastery of Swift programming.
|Mr N. Weedy|N.Weedy@stokesleyschool.org|01642 718546|
|Mrs S. Metcalfe|email@example.com|01642 718546|
|Miss S. Dinsley|firstname.lastname@example.org|01642 718546|

We follow the OCR GCSE Computing course. The emphasis of this course is on computational thinking rather than just how computers work. Students sit a written exam, worth 60% of the final grade, and complete two pieces of coursework worth 20% each. The written exam is based on the following topics:
- Fundamentals of computer systems
- Computer hardware
- Representation of data
- Computer communication and networking
- Programming

Coursework task 1 (starting mid-October): An investigation on a particular issue related to computing, e.g. the use of a particular application or model. We spend at least 20 hours of class time (controlled assessment) completing this.

Coursework task 2 (starting mid-January): Completing a series of programming tasks. We have chosen to use Python as our language as it is easy to learn yet very powerful. Again, at least 20 hours of class time is devoted to this (controlled assessment).

For more information about the course, look on the OCR GCSE Computing website. In an ever-changing digital world, students need to be equipped with the skills that will enable them to be digitally competent and successful in whatever path they choose. Students can choose from OCR GCSE Computing and AQA GCSE ICT at key stage 4. Students have their own email address and access to the internet, through our filtered line, to carry out research for their work in school.
Introduction to ICT, Searching the Internet, Effects of ICT on Society, Spreadsheets, Scratch Programming, Collaborative Learning, Introduction to HTML, Creating Web Pages with Dreamweaver, Flash Animation, Excel for Science, GameMaker Introduction, How Computers Work, Using Photoshop, Programming with Python, Letter Writing, E-Safety, Systems and Networks. GCSE ICT: Key Stage 4 students cover a wide range of both practical skills and theory when working on the AQA ICT GCSE. Skills covered include spreadsheet design, databases, web design and using a blog, as well as the use of these in the digital world. Students are expected to contribute to a class blog and make good use of email and collaborative working. There are three units which make up the course: one exam (which can be taken on-screen or on paper) and two controlled assessments carried out in class. Unit 1 - Systems and Applications in ICT (1 hour 30 minutes exam - 40%). Topics covered: - Current and emerging technologies - Operating systems and user interfaces - Applications software - Word processing, DTP, web design and other presentation software - Graphics production and image manipulation - Spreadsheets and modelling software - Web browsing and e-mail - Web logs and social networking - Data logging and control software - Society's use of ICT - Collaborative working Unit 2 - The Assignment: Applying ICT (controlled assessment - 30%) - A scenario will be given and students are expected to use what they have learned to complete the assignment, which is an internal controlled assessment. This is set by the exam board and usually involves building a spreadsheet/database/website while working through the systems life cycle. Unit 3 - Practical Problem Solving in ICT (controlled assessment - 30%) - Students will be provided with several tasks based on education, work or the community and be expected to choose one to solve using ICT.
Skills learned during the first two terms will contribute to the development of a number of systems based around a real-world scenario. Students have both practical and theory lessons each week and work on all units during each term in parallel: - Term 1: - Unit 1 - Current and emerging technologies, operating systems and user interfaces, applications software, word processing, DTP, web design and other presentation software, graphics production and image manipulation, spreadsheets and modelling software, databases. Unit 2 - start of controlled assessment. Term 2: - Unit 1 - Web browsing and e-mail, web logs and social networking, data logging and control software. - Unit 2 - completion of controlled assessment. Term 3: - Unit 1 - Society's use of ICT, collaborative working. - Unit 3 - controlled assessment. After each of the topics above, students sit an end-of-topic test. Some topics are longer than others, but on average they sit one about every three weeks. The results of these, along with any homework tasks set, are used to monitor progress. We have a range of quality learning materials on The Learning Zone of our ILE, both made by us and from other sources. These include PowerPoints, worksheets, revision guides and videos. Any student wishing to look ahead or reinforce what they have covered in class has a range of options to choose from. How parents can help: students must have access to a computer at home with a good internet connection. It would help if they had full administration rights. Although specific guidance may not be given regarding the coursework tasks, you may discuss these and offer some ideas.
Thanks to Microsoft's Windows 7 commercials, the term "cloud" has moved into the mainstream. Most people understand that "cloud" refers to applications that stream from the Internet. But there's a lot more to it than that. The term "cloud computing" covers a bunch of technologies, which is why Forrester Research can say that the cloud was a $40.7 billion market in 2011 and will grow to a whopping $241 billion in 2020. A quarter-of-a-trillion-dollar market? Wowzers. But, truth be told, most folks still don't completely understand what the "cloud" is. So let's fix that ... What does the term "cloud" mean? The "cloud" is an umbrella term used for a whole bunch of things, most, but not all, having to do with getting software or computing resources delivered over the Internet as a service. These services are usually paid for on some kind of usage or subscription basis -- a certain dollar amount per resource (like data) consumed, or per month. Stop paying, and service is cut off. This is different from buying a software product and getting to use it forever. Where did the term come from? Back in the day, network diagrams used a cloud icon to indicate the public telephone network and later the Internet. At the time, it really meant "out there, out in the messy world, on someone else's systems, out of my control." So the cloud is really just another term for "Internet," right? No, not really. It is possible to use the Internet without using cloud services, and it is possible to be on a cloud without being on the Internet. For instance, you could use the Internet to download some software from an application developer, send an e-mail, and connect remotely to your files stored on your office server, all without using "cloud" services. What are the different technologies that are considered cloud? Ah. Here we get to the techie, wonky stuff. Essentially there are five buckets of things called cloud.
- Software as a Service (SaaS): This is an application delivered over the Internet as a subscription. It is not installed on a company's servers or on a person's PC. Salesforce.com is the granddaddy of this category, but other examples include things like Google Apps, Microsoft's Office 365, or the human resources suite Workday. When the term "cloud" is used for consumers, it typically means SaaS such as Dropbox, iCloud, Evernote, and so on. - Platform as a Service (PaaS): This is the next layer up, where you want to build your own cloud application but you rent everything you need, including the runtime platform like Java, Ruby or .Net. Examples of PaaS include Google App Engine, Microsoft Azure, and Salesforce.com's Heroku. - Infrastructure as a Service (IaaS): This is the most basic cloud of them all, where you rent the hardware (server with operating system, storage, and networking) and you upload your own applications. The difference between IaaS and old-school hosting is that you are sharing the hardware with other renters (in geek terms: it's a multitenant environment). You only pay for the computing resources you use. Amazon AWS is the biggest here, but Rackspace is another, and Linode is popular with Linux users. - Private clouds: Here's where things get tricky. Private clouds don't necessarily use the Internet. This means an enterprise has built its own version of a public cloud to use for itself. When enterprises remodel their IT systems to be like a multitenant environment, they can become more efficient. - Hybrid clouds: This means that a company is choosing to store some of its applications in its private cloud and using a public cloud for spill-over. A retail company could, for instance, rent extra space on an IBM retail cloud in December when transactions spike. In the old days, a company would have to buy more servers to be ready for those spikes, even though they would sit unused for 11 months. What's the big deal about cloud computing?
Because companies share the infrastructure, cloud computing makes it super cheap for anyone to access enormous computational resources. Rather than spending thousands to own your own computers and networks, you can rent all the power you want for however long you want it. This has led to a boom in innovation. Companies can grow really big before they need to invest in their own data centers (Groupon started on Amazon), or they can quickly expand by putting a new service in the cloud. Examples include Pinterest, which uses Amazon Web Services, and LivingSocial, which uses Rackspace. Does everyone love the cloud? Hardly. Enterprise IT folks are still a little suspicious of the cloud -- and the lack of control it represents. But they are getting over it. They are being won over by the number of cloud products, their low cost, and improved security and management controls.
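The spill-over arithmetic behind that December-spike example is easy to sketch. Here's a toy Python comparison -- all prices and server counts are invented purely for illustration, not real cloud pricing:

```python
# Toy cost model: own enough servers for the yearly peak,
# versus running a small base and renting extra only during the peak.
def yearly_cost_owned(servers, cost_per_server_per_month):
    # Owned hardware is paid for all 12 months, used or not.
    return servers * cost_per_server_per_month * 12

def yearly_cost_hybrid(base_servers, peak_extra, cost_per_server_per_month, peak_months):
    # Run a small private base year-round, rent burst capacity only while it's needed.
    base = base_servers * cost_per_server_per_month * 12
    burst = peak_extra * cost_per_server_per_month * peak_months
    return base + burst

owned = yearly_cost_owned(servers=10, cost_per_server_per_month=100)
hybrid = yearly_cost_hybrid(base_servers=4, peak_extra=6,
                            cost_per_server_per_month=100, peak_months=1)
print(owned, hybrid)  # the hybrid setup avoids paying for 11 idle months
```

The point is structural: the owned fleet pays for peak capacity all twelve months, while the hybrid setup pays for the burst only while it lasts.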
I can log in to the CMS on my local site running EPiServer 18.104.22.168. Edit mode works without any problems, but when I try to go to Admin mode I get this: External component has thrown an exception.System.Web.HttpCompileException (0x80004005): External component has thrown an exception. at System.Web.Compilation.AssemblyBuilder.Compile() at System.Web.Compilation.BuildProvidersCompiler.PerformBuild() at System.Web.Compilation.ThemeDirectoryCompiler.GetThemeBuildResultType(String themeName) at System.Web.Compilation.ThemeDirectoryCompiler.GetThemeBuildResultType(HttpContext context, String themeName) at System.Web.UI.Page.InitializeThemes() at System.Web.UI.Page.PerformPreInit() at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) at System.Web.UI.Page.ProcessRequest() at System.Web.UI.Page.ProcessRequest(HttpContext context) at ASP.episerver_cms_admin_default_aspx.ProcessRequest(HttpContext context) in c:\Windows\Microsoft.NET\Framework64\v4.0.30319\Temporary ASP.NET Files\root\6df7b224\256e77ca\App_Web_mtnzxpkp.6.cs:line 0 at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) Anyone know what causes this problem? I should mention that the application worked fine before but suddenly started to show this error message. I have tried restarting IIS, removing the ASP.NET temporary files, and even restarting the machine, but none of them helped. Thank you in advance. Did you manage to solve this problem? I'm running v22.214.171.124 and everything works in dev and test; an initial deploy uploaded to the production environment ran just fine. When publishing the latest code to production I get the same error as you.
Tried setting up a new site with production files and database, which worked out of the box. Is there any error message in the log files? I had the exact same error. EPiServer version 9.3.1, Windows Server 2012 R2. Deploy to production worked fine the first time and then stopped working two days later. A restart of the production server solved the issue. I would guess that temporary ASP.NET files are messing things up. Try deleting those... Happens from time to time... Cleanup of the temporary ASP.NET files did resolve the issue, only for it to arise again; the main problem was with a couple of Windows updates. Read more @ https://support.microsoft.com/en-us/kb/3118750
(Note: This post is somewhat of a story, but also has some useful code design info.) This allows you to use MyExample.com to reference your blog, instead of something like myexample.typepad.com. All of your posts and images use the original name, such as in search engine listings, on trackbacks and more. So if you change the name, all of your old links will break. You might lose a bunch of RSS feeds that used the old links. And also, all of the images in your posts will be missing! I ran into this issue for a client last week. A popular blog, with an average of 1,500 unique visitors per day and over 2,100 posts, was essentially broken. Not acceptable. Because of TypePad's inept support I was forced to find a solution, which can likely be adapted for use on other hosted blogging systems as well. The answer lies in some simple .htaccess code. Read on to find out more. First, a rant against TypePad. TypePad doesn't have live support. At all. Six Apart, the parent company, doesn't even maintain their own support forums. With no other recourse to solve the issue, I opened an online help ticket. We were told that the issue was a tough one, and would be looked at by technical support. After a week of back and forth, asking if there was any progress, TypePad finally responded that there was nothing at all they could do. I tried asking @sixapart on Twitter for help. No response whatsoever. Ouch! I called GoDaddy.com, where the domain name is registered, to see if their live support team had any ideas. While they were helpful and friendly as ever, they didn't see how to fix the problem. But that's when I myself realized how to do it. An .htaccess file is a text-only file that web servers read first, before any other pages on a site. Turns out it's a common way to fix issues that might occur when you switch your domain name! In TypePad, all blog posts live in a sub-folder of your main domain, like "my_example".
In this case, the original links, now broken, were www.myexample.com/my_example/my-post.html, and the new links, after changing domain mapping, were blog.myexample.com/my_example/my-post.html. Very similar links, and a simple replacement: just find anything with /my_example/ in its address and send it to blog. instead of www. To create an .htaccess file, all you need is a simple text editor. Open the file, type in your code, and save with the extension .htaccess. You might have to give the file a name, like a.htaccess, but just make sure to remove the preceding "a" before uploading to your web server. [More on .htaccess files here and here.] For this specific redirect, the code in the .htaccess file is the following: Redirect 301 /my_example/ http://blog.myexample.com/my_example/ It worked to fix all the old, cached, search-engine-indexed, mentioned-by-others posts' links. However, images need a separate line of code. In TypePad, all images you upload to their servers are stored not in your /my_example/ folder, but in a directory called /.a/ Why? Who knows. But all that means is we need another line in the .htaccess file: Redirect 301 /.a/ http://blog.myexample.com/.a/ Upload the file to the root level of your site, and done. Phew. Six Apart, please take note, and save other customers some anguish.
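For anyone adapting this, the two Redirect directives amount to a plain prefix rewrite. Here is a rough Python sketch of the same mapping, using the placeholder domains from this post; it only models the rules, since the actual redirecting is done by the web server reading .htaccess:

```python
# Model of the two "Redirect 301" rules: any path under the redirected
# prefixes moves from the www. host to the blog. host unchanged.
OLD_HOST = "www.myexample.com"
NEW_HOST = "blog.myexample.com"
REDIRECTED_PREFIXES = ("/my_example/", "/.a/")  # posts, and TypePad's image store

def redirect_target(url):
    """Return the new URL for a redirected path, or None if the rules don't apply."""
    prefix = f"http://{OLD_HOST}"
    if not url.startswith(prefix):
        return None
    path = url[len(prefix):]
    if path.startswith(REDIRECTED_PREFIXES):
        return f"http://{NEW_HOST}{path}"
    return None

print(redirect_target("http://www.myexample.com/my_example/my-post.html"))
```

Anything outside those two folders is left alone, which matches how the two-line .htaccess behaves.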
These summary notes have been lifted from the NED webpages. NED's ongoing mission is to provide a comprehensive and easy-to-use, multi-wavelength fusion of fundamental data for all known (cataloged and published) objects beyond the Milky Way. As new observations are published, they are cross-identified or statistically associated with previous data and integrated into a unified database to simplify queries and retrieval. Note, these high-fidelity cross-IDs are non-trivial¹. The Data Available: Available data include positions, redshifts, morphological and spectral types, sizes, photometry, images, spectra, distances, diameters, cross-IDs, associations, reference abstracts, and detailed notes. Derived quantities include Galactic extinction, velocity corrections, Hubble flow distances and scales, cosmological corrections, quick-look luminosities, and spectral energy distributions (SEDs). Updates to the public database occur approximately every three months, after periods of data entry, quality assurance, and testing. Many of the individual catalogs integrated into NED are available from CDS (Centre de Données astronomiques de Strasbourg). Links to External Cosmology and Extinction-Law Calculators: There are links to five different Cosmology Calculators that enable you to calculate various cosmological parameters, and to five different Extinction Calculators that enable you to calculate Galactic extinction. Unprocessed Catalog Sources versus NED Objects: Note NED's Search Objects, With Unprocessed Catalog Sources option allows a search similar to the "Near Position" search, but optionally returns unprocessed catalog sources from very large catalogs that have yet to be cross-matched with NED. A Status of S (catalog source) indicates an astronomical observation which has not yet been fully integrated into NED's representation of the hierarchy of the universe.
A Status of O (NED object) indicates a physical thing or a group of things in the universe, or a region of space, which has been cross-identified or associated by NED with one or more vetted catalog sources. The Status column returned by the Search Objects, With Unprocessed Catalog Sources service indicates the NED processing status. Note that when you return to NED in the future, you may find that some entries previously identified as S have been promoted to O or added as cross-identifications to other entries, due to new runs or refinements of NED's cross-matching algorithms. Current Object Counts in NED. World Wide Web: Automated access to NED's Web (http) services via computer programs and scripts is supported. Batch Mode: designed for searches that will typically return more than a few hundred objects. Using this mode simply involves submitting to NED via email a "batch form" containing a list of objects or positions. When NED has completed your batch job, you will receive an email message with information on using FTP to retrieve the results. NED will also process long lists of objects through its Batch Job option; the limit on the size of batch jobs is 3,000 objects. Objects can be queried: - By Name, - Near Name, or - Near Position (cone search), and - With Unprocessed Catalog Sources (to include very large catalog sources that are not yet cross-matched with NED objects). You may specify a search radius of up to 300 arcminutes (i.e. a 5 degree radius). NED VO/XML Services: NED web services use the HTTP GET protocol whenever possible, with query filters encoded as URL name-value pairs. To construct a conesearch URL use the format: - RA and DEC are in decimal degrees (J2000), and - SR (search radius) is in degrees. For example, to query objects within a 15 arcminute radius around M 83, use the URL: Results are returned in VOTable XML format. The main search program, nph-objsearch, has a number of options that affect the results. - The output format option.
When set to the value xml_all, it returns a VOTable containing nested tables ("table of tables"), each containing a specific type of source data. - Specific data types are also available separately by specification of xml_main (main source table), xml_names (source cross-IDs), xml_posn (source position, with uncertainties when available), xml_basic (Basic Data), and xml_extern (links to External Resources at the source position). - extend=no requests data for only the object name specified by objname. extend=yes also returns data for objects associated with the queried objname, for example, H II regions within a galaxy or members of a galaxy group. - Examples include search_type=Diameters, search_type=Redshifts, search_type=Notes, search_type=Positions. Types of NED Searches: NED will return only 50,000 objects with a Near Position search. Searches that are likely to return more than 50,000 objects are best done with NED Batch Jobs. Choose the format of your tabular output list: - preformatted HTML text, - an HTML table of all data for all the returned sources, - an ASCII bar-separated-variable table of the list of sources, - an ASCII tab-separated-variable table of the list of sources, - an XML table of the list of sources, - an XML table of the returned source names (cross-identifications), - an XML table of the returned source positions (equatorial B1950 and J2000, ecliptic B1950 and J2000, Galactic, and supergalactic), - an XML table of the returned source Basic Data, - an XML table of the returned source quantities derived from its redshift (if any), - an XML table of links to external archives and services with data for the returned source, - an XML table of all data for the returned sources. Note you can change the way in which redshifts are displayed. Search Objects, With Unprocessed Catalog Sources: This search allows you to search NED's master list of astronomical objects for entries near a given position.
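A query URL of the kind described above can be assembled with Python's standard library. The base URL and the `of` (output format) parameter name are my assumptions based on the nph-objsearch program named in these notes; verify both against NED's current interface documentation before relying on them:

```python
from urllib.parse import urlencode

# Assumed base URL for the nph-objsearch program mentioned above --
# check NED's own documentation for the current endpoint.
NED_OBJSEARCH = "http://ned.ipac.caltech.edu/cgi-bin/nph-objsearch"

def conesearch_url(ra_deg, dec_deg, sr_deg, out_format="xml_main"):
    """Build a cone-search URL: RA/DEC in decimal degrees (J2000), SR in degrees."""
    params = {
        "search_type": "Near Position Search",
        "RA": ra_deg,
        "DEC": dec_deg,
        "SR": sr_deg,
        "of": out_format,  # output-format parameter name is an assumption
    }
    return NED_OBJSEARCH + "?" + urlencode(params)

# M 83 sits at roughly RA 204.25, Dec -29.87 (J2000); 15 arcmin = 0.25 deg.
print(conesearch_url(204.25, -29.87, 0.25))
```

The only parts taken directly from the notes are the RA/DEC/SR conventions and the nph-objsearch program name.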
Search with a List of Positions: With near-position lists you can cut-and-paste up to 500 positions, one per line, of the object(s) you wish to search for into the "Input List Equatorial J2000 Positions or Object Names" box. You may choose a search radius of up to 30.0 arcsec. The output you can select includes: - Basic Data - Data Counts and Links. ¹ A galaxy pair resolved at 2 μm may be unresolved at 24 μm. Astrophysics makes sources look different as a function of wavelength; for example, in dusty starburst galaxies, centroids in the IR often do not match those in the UV. In addition, objects reside in a hierarchical Universe: galaxies contain components (AGNs, supernovae, star clusters, HII regions, etc.); galaxies occur in pairs, groups and clusters; and clusters string together in superclusters separated by vast voids. ↩
I hope this is the right forum. I'm currently working on creating MKA audios that contain an AAC stream, an Opus stream, and an SRT stream. The purpose is for people who are hearing impaired but still benefit from audio; they just need captions to help them make out some of the words. The AAC and Opus streams are the same content; it's just that AAC decoders are not included out of the box on some operating systems (those usually have Opus at this point). Anyway - figuring out Matroska tags took a little effort, but I can finally generate an XML file that, when added to the MKA, makes sense when I view it in VLC - I have one Tag with a TargetTypeValue of 70 that has metadata about the collection and one Tag with a TargetTypeValue of 30 that is specific to the track itself. But I want to also add replay gain data, and my understanding is that proper replay gain may vary from lossy format to lossy format, so I can't just add the replay gain data to the Tag node with TargetTypeValue 30. I'm guessing I have to add two more Tag nodes, one specific to the AAC and one specific to the Opus. That would also let me store the encoder information. But I have trouble with the Matroska tag documentation; it is very confusing. How do I create a Tag node that adds metadata specific to the AAC stream (always added first) and another one specific to the Opus stream (always added second)? Thank you for suggestions.
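Not an authoritative answer, but as I understand the mkvmerge XML tag format, a Tag is bound to one stream by putting a TrackUID element inside its Targets (the UIDs are reported by mkvinfo or `mkvmerge -J`, and are not the same as track numbers). A Python sketch that generates two such track-scoped Tag nodes; the element names follow the mkvmerge format as I know it, and the UIDs, encoder names and gain values are placeholders, so double-check everything against the Matroska tagging spec:

```python
import xml.etree.ElementTree as ET

def track_tag(track_uid, simple_tags, target_type_value=30):
    """Build one <Tag> scoped to a single track via Targets/TrackUID.

    Element names follow the mkvmerge XML tag format (verify against its docs);
    the TrackUID is what binds the Tag to one specific stream.
    """
    tag = ET.Element("Tag")
    targets = ET.SubElement(tag, "Targets")
    ET.SubElement(targets, "TargetTypeValue").text = str(target_type_value)
    ET.SubElement(targets, "TrackUID").text = str(track_uid)
    for name, value in simple_tags.items():
        simple = ET.SubElement(tag, "Simple")
        ET.SubElement(simple, "Name").text = name
        ET.SubElement(simple, "String").text = value
    return tag

tags = ET.Element("Tags")
# UIDs and values below are invented placeholders; the per-codec gains differ,
# which is exactly why each lossy stream gets its own Tag node.
tags.append(track_tag(1001, {"ENCODER": "qaac", "REPLAYGAIN_TRACK_GAIN": "-6.20 dB"}))
tags.append(track_tag(1002, {"ENCODER": "opusenc", "REPLAYGAIN_TRACK_GAIN": "-6.35 dB"}))
print(ET.tostring(tags, encoding="unicode"))
```

Your existing TargetTypeValue 70 and 30 Tags can stay as they are; these two extra Tags just add a TrackUID so the replay gain applies only to the stream it was measured for.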
USB is a differential transmission line, meaning the data is sent on both the D+ and D- wires at the same time. On the receiving side, the D- line is inverted, then summed with D+, and that's the signal that's being decoded. Since USB is digital, it's best to think of it in terms of data. USB cables have two common failure modes: 1. they cannot carry enough current on the Vcc and GND lines (this is often plainly obvious, with phones being charged very slowly, etc.); 2. improper shielding and/or improper twisting of D+ and D-, both of which introduce noise into the signal and hence may or may not corrupt data, in the sense that the differential signal is not decodable. Should failure mode 1 occur: when your playback device is powered by USB, it simply won't work, because there's not enough power going to your player, or amp, or whatever. Should failure mode 2 occur: if the data is undecodable because the levels aren't discernible, the host (USB is always host-controlled) will probe the USB device a couple of times, and if that doesn't succeed, the device will be rejected. Host controllers must cut power to the device should the enumerator be unable to probe the device. If you plug something like a mouse into a USB port very slowly, you'll see the mouse LED or laser switch on, then off, then on again. This is because while inserting the mouse your connection is intermittent. The host controller then cuts power. The controller will then usually give the device another chance to see if the connection has settled down, usually with a 1 s delay, to allow for someone to actually plug a device in, as USB is hot-pluggable by design. Should this fail a couple more times (usually 12 times, for some reason), the host will permanently cut power, and only retry after it senses that the device has been unplugged and re-plugged. The controller then assumes that another device has been plugged in.
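The invert-and-sum idea is easy to see in a toy numeric model. This is idealised: the noise here is perfectly common-mode (identical on both wires), which real cables only approximate:

```python
# Toy model of differential signalling: interference couples onto D+ and D-
# roughly equally, and the receiver's "invert D-, then sum" step cancels it.
signal = [1, 0, 1, 1, 0]               # bits as ideal voltage levels
noise = [0.4, -0.3, 0.2, 0.5, -0.1]    # common-mode noise hitting both wires

d_plus = [s + n for s, n in zip(signal, noise)]
d_minus = [-s + n for s, n in zip(signal, noise)]  # D- carries the inverted signal

# Receiver: invert D- and sum, i.e. take half the difference of the wires.
recovered = [(p - m) / 2 for p, m in zip(d_plus, d_minus)]
print(recovered)  # common-mode noise cancels (up to float rounding)
```

Poor twisting breaks the assumption that both wires pick up the same noise, which is why it degrades exactly this cancellation.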
Up to this point, the device hasn't even been attached to the bus by the enumerator. USB is a logically enumerated device bus; even though it says "bus" in the name, the bus is a logical bus, not a physical one like with SCSI, for instance. All devices are connected directly to the host controller, and only the controller may initiate data transfers (hence, host-controlled). Enumerated means that devices are only ever attached to the (logical) bus when the host controller is able to attach the device at the enumerator. I.e., you can't send data via USB "in the blind" like on an RS-232 line. When the host controller is unable to poll the device on the other end, even though "something" is plugged in, it won't show up to the enumerator, meaning it doesn't exist on the bus. Before data is sent to the attached device by the host, a set of commands is sent by the host controller. The device on the other end must either accept or reject packets based on their CRC checksums. The ACK or NAK replies are then sent back to the host, and on a NAK the host might re-transmit that packet. If you're using an unshielded, untwisted USB cable, it will basically require frequent re-transmissions, assuming the noise is not high enough to make the signal completely illegible to the controller. For instance, say you're running the bare D+ and D- wires of the USB cable next to a switch-mode power supply, and it induces just high enough levels for a host controller to incorrectly assume they're data. In this edge case the USB connection will seem to be very slow. In this case USB behaves much like twisted-pair network cables, as should be evident. Now, what if that data is PCM audio? Well, if the host controller cuts power and detaches the device from the enumerator, you'll hear nothing, as the device is essentially removed from the host - as saratoga noted correctly.
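The accept/reject cycle can be sketched as a toy retransmission loop. Real USB handshaking happens in controller hardware with its own CRC5/CRC16 polynomials; the CRC-32 used here is just what Python's standard library offers, so treat this purely as an illustration of the logic:

```python
import zlib

def send_packet(payload, channel):
    """Retry until the receiver ACKs a packet whose CRC checks out."""
    crc = zlib.crc32(payload)
    attempts = 0
    while True:
        attempts += 1
        received, received_crc = channel(payload, crc)
        if zlib.crc32(received) == received_crc:
            return "ACK", attempts  # receiver accepts the packet
        # NAK: the receiver rejects the packet, so the host retransmits

# A channel that corrupts the first two transmissions, then runs clean --
# like a noisy cable that eventually gets a packet through.
state = {"n": 0}
def flaky_channel(payload, crc):
    state["n"] += 1
    if state["n"] <= 2:
        return payload[:-1] + b"\x00", crc  # one corrupted byte -> CRC mismatch
    return payload, crc

print(send_packet(b"PCM audio frame", flaky_channel))
```

The data that finally gets through is bit-exact; what the noise cost was time, which is why a bad cable sounds like a slow connection rather than a distorted one.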
If communication over said USB line necessitates frequent retransmissions, the USB connection will seem to be slow. So either the sound will be choppy, like streaming from a very slow source, like on very slow internet, or it will buffer forever. Usually, timeouts on USB are rather short, though. So even though the USB will work on the enumerator side, it will give up after a couple of tries, and simply let the devices on either end handle error notification, etc. On controlled digital lines, whether it's USB, twisted-pair networking or what have you, introducing noise into the signal transmission simply means the line gets slower, down to the point where zero packets reach the other end before either side gives up requesting re-transmissions or re-transmitting itself. Long story short: what does a bad USB cable sound like? The same as a slow or choppy internet connection. Incorrect data is always rejected, never simply forwarded no matter what, similar to packets over TCP/IP networks. You either receive the packet, or the NIC rejects it. Same with USB: the host or the slave either accepts a packet when it passes its CRC, or rejects it. And if the host gathers too many such errors, it might simply decide to remove the device from the bus when control commands are rejected, too.
dead-interval (time; default: 40s) - specifies the interval after which a neighbor is declared dead. The interval is advertised in the router's hello packets. This value must be the same for all routers and access servers on a specific network. hello-interval (time; default: 10s) - the interval between hello packets that the router sends on the interface. The smaller the hello-interval, the faster topological changes will be detected, but more routing traffic will ensue. This value must be the same on each end of the adjacency, otherwise the adjacency will not form. You can use any subnet between your p2p links and the router - but choose a subnet that is only as big as necessary. So if you have used a /30 on the wireless interface of your p2p link, and the p2p device is the only one connected to the router port, then use another /30 between its ethernet interface and the router. Hello, I'm continuing my testing on OSPF and I have a new question for you! Can you help me understand what a good IP plan looks like? I read "Burning Bridges" here: http://www.mywisptraining.com/wp-conten ... Routed.pdf I understand I have to remove switches and add routers in their place, but... what address should I assign to each port of the router? I mean: for P2P links I use a /30, but what about the router? In the example I posted in the first post, what IPs should I assign to the last RB750 and the NetMetal? Thank you for helping me to understand. Excellent point on not flooding a bunch of LSAs. For APs it is best to avoid using OSPF to publish client-facing subnets actively. As clients connect and drop, it creates new LSAs across the whole network, so it's better not to specify the client device subnet in /route ospf net, and better to set the /rou ospf instance to publish connected instead. This way the AP's subnet gets published as a whole and not on a per-client-address basis.
If you have an ethernet interface on your Mikrotik bridged via a 3rd-party wireless link (e.g. SAF etc.) then you will most likely need a /29, which gives 8 IP addresses less 2 (network and broadcast), so 6 usable addresses: 1 for each Mikrotik and 1 each for your radios. As the radios may not support OSPF, set their gateway to point to the Mikrotik nearest to where you are connecting from. Network type=broadcast should be OK if the link is a true L2 bridge, but if you experience difficulty or instability then try setting the relevant interfaces to network-type=point-to-point.
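The subnet sizing above is easy to sanity-check with Python's ipaddress module (the example networks are arbitrary placeholders from RFC 1918 space):

```python
import ipaddress

# A /30 gives 2 usable hosts -- one per router end of a p2p link.
# A /29 gives 6 -- two Mikrotiks plus a radio on each side, with room to spare.
p2p = ipaddress.ip_network("10.0.0.0/30")
bridged = ipaddress.ip_network("10.0.0.8/29")

print(len(list(p2p.hosts())))      # usable addresses in the /30
print(len(list(bridged.hosts())))  # usable addresses in the /29
```

The "less 2" rule is just the network and broadcast addresses that `hosts()` excludes automatically.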
Chattering creatures with a superficial similarity to squirrels, the ratatosk have tiny tusks and fur that shimmers in a way that defies the surrounding light. Tiny celestial, chaotic neutral Armor Class 14 Hit Points 42 (12d4 + 12) Speed 20 ft., climb 20 ft.
|STR||DEX||CON||INT||WIS||CHA|
|4 (-3)||18 (+4)||12 (+1)||17 (+3)||10 (+0)||18 (+4)|
Saving Throws Wis +4, Cha +6 Skills Deception +6, Persuasion +6, Stealth +6 Damage Resistances bludgeoning, piercing, and slashing damage from nonmagical weapons Senses darkvision 60 ft., passive Perception 10 Languages Celestial, Common; telepathy 100 ft. Challenge 4 (1,100 XP) - Innate Spellcasting. The ratatosk's spellcasting ability is Charisma (spell save DC 14). It can innately cast the following spells, requiring no material or somatic components: - At will: animal messenger, message, vicious mockery - 3/day each: sending, suggestion - 1/day each: commune, mirror image - Skitter. The ratatosk can take the Dash, Disengage, or Hide action as a bonus action on each of its turns. - Gore. Melee Weapon Attack: +6 to hit, reach 5 ft., one target. Hit: 1 piercing damage plus 14 (4d6) psychic damage, and the target must succeed on a DC 14 Wisdom saving throw or be charmed for 1 round. While charmed in this way, the creature regards one randomly determined ally as a foe. - Divisive Chatter (Recharge 5-6). Up to six creatures within 30 feet that can hear the ratatosk must make a DC 14 Charisma saving throw. On a failure, the creature is affected as if by a confusion spell for 1 minute. An affected creature repeats the saving throw at the end of each of its turns, ending the effect on itself on a success. - Desperate Lies. A creature that can hear the ratatosk must make a DC 14 Wisdom saving throw when it attacks the ratatosk. If the saving throw fails, the creature still attacks, but it must choose a different target creature. An ally must be chosen if no other enemy is within the attack's reach or range.
If no other target is in the attack's range or reach, the attack is still made (and ammunition or a spell slot is expended, if appropriate) but it automatically misses and has no effect.

Sleek-furred Celestials. The ratatosk is a celestial being that is very much convinced of its own indispensable place in the multiverse. Its fur is sleek, and it takes great pride in cleaning and maintaining its tusks.

Planar Messengers. Ratatosks were created to carry messages across the planes, bearing word between gods and their servants. Somewhere across the vast march of ages, their nature twisted away from that purpose. Much speculation as to the exact cause of this change continues to occupy sages.

Maddening Gossips. Ratatosks are insatiable tricksters. Their constant chatter is not the mere nattering of their animal counterparts; it is a never-ending celestial gossip network. Ratatosks delight in learning secrets and spreading those secrets in mischievous ways. It's common for two listeners to hear vastly different words when a ratatosk speaks, and for that misunderstanding to lead to blows.

Tome of Beasts. Copyright 2016, Open Design; Authors Chris Harris, Dan Dillon, Rodrigo Garcia Carmona, and Wolfgang Baur.
OPCFW_CODE
Expectation of a function of pairs of random variables

For positive random variables $(X_1, Y_1)$ and $(X_2, Y_2)$, suppose that $(X_1, Y_1)$ and $(X_2, Y_2)$ have the same distribution and (the two pairs) are independent. Also suppose that $E[Y_1|X_1] = \theta X_1$. Let $Z=\frac{Y_1 + Y_2}{X_1+X_2}$. Find $E[Z]$.

Solution attempt: Using the Law of Iterated Expectations (LIE), we have that $E[Y_1]=\theta E[X_1]$. We can also write $Z=\frac{Y_1 + Y_2}{X_1+X_2}$ as $\frac{Y_1}{X_1 + X_2} + \frac{Y_2}{X_1 + X_2}$. So, $E[Z]=E[\frac{Y_1}{X_1 + X_2} + \frac{Y_2}{X_1 + X_2}] = E[\frac{Y_1}{X_1 + X_2}] + E[\frac{Y_2}{X_1 + X_2}]$. Now, I tried to use LIE again to get: $E[Z] = E_{X_1+X_2}E[\frac{Y_1}{X_1 + X_2} | X_1+X_2]+... = E_{X_1+X_2}[\frac{1}{X_1+X_2}E[Y_1|X_1+X_2]] +...$ Now I have no idea what to do. Can I still treat the $\frac{1}{X_1+X_2}$ as a constant and take it out? I don't think so. How do I proceed from here? Any help appreciated!

Use LIE three times, then simplify using the independence of the two pairs with respect to each other. $\begin{align} \mathsf E[Z] & = \mathsf E_{X_1}\left[\mathsf E_{X_2\mid X_1}\left[\mathsf E_{Y_1\mid X_1,X_2}\left[\mathsf E_{Y_2\mid X_1,X_2,Y_1}\left[\frac{Y_1+Y_2}{X_1+X_2}\middle| X_1,X_2,Y_1\right]\middle| X_1,X_2\right]\middle| X_1\right]\right] \\[2ex] & = \mathsf E_{X_1}\left[\mathsf E_{X_2\mid X_1}\left[\mathsf E_{Y_1\mid X_1,X_2}\left[\frac{Y_1+\mathsf E_{Y_2\mid X_2}[Y_2\mid X_2]}{X_1+X_2}\middle| X_1,X_2\right]\middle| X_1\right]\right] \\ & \vdots \\[1ex] & = \mathsf E_{X_1}\left[\mathsf E_{X_2}\left[\frac{\mathsf E_{Y_1\mid X_1}\left[Y_1\mid X_1\right]+\mathsf E_{Y_2\mid X_2}\left[Y_2\mid X_2\right]}{X_1+X_2}\right]\right] \\ & \vdots \end{align}$ Can you take it from here?

Hint: Don't split the numerator. Use both $E[Y_1|X_1] = \theta X_1$ and $E[Y_2|X_2] = \theta X_2$.

Ok...so I would have $E_{X_1+X_2}[\frac{1}{X_1+X_2}E[Y_1+Y_2|X_1+X_2]]$ but then is $E[Y_1+Y_2|X_1+X_2]=2\theta E[X]$?
I'm sorry, you'll have to figure out the rest on your own. "Don't split the numerator." Sorry but this is rather moot, one can split or not split (and, sooner or later, it is simpler to split, I believe). @Did In this case you have to resist the temptation and not split the numerator. This way the numerator and denominator cancel and leave the answer $\theta$. Of course, you can split and join back, but that would be a bit pointless. Rereading the question, I have to disagree with your comment and with the hint in your answer. For example, to know $E(Y_1\mid X_1)$ is not enough to conclude, one needs to identify $E(Y_1\mid X_1,X_2)$. And to do that, splitting is simpler... Or, one masters conditioning well enough to see right away what $E(Y_1+Y_2\mid X_1,X_2)$ is, but then one would not ask the question in the first place.
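As a numerical sanity check on the answer $\theta$ (a hypothetical simulation, not part of the original thread): take $X$ exponential with mean 1 and $Y \mid X$ exponential with mean $\theta X$, so that $E[Y|X] = \theta X$ holds by construction. A Monte Carlo estimate of $E[Z]$ should then land near $\theta$:

```python
import random

random.seed(0)
theta = 0.7
n = 200_000

total = 0.0
for _ in range(n):
    # Two i.i.d. pairs (X, Y) with E[Y | X] = theta * X by construction.
    x1 = random.expovariate(1.0)
    x2 = random.expovariate(1.0)
    y1 = random.expovariate(1.0 / (theta * x1))  # exponential, mean theta*x1
    y2 = random.expovariate(1.0 / (theta * x2))  # exponential, mean theta*x2
    total += (y1 + y2) / (x1 + x2)

estimate = total / n
print(estimate)  # should be close to theta
```

Note the estimate converges to $\theta$ itself, not $2\theta E[X]$: conditionally on $(X_1, X_2)$ the numerator has mean $\theta(X_1+X_2)$, which cancels the denominator exactly.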
STACK_EXCHANGE
Want to improve your risk assessment? Identify, then question, the constants in your world.

A risk is a potential change that carries consequences. In order to get ahead of a risk, you have to develop a nose for the kinds of changes that might happen in your world. Exploring “what if?” scenarios is one tool to surface those possible changes. A special kind of “what if?” is to question a constant. Or, more specifically, to question something that you assume is a constant. This thought experiment can shed light on possible changes long before they happen, which can broaden your understanding of your risk exposure.

A constant is something that is held as unchanging. We see constants in software development, business rules, and scientific experiments. Sometimes they are facts, such as the speed of light, the acceleration of gravity, or the number of hotel stays required to achieve a higher loyalty status. Other times, they are used as a convenience to simplify calculations or planning: “for the purposes of this exercise, let’s hold the number of customers at 10,000.”

In the real world, constants … sometimes aren’t. People change jobs. Housing prices fall. As do nations. And, much to the detriment of predictive models, a pandemic may suddenly and dramatically change consumer spending habits in a way that invalidates years’ worth of historical training data. Assuming something won’t change – treating it as a constant – is a way to lull yourself into a false sense of security.

Let’s say that you’ve built a system that performs some calculations. You’ve probably defined some numeric constant somewhere, and the system passes that value into various formulas as part of its daily operation. Have you ever tested different values of that constant? If you haven’t, you can start now. Change the value of the constant. When you rerun the calculations, what other changes do you see? And what impact does that have on the system as a whole?
Testing that kind of change just once can be eye-opening. It would help even more to test with a variety of values. Instead of picking new values by hand, you could build a simulation which tests the system on a wide range of randomly-chosen values of the no-longer-a-constant.

Even in a simulation, you still have some choice over the definition of “random.” We can borrow the idea of a random variable from statistics. Unlike a variable in software, which holds a single value, a random variable represents an entire statistical distribution. It returns a different value, from the same statistical family, each time you call it. You don’t know exactly what number you’ll get but you have a rough idea of the scope and “shape” of the possible values. Most people will default to a Gaussian (normal) distribution for testing the no-longer-constant: “it should still be reasonably close to 7.5, but it may vary just a bit. So let’s set the mean to 7.5 and choose a very small standard deviation.” You could also pick from a uniform distribution, in which there’s equal probability of any value within a given range. You have as many choices as there are statistical distributions, really, so you can get creative.

Another flavor of a constant is to assume a fixed number of possible outcomes, or a fixed range of input values. Consider the statistics textbook classic of rolling dice. A die has six sides, each of which has an equal probability of coming up every time you roll. This is an easy way of splitting workloads into six different, balanced queues for processing. Sort of. It’s possible for the die (or an equivalent random-choice system) to be artificially biased. A loaded die will return one side more often than any other. A sorting system that relies on an unbiased roll is at the mercy of honest dice. If you never validate that assumption by checking that queue loads are roughly equivalent, then your work-processing system will become imbalanced.
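Both ideas above can be sketched in a few lines of Python (all names and numbers here are made up for illustration): draw the no-longer-constant from a distribution instead of hard-coding it, and validate the assumption that a die-based queue assignment actually stays balanced:

```python
import random

random.seed(1)

def system_output(constant):
    # Stand-in for "some calculations" that depend on the constant.
    return constant * 2 + 1

# 1. Treat the constant 7.5 as a Gaussian random variable and watch
#    how far the system's output moves across many draws.
outputs = [system_output(random.gauss(7.5, 0.5)) for _ in range(10_000)]
spread = max(outputs) - min(outputs)

# 2. Roll a (secretly loaded) die to assign work to six queues, then
#    check whether the queue loads really are roughly equivalent.
weights = [1, 1, 1, 1, 1, 3]  # loaded: side 6 comes up 3x as often
counts = [0] * 6
for _ in range(60_000):
    side = random.choices(range(6), weights=weights)[0]
    counts[side] += 1

imbalance = max(counts) / min(counts)
print(spread, imbalance)
```

With a fair die the imbalance ratio hovers near 1; here the loaded side pushes it toward 3, exactly the kind of drift a queue-balance check would catch.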
Similarly, consider a system that can handle any mix of results from the dice, but still expects integer values in the 1 - 6 range. What happens, for example, when that system suddenly gets a 7? Can your code handle that? Maybe some component upstream passes in a 3.5. Will downstream code react poorly because this is not an integer value? Or will it round it up to 4, thereby obscuring the problem in the upstream component?

This may seem like a trivial exercise, but the idea of assuming a fixed range of values has caused large-scale trouble in financial scenarios. Consider the Black-Scholes formula for options pricing, which economist Paul Samuelson critiqued as follows: “The essence of the Black-Scholes formula is that you know, with certainty, not what the deal of the cards will be but what kind of universe is being sampled, which gives you the assumption of the log-normal process.” (Excerpt from When Genius Failed, p70)

It’s wise to test what values your systems can handle, when possible. Barring that, you can develop constraints to reject unexpected values before they make it into any calculations. Some scenarios may require that you establish alert systems, so you know when the system is operating outside of its norms.

I’ve been exploring this idea in terms of numbers, but the notion of challenging your constants also applies in the physical world. As with the numeric examples, treating your life’s constants as matters subject to change will make you more adaptable and less prone to surprises. Your office location, who heads your company, the legality of your business model, whether people will be able to enter the office to work… How many of those do you tacitly assume will never change? And what happens when they do? Even by just a little bit? Exploring this can seem daunting but it takes just two steps to start.
First, replace “always” with “most likely” and “never” with “shouldn’t.” With that change in your vocabulary, you’ll train yourself to accept and work through “what else?” kinds of questions. Next, having mapped out other possibilities, ask yourself: “how will I know when the situation has changed?” Knowing what can happen is of little value if you can’t detect it in time. Exploring these questions – challenging your life’s constants – is the first step to uncovering risks. And uncovering risks is the first step to mitigating them.
OPCFW_CODE
A dictionary is needed to learn any language. English learners must know how to use a dictionary to save time and learn English more quickly and easily. So I am going to give you some points about how to use a dictionary in English.

A dictionary is a resource (a book, e-book, or electronic tool) that contains lists of words in a language (typically in alphabetical order) and gives their meanings, or equivalent words in one or more other languages. It also provides information about pronunciation, origin, usage, and more, so by searching through it you can find different meanings, collocations, examples of use, standard pronunciation, and much more. Most dictionaries share some common parts, and these parts help you get information about every single word. If you want to know what this information is exactly, and how to use a dictionary in English, read the following points and guidelines.

Most English dictionaries contain the following information, so keep these in mind to know how to use a dictionary in English: 1. Meanings and definitions of words 2. Phonetic alphabet 3. Stress marks 4. The grammar of the new word 5. Examples (usually sentences) 6. Formal or informal usage labels 7. Pictures of words (for some special words)

Before you choose a dictionary, you need to be sure that you have chosen the right one for your needs, and to know all the different types of information the dictionary contains and how to use it correctly. A good dictionary is not just a list of isolated words with their meanings or translations. Actually, an English word has no definite meaning in isolation; the meaning only becomes apparent when the word is used in context.
So a good dictionary must mention all possible meanings of a word and give examples for each of these meanings, and it should be based on the analysis of words in real texts and communication. Here are some tips about using a dictionary:
- Find the best dictionary for yourself based on your needs. It is better to use a monolingual English dictionary rather than a bilingual one, because you can learn English words better and sooner. Some words in one language cannot be translated exactly into another language, which causes problems in the English learning process, while using a learner's dictionary is easy and uncomplicated.
- All words are arranged in alphabetical order in every dictionary, so knowing how to spell a word makes it much easier to find the word you are looking for.
- Learn the new words you need, not all of them, because learning everything is impossible and takes a lot of time and energy.
- Read the introduction of the dictionary to learn how to use it. It explains how entries and information in the dictionary are arranged. Reading this part at the beginning of the dictionary will show you how to find words and how to use the information you are looking for.
- Learn the dictionary's abbreviations. Most dictionaries use abbreviations in their definitions; for example, "adj." stands for "adjective". You need to know these abbreviations to understand the definitions of every word. Dictionaries that use abbreviations introduce them in the introduction.
- Learn the pronunciation guide. Without understanding the pronunciation guide, it can be difficult to pronounce the word you are looking for, so you must know the pronunciation symbols to make pronunciation much easier for you.
- Electronic dictionaries are a good choice for ESL students and English learners. They hold a lot of data in a small space.
You can find native-language equivalents and explanations, as well as pronunciation, definitions, and example sentences in English. Paper dictionaries are hard to transport, while electronic and software dictionaries are easy to carry and most of them have audio pronunciation.
- English learners should use an updated dictionary to cover words newly added to the language. So it's a good idea to upgrade your dictionary or use an up-to-date one so that you have access to the latest words added to the dictionary.
- Most traditional dictionaries have online editions, which can help ESL students and English learners. If you have access to the internet, online dictionaries can help you a great deal; they contain very up-to-date information for every word.
- Many countries have their own native dictionaries that might be more helpful than other dictionaries, such as the Oxford dictionary in England, Webster's dictionary in the US, and the Macquarie dictionary in Australia.
OPCFW_CODE
glb_to_rank for large number of receivers

The glb_to_rank function in distributed is one of the remaining computational bottlenecks for sparse object coordinate distribution.

Can you give more detail? Any profiling information?

Regarding the slowdown we have with the thousands of receivers in the examples: I spent some time on VTune because I was working on some clusters today, but then I remembered that the bottleneck is in Python-land. I attach 2 files here for running:

mpirun -n 2 python3 -m cProfile -s time examples/seismic/acoustic/acoustic_example.py -d 500 500 50 --tn 10

on my local laptop (I will do better machines as well, though I think the problem is obvious). (edited)

File cpu2.log was produced from running the default example (TOP 5 by time):

256889387 function calls (256374925 primitive calls) in 436.819 seconds

Ordered by: internal time

ncalls          tottime  percall  cumtime  percall  filename:lineno(function)
4000032         48.857   0.000    245.024  0.000    sparse.py:627(<genexpr>)
5000411         40.648   0.000    170.159  0.000    data.py:401(_index_glb_to_loc)
6000757         40.071   0.000    109.028  0.000    data.py:342(_normalize_index)
5000390         32.184   0.000    327.336  0.000    data.py:189(__getitem__)
12190115        31.820   0.000    39.914   0.000    utils.py:31(as_tuple)

11366072 function calls (10851759 primitive calls) in 15.949 seconds

Ordered by: internal time

ncalls          tottime  percall  cumtime  percall  filename:lineno(function)
3               0.689    0.230    2.642    0.881    operator.py:583(apply)
5958            0.513    0.000    0.516    0.000    {built-in method numpy.array}
12              0.378    0.032    0.378    0.032    {method 'fill' of 'numpy.ndarray' objects}
982618/982233   0.338    0.000    0.415    0.000    {built-in method builtins.isinstance}
313086          0.274    0.000    0.388    0.000    random.py:250(_randbelow_with_getrandbits)

elastic_mpi_profile_rank0.pdf from

DEVITO_LANGUAGE=openmp OMP_NUM_THREADS=8 tmpi 2 python benchmark.py run -P elastic -op forward -d 492 492 492 -so 12 --tn 50 --autotune off --dump-norms "/tmp/norms0.txt"

on hero (my workstation), on top of devito 976fda2a2

proposal:
coordinates become immutable; this is a potentially invasive change. Alternative: a dirty flag, to avoid recomputing _dist_datamap if the coordinates haven't changed.
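The dirty-flag alternative could look roughly like this (a hypothetical sketch; the class name and the placeholder map computation are made up, and the real _dist_datamap in Devito is far more involved):

```python
class SparseCoordinates:
    """Caches an expensive map derived from coordinates, recomputing it
    only when the coordinates have actually been modified (dirty flag)."""

    def __init__(self, coords):
        self._coords = list(coords)
        self._dirty = True          # cache is stale until first access
        self._dist_datamap = None

    def set_coordinate(self, i, value):
        self._coords[i] = value
        self._dirty = True          # invalidate the cached map

    @property
    def dist_datamap(self):
        if self._dirty:
            # Stand-in for the expensive glb_to_rank-style computation.
            self._dist_datamap = {i: hash(c) % 4
                                  for i, c in enumerate(self._coords)}
            self._dirty = False
        return self._dist_datamap
```

Repeated reads then hit the cache, and a coordinate write triggers exactly one recomputation on the next access, which avoids the per-access Python overhead showing up in the profile above.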
GITHUB_ARCHIVE
Code & Development | Drupal Government Day

San Francisco City and County have a growing set of internal solutions that present Drupal dashboards. This session will show how we deploy dashboards for business users, for deployment in both intranet and web-facing systems. The types of dashboards we will be looking at include:
- IT procurement dashboard
- Tax Credit Request Management dashboard
- Entertainment Permits dashboard
- Discrimination Reporting dashboard

This session will focus on the management of dashboards, specifically how we deploy multiple dashboards for users with varying permissions. The data capture and workflow are not part of this demonstration. The use case is very much about using Drupal inside the organisation for delivering subsites for specific business needs.

Get the latest scoop on DC.Gov, the District’s official web portal. In 2009, the Office of the Chief Technology Officer embarked on an ambitious project to modernize DC.Gov. Its goals included moving off an antiquated CMS, migrating over 100 sites to a new information architecture that is more citizen-centric, implementing a new design, bridging the digital divide through the deployment of a mobile platform, facilitating the use of Web 2.0 features, and making the creation and publication of content easier and more efficient by District agencies. In 2011, we adopted Drupal 7, moving away from the commercial CMS that was the original solution.

There has been a lot of focus on Drupal as a content management system and platform for better serving citizens online. Drupal also has a lot of capabilities for helping government work better behind the scenes. This presentation will cover a variety of Drupal modules which relate to applications for use within an organization, from tools for better communication and identification of areas for improvement to building your own custom applications.
- Ideation tool
- Open Atrium
- Walk through building a small application to track a process

Displaying data with a map can help people understand how it all fits together and how it can directly impact them and where they live. Learn about the open source tools that can make your data come alive. We'll demonstrate some of the current tools as well as discuss the architecture used in the recently launched County Health Rankings site.
- Tools and technologies to create maps
- Resources to help visualizing your data
- Common challenges you may face

We'll take an in-depth, technical look at the challenges of migrating external data into Drupal. Working from a live example, we'll use Drupal 7's Migrate module to pull data into a Drupal site. This will be a technical session designed to help you solve your own data migration problems. Topics will include:
- Migration strategies for Drupal 7.
- Why the Migrate module rocks.
- The Migrate UI.
- Preparing data for migration.
- Defining source data for migration.
- Defining target objects for migration.
- Migrating nodes, users and taxonomy terms.
- Handling Drupal fields.
- Extending Migrate module.
- Using Drush with Migrate.

The session will examine a real-world scenario involving an Oracle-to-Drupal migration.

Live interactive demonstration of the open-source-based Liferay Portal, plus a Q&A session. Discussion of a low-cost, feature-rich, enterprise web platform for building organizational solutions that deliver immediate results and long-term value. TCOs tend to be 1/2 to 1/3 the cost of more "proprietary" solutions. Liferay Portal ships with broad product capabilities to provide immediate return on investment:
- Content & Document Management with Microsoft Office® integration
- Web Publishing and Shared Workspaces
- Social Networking and Mashups
- Enterprise Portals and Identity Management

Liferay has no stack agenda.
It runs on your existing application servers, databases and operating systems to eliminate new spending on infrastructure.

Have you been hearing a lot about responsive design? Are you looking to get your feet wet? This session will provide a walkthrough on setting up your own subtheme and working with the 960 and responsive features of the Omega 7.3 base theme in Drupal 7. This session assumes a basic familiarity with Drupal site building, and the main focus is on creation and configuration of your Omega subtheme.

This session will describe how to incorporate BIRT reports into Drupal. BIRT (Business Intelligence Reporting Tools) is an open source business intelligence tool that produces robust reporting from almost any type of data source. In this session you will learn how to: 1) Install and use the BIRT Report Designer and BIRT Runtime Engine 2) Install and use JavaBridge to allow Drupal to work with the BIRT Runtime Engine 3) Install and use the BIRTOS module for Drupal 4) Create BIRT reports that will display on Drupal

You have great content on your agency’s website, but can visitors find what they’re looking for? Site search is a strategic asset that you need to make sure the public can easily find your agency’s official information. The panel will discuss how USASearch makes it easy for you to offer a commercial-grade search experience on any government site at no cost (that’s free). During this case study, the panel will discuss how you can leverage USASearch and its Drupal module to deliver fast, relevant search results. They will also provide a live demo of current Drupal sites (e.g., Commerce.gov and WhiteHouse.gov) using USASearch. Attendees will: 1) understand the importance of providing fast, relevant site search; and 2) learn how to leverage USASearch and its Drupal module to provide site search on their government site—at no cost.
OPCFW_CODE
eliot.miranda at gmail.com Thu Mar 10 17:02:39 UTC 2016

On Wed, Mar 9, 2016 at 5:27 PM, Florin Mateoc <florin.mateoc at gmail.com> wrote:

> On 3/9/2016 8:23 PM, Eliot Miranda wrote:
> > Hi Florin,
> > I believe the correct fix is for ObjectMemory to decompose
> > fetchLong64:ofObject: into two 32-bit reads unless BytesPerWord = 8. I'll
> > commit asap (which is once I have 64-bit small float tagging converted).
> > But your fix should keep you going until then.
> > _,,,^..^,,,_ (phone)
>
> Hi Eliot,
> I don't understand how two 32-bit reads can take care of 5-byte long
> largeIntegers, but you know best (usually :))

Because in V3 any object occupies some number of 32-bit words, zero padded. So a 5-byte large integer is actually a 4-byte header followed by an 8-byte unit whose most significant 3 bytes are always zero. In Spur, any object occupies some number of 8-byte words, so a 5-byte integer has an 8-byte header followed by an 8-byte unit, but a 9-byte integer occupies 24 bytes (8-byte header, 16 bytes data). So in V3, fetching 64 bits from a 5- to 8-byte large integer must be done in two reads because objects are only aligned to a 4-byte boundary, but in Spur it can be done in a single 64-bit read because all objects are aligned on an 8-byte boundary.

> >> On Mar 9, 2016, at 1:53 PM, Florin Mateoc <florin.mateoc at gmail.com> wrote:
> >>> On 3/9/2016 3:17 PM, Florin Mateoc wrote:
> >>> Hi again,
> >>> I think I found the bug: in method InterpreterPrimitives>>signed64BitValueOf: there seems to be an assumption
> >>> (mentioned in the method comment) that (on 32-bit machines) largeIntegers have to be either 4 or 8 bytes.
> >>> In this case we get a 5-byte largeInteger, so we get the error. What I don't understand is where does this assumption
> >>> come from, because it does not seem limited to this method.
> >>> Also note that on BigEndian machines the code does not act upon this assumption, so it would not fail.
> >>> Actually, I suspect that the assumption comes from "generalizing" the 32-bit one, since the methods seem to be copied
> >>> and pasted.
> >>> For the 32-bit variant, the comment stated that "The object may be either a positive SmallInteger or a four-byte
> >>> LargeInteger". But in this case it was correct, anything less than 4 bytes would not be a LargeInteger. When moving to
> >>> 64-bit, the same does not hold true. We can have largeIntegers with 4, 5, 6, 7 or 8 bytes fitting in 64 bits.
> >>> Also, speaking of BigEndian, it seems that, in the same class, the methods #magnitude64BitValueOf: and
> >>> #positive64BitValueOf: do not take care of the BigEndian case.
> >>> Cheers,
> >>> Florin
OPCFW_CODE
August 16, 2020, 6:32am

How do you change the focus style for the button matrix of a tabview?

What MCU/Processor/Board and compiler are you using?

What LVGL version are you using?

What do you want to achieve? Change the color of the focused button when the tab is selected.

What have you tried so far? I have tried the code below, and the button pressed state color changes, so I think the coding is right. But the focused state color does not change. What am I missing?

Code to reproduce

static lv_style_t style8;
lv_style_set_bg_color(&style8, LV_STATE_FOCUSED, LV_COLOR_GREEN); // FOCUSED is not working ???
lv_style_set_bg_color(&style8, LV_STATE_PRESSED, LV_COLOR_RED);
lv_obj_add_style(tabview, LV_TABVIEW_PART_TAB_BTN, &style8);

I looked through the source. It looks like the tabview uses the “checked” feature of the button matrix on selected tabs, so I think you’re looking for LV_STATE_CHECKED. I’ve submitted a pull request to add this to the docs.

August 16, 2020, 9:22pm

Hello, thanks for the quick response. But I have tried LV_STATE_CHECKED and it does not work either. I still only get the red pressed state. Also I added a default state below and that is not working either.

lv_style_set_bg_color(&style8, LV_STATE_DEFAULT, LV_COLOR_CYAN); // DEFAULT is not working ???

I have also tried adding this style to the tab and the tabview with no luck. But the border width and border color are working when added to the tabview LV_TABVIEW_PART_TAB_BTN. Any other suggestions?

August 17, 2020, 8:50pm

Do you think this is a bug?

August 18, 2020, 12:43am

I have tried a lot of things now. It all works if I just create a BTNMATRIX. I can get the checked state to display the correct color.
It just will not work in the tabview. I have even tried to run the same code that worked on the BTNMATRIX by setting it directly on the tabview's btnmatrix, with ext->btns, but that does not work either. I have also checked that the btnmatrix button 0 is in the checked state. What is different about the btnmatrix in the tabview?

August 18, 2020, 12:56am

Also, I'm running lv_arduino which has v7.0.2, and I noticed the current version is 7.3; could this be the issue? If so, when will lv_arduino be updated?

The coloring issue: it happens because the bg_opa of the tab buttons is transparent by default (except in the pressed state). So just add this line:

lv_style_set_bg_opa(&style8, LV_STATE_CHECKED, LV_OPA_COVER);

From now on, you can use lvgl directly as an Arduino library.

August 19, 2020, 8:56pm

Thank you very much! It works again.
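Putting the thread's resolution together, a consolidated fragment might look like the following (based on the LVGL v7 API used in this thread; `style8` and `tabview` come from the original post, and this is only a fragment, not a standalone program):

```c
static lv_style_t style8;

lv_style_init(&style8);
/* The tab buttons' background is transparent by default (except when
   pressed), so a bg_color alone has no visible effect: make the
   background opaque for the state you are styling. */
lv_style_set_bg_opa(&style8, LV_STATE_CHECKED, LV_OPA_COVER);
/* The selected tab uses the CHECKED state of the button matrix. */
lv_style_set_bg_color(&style8, LV_STATE_CHECKED, LV_COLOR_GREEN);
lv_style_set_bg_color(&style8, LV_STATE_PRESSED, LV_COLOR_RED);

lv_obj_add_style(tabview, LV_TABVIEW_PART_TAB_BTN, &style8);
```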
OPCFW_CODE
Development of enterprise applications using Java technologies is not for the faint-hearted. Writing to the J2EE specs is proving to be complex, difficult, and tedious - slowing down advanced Java developers and creating a barrier to entry for many mainstream developers. Advanced Java developers are in short supply, and even among them experience with EJB development is rare - slowing time-to-market for business applications and challenging application reliability and performance. To solve this problem we'd ideally want to simplify Java development to allow developers of all levels to build reliable, high-performance components and provide them with a framework for delivering J2EE-compliant business applications.

Compuware Corporation believes that it has built such a solution with OptimalJ, a new breed of development environment that enables the rapid design and development of J2EE business applications. Using OptimalJ, developers can generate complete working applications directly from a visual model, bypassing many of the routine coding tasks associated with EJB development. Design patterns implement best practices for architecture and coding, and an active synchronization feature keeps the code and model in step and up-to-date, allowing application changes at any stage of the development life cycle. The promise is that OptimalJ, by simplifying the development of J2EE applications, will enable developers of all experience levels to produce reliable applications in a fraction of the time it would take using current development tools. So how does it do this? Let's step through the features of OptimalJ.

Rapid Enterprise Java Development

Developers use OptimalJ to interact with a visual model of the application that can either be imported from other modeling tools using the XMI or DTD interfaces or built from scratch with the visual modeler.
OptimalJ uses this model to generate the architectural submodels and even the working code needed to implement a complete application. Simple graphical windows, editors, and wizards walk developers through each stage of the design, generation, and deployment, reducing the time spent on tedious implementation tasks. In addition to its model-based interfaces, the integrated development environment, based on the open-source IDE NetBeans, provides a source editor, class browser, form editor, and debugger to enable developers to view, modify, and customize the generated application. So-called "free-blocks" in the generated code allow existing classes to be imported and called by the application, making use of work done outside of OptimalJ. Using this visual paradigm, developers are shielded from the complexity of coding distributed J2EE applications. Less experienced Java developers can quickly build or modify business applications. Advanced developers are freed from many of the repetitive coding tasks and can focus on architecture refinements or customization.

Dynamic Business Rules Implementation

Once the basic application structure has been defined in the model, application differentiation can be built in using a flexible business rule editor. Simple scripting enables developers to add both static and dynamic business rules at the model level. The business rules editor can define referential data constraints, which ensure data integrity and consistency, and event condition rules that provide support for conditional processing. Static rules are generated as Java code in the application, and dynamic rules are stored in a rules database on the application server to allow for modification at runtime. By separating out business rules as easily identifiable elements in this way, many business requirement changes can be quickly and easily implemented.
Pattern-Driven Application Generation

OptimalJ can generate all the code required for a running application. To do this it first generates models for the Web (JSP), business logic (EJB), and data tiers, which are then used to generate the actual Java code, business rules, and data implementation scripts. The generated models and code are based on implementation templates called patterns, which encapsulate knowledge and best practices for coding to the J2EE specification and follow OMG standards. Developers can quickly generate full working applications using JSP, session EJB, and entity EJB technologies with only limited knowledge of the J2EE specs.

Active Synchronization of Models and Code

OptimalJ provides live synchronization between the application models and the Java code. Developers can make changes to applications during or after release using the visual model, and the affected code will be regenerated automatically. The OptimalJ Source Editor identifies managed source code, business rule code, and custom source code to accelerate understanding and enable you to make modifications during development or after release of the software. This feature really underlines one of the core benefits of OptimalJ, one that will become more apparent as Java business applications reach their second and third iterations. In most projects today the application model is not kept up to date and is discarded once the initial implementation is completed. Developers who join projects late, or who must take over the first cut and create new versions, will benefit from being able to make modifications directly at the model level, confident that OptimalJ has kept the implemented code fully synchronized with that model.

Integrated Deployment Environment

OptimalJ automatically deploys to many of the leading J2EE production servers including the fully integrated Compuware OptimalServer, offered as an option to OptimalJ.
OptimalJ also includes an open-source test environment that contains a Web server and EJB container. The OptimalJ deployment packager automatically deploys to this local environment, allowing developers to test directly as they develop without worrying about the complexities of deployment. In the time it would normally take to create the basic business design of an application using a visual modeling tool, OptimalJ can finish coding the entire application and deploy it. Developers can spend their time refining the design, and the advanced development staff can concentrate on adding new patterns or customizing the existing ones.

OptimalJ represents a new paradigm in Java development, promising productivity gains similar to those experienced with the popular and advanced proprietary 4GL products of yesteryear. It adheres to open standards such as J2EE, EJB, JSP, XML, and MOF, uses open source components and industry patterns, and is even IDE and application-server independent. (It supports NetBeans or Forte for Java as its IDE and the iPlanet Application Server, IBM WebSphere, or BEA WebLogic as application servers.) Compuware has made a clear investment in the J2EE development market and is aligning other products such as its DevPartner and QACenter testing suites alongside OptimalJ to provide a Java tools platform to make development of J2EE applications much easier.

Bob Hendry is a Java instructor at the Illinois Institute of Technology. He's the author of Java as a First Language. He can be contacted at: [email protected]
The current pandemic and the subsequent quarantine will have consequences. This being a technology blog, we are going to deal with those about which we can do something: by using open source software we can help reduce the economic consequences.

There are companies that can continue to carry out their operations using the Internet, while others are unable to do so. The latter, however, can use digital communication tools to build and strengthen relationships with customers so that they can resume their normal activities. According to digital marketers, email lists are one of the key tools in communication. Using them, you can distribute a newsletter, announce offers, give access to gifts, etc.

Almost everyone agrees that the best platform to manage this is MailChimp. MailChimp has templates to create emails, the ability to create different lists, generate different types of reports, and the possibility of having an integrated website. The drawbacks are that to get the best features you have to pay, and, most importantly, that you are entrusting your customer list to a third party. Unless you keep a copy elsewhere, that is never a good idea.

Let's analyze what open source options we have. Of course, it must be borne in mind that to use them we are going to need a server with enough capacity to support the mail traffic, and that managing our own service requires paying attention to technical and security aspects.

Some alternatives to MailChimp

Of course, as MailChimp is a fully integrated platform, none of the options we offer will completely replace it. Therefore, we will have to do some things manually or combine more than one tool.

It is a free program available under the GNU license. It can handle up to 1 million contacts and works with 3 database engines: MySQL, PostgreSQL and Oracle.
- Complete documentation (in English).
- Translated into Spanish.
- Access to administrator and user functions from a single screen.
- Web interface compliant with W3C standards.
- Document exchange function.
- Subscription by topics.
- Protection of the privacy of stored addresses.
- Forwarding of messages at the request of the user.
- Different roles with different types of access.
- Creation of templates.

It is one of the oldest, and being written in PHP and working with MySQL databases, it is compatible with most web hosting plans. It can be downloaded for free and used as free software under the Affero General Public License. Through the browser you can access functions for sending newsletters by email, organizing marketing campaigns and creating advertisements.
- Flexible handling of subscribers, from a few to millions.
- Importing mailing lists from other sources.
- Creation of emails using plain text, HTML or templates.
- Management of an unlimited number of segmented lists with complex demographic data.
- Translated into Spanish.
- Recipient segmentation tools.
- Complete documentation in English.

It is the open source version of a well-known industry product called EMM. It doesn't have as many functions as the other alternatives, but in return it is easier to learn how to use.
- Template-based email creation.
- Graphical representation of performance statistics.
- Real-time performance statistics.
- Self-defined and behavior-based target groups.
- User self-service based on web forms.
- Extension of functionalities through add-ons.
- Compliance with privacy regulations.

Another tool available under the GNU license. It is a mailing list manager that can be used to send newsletters and announce promotions and events. Discussion lists, announcements and groups can be created.
- Custom subscription fields.
- Target audience segmentation.
- Personalization of messages.
- Message editor in plain text and HTML format.
- Programmable sending schedule.
- Mail client support.
- Custom reports.
Which project management and software development methodology is best suited for iPhone development? Are there any documentation templates available which document an iPhone development project?

A software methodology is simply a tool that helps you make sure you got the job done right. You don't have to follow any formal methodology to create great software on any platform. In short, the platform doesn't dictate the process or methodology you use.

Instead of going to a menu of methodologies, take some time to understand your particular project. There will be challenges that your project faces that are not necessarily addressed or spelled out in any off-the-shelf methodology. You will need to tailor the process to your situation. For example, if you need to coordinate art assets and development assets, you need to adjust things so that both of those teams will deliver the right resources at the right times.

While I favor agile methodologies, my "brand" of agile is not any one of the off-the-shelf variety. I've incorporated good practices from both agile backgrounds and more formal CMMI or IEEE backgrounds. The important aspects are deciding:
- How much documentation do you really need? Decide the target audience for every document you think you need. Let that target audience have the final say on whether they really need it.
- How rapidly do you expect aspects of the project to change? Work on stable stuff while you hammer out those changing details. NOTE: the more rapidly you expect things to change, the smaller you need your iterations to be, to keep the cost of change down.
- Do you really know what you want, or are you discovering as you go? Most user interfaces incorporate experimentation and discovery as part of the process of coming up with the secret sauce that makes your application awesome. The more you don't know, the more often you need to stick a version of your app in front of users. These events need to be planned and the feedback incorporated.
Machine-to-machine interaction, on the other hand, is pretty static.

All this assumes you will have basic source control management and issue/bug/feature tracking tools installed in your environment. Whether you use a particular brand of agile/formal processes or you come up with your own process, these are the minimum required elements of successful engineering. You need to be able to roll back certain changes while preserving all others (source control management), and you need to be able to make sure you close every bug you found. There are free tools available to take care of both of those aspects, so there is definitely no excuse not to.

I don't see why any management methodology couldn't apply to iOS development. It's just development; the fact that it's for a highly hyped platform doesn't really change the basics of the work.

The iOS methodology employed at my company appears to be "we can't tell you what we want until you show us something for us to dislike, and we're shocked (shocked!) that you won't be meeting the deadline." Not so different from developing for other platforms, eh? ;-)
I started developing with Raspberry Pi. How can I send the values I read from the sensor to the remote server with Raspberry Pi?

I am trying to modify a piece of code originally designed for an AD5245 to use it for an MCP4462. Both are I2C devices on a Raspberry Pi. I'm trying to figure out how to formulate the write code to the device based on the documentation in the image. If I am not mistaken, I ..

I'm currently working on a traffic light detection code that I would like to implement on a Raspberry Pi 3. To do it, I need to have the fastest program possible to detect the different traffic lights in real time. At one step of my program, I need to separate the R, G and B ..

Connection schema, servo to GPIO:
servo minus (brown wire) – to pin #9 (GND)
servo plus (red wire) – to pin #2 (5V)
servo signal (yellow wire) – to pin #11 (GPIO 17)
Python code – works fine:
from gpiozero import Servo
from time import sleep
servo = Servo(17)
while True:
    servo.mid()
    sleep(0.5)
    servo.min()
    ..

I have a C++ program that I want to run continuously that gathers sensor data on a Raspberry Pi. I'm trying to use Python's subprocess feature to launch the C++ program, and then occasionally read output from the C++ program. However, I am having trouble reading in data from the C++ program to Python. The ..

I have been struggling with this, any help would be appreciated.

At first I would like to mention that in spite of the fact that I have been using this site for a dozen years, this is the first time that I am posting a question, so feel free to tell me if you think I'm doing something wrong. Issue: I am trying to develop ..

I'm a student and for a project I need to use a camera with OpenCV C++ on a Raspberry Pi 3. I found a program on the web and I get an error when I try to start it. I searched on the web and I didn't find an answer. Here is the code: #include <stdio.h> ..
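On the subprocess question above (reading a C++ program's output from Python), a common pitfall is that the C++ side buffers stdout when it is not attached to a terminal, so the parent sees nothing until the buffer fills; the child must flush (std::endl or fflush). A minimal sketch of the Python side, with the C++ daemon faked by a Python one-liner (the real program's path would go in its place) so the example is self-contained:

```python
import subprocess
import sys

# Stand-in for the long-running C++ sensor program (hypothetical path
# "./sensor_daemon"); here faked with a Python one-liner so the sketch runs
# anywhere. The real C++ program must flush stdout after each line.
child = subprocess.Popen(
    [sys.executable, "-c", "print('temp=21.5'); print('temp=21.7')"],
    stdout=subprocess.PIPE,
    text=True,   # decode bytes to str
    bufsize=1,   # line-buffered on our side, so lines arrive as emitted
)

readings = []
for line in child.stdout:   # blocks until the child emits a line
    readings.append(line.strip())

child.wait()
print(readings)  # -> ['temp=21.5', 'temp=21.7']
```

For "occasionally" reading rather than blocking, the same pipe can be drained from a background thread or polled with `selectors`; the buffering caveat applies either way.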
I am using this to cross-compile with Raspberry Pi, but I am stuck at the step ../qt-everywhere-src-6.0.0/configure -opengl es2 -device linux-rasp-pi3-g++ -device-option CROSS_COMPILE=arm-linux-gnueabihf- -sysroot /opt/qt5pi/sysroot -prefix /usr/local/qt5pi -opensource -confirm-license -skip qtscript -nomake examples -make libs -pkg-config -no-use-gold-linker -v because it shows the error "List doesn't recognize subcommand transform". I am using Ubuntu 18.04 in virtual ..

So I'm working on a project to make a robotic car recognize a stop sign and some objects and take actions based on that. Now whenever I run this code it barely opens the camera and then I lose connection to the VNC Viewer immediately. I use a 5 Volt, 3 A output power bank. there ..
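The very first question in this list (sending sensor readings from the Pi to a remote server) usually comes down to a periodic HTTP POST of a small JSON payload. A stdlib-only Python sketch; the endpoint URL and field names are hypothetical, and the network call itself is wrapped in a function so the payload-building part runs anywhere:

```python
import json
from urllib import request

# Hypothetical endpoint -- replace with your own server's URL.
ENDPOINT = "https://example.com/api/readings"

def make_payload(sensor_id, value):
    """Serialize one reading as UTF-8 JSON, ready for an HTTP POST body."""
    return json.dumps({"sensor": sensor_id, "value": value}).encode("utf-8")

def post_reading(sensor_id, value):
    """POST one reading; call this from the Pi's sensor-read loop."""
    req = request.Request(
        ENDPOINT,
        data=make_payload(sensor_id, value),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req, timeout=10) as resp:  # needs a live server
        return resp.status

# Build (but don't send) a sample payload:
payload = make_payload("temp0", 21.5)
print(payload.decode("utf-8"))  # -> {"sensor": "temp0", "value": 21.5}
```

The same shape works with the third-party `requests` library (`requests.post(ENDPOINT, json=...)`) if it is installed; MQTT is the other common choice when readings are frequent.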
CA-TPX provides support for Secured Signon using PassTickets. The use of PassTickets eliminates the transmission of passwords across network facilities in clear text. A PassTicket is a one-time-only password substitute that is automatically generated by an authentication server, such as CA's Single Signon Option or IBM's Network Security Program, on behalf of a client workstation requesting access to a mainframe application such as TPX. Once a user is signed on to TPX, PassTickets may also be generated for applications subsequently accessed through TPX.

NOTE 1: This document is specific to Top Secret. For instructions specific to ACF2 or RACF, please refer to the links at the end of this document.

NOTE 2: If PTF LU08678 (enhancement) is applied and active, regardless of whether a legacy PassTicket or an enhanced PassTicket is used, TPX PTF LU03420 (enhancement) must be applied and active as well.

Component: TPX for z/OS

The implementation of PassTicket (PTKT) support requires customization within both TPX and the security system.

Customize Top Secret for PassTickets

For Top Secret, you must have the required NDT rules in place. Refer to the Top Secret User Guide and the Top Secret Cookbook.

1. TSS ADDTO(NDT) PSTKAPPL(applname) SESSKEY(................) SIGNMULTI
2. TSS ADD(dept) PTKTDATA(IRRPTAUT)
   The Resource Class has a maximum Ownership of 8 characters.
3. The Resource can be permitted as one of the following, where 'applname' is the Application Name defined in the NDT and 'userid' is the Userid:
4. And finally, authority to generate PassTickets:
   TSS PER(serveracid) PTKTDATA(IRRPTAUTH.applname.acidname) ACCESS(UPDATE)

Authorize Applications to Generate or Evaluate PassTickets

Applications can invoke the R_ticketserv or R_GenSec callable service to generate or evaluate a PassTicket on behalf of an authorized user. If running in 64-bit addressing mode (AMODE 64), you must use R_GenSec; R_ticketserv does not support AMODE 64.
For complete information about R_GenSec and R_ticketserv, see the IBM z/OS Security Server RACF Callable Services documentation. The PTKTDATA resource class authorizes the use of each callable service. The following table describes the required resource and access for generating and evaluating PassTickets:

Operation            Resource Name                          Access Required
Generate PassTicket  IRRPTAUTH.application.target_userid    UPDATE
Evaluate PassTicket  IRRPTAUTH.application.target_userid    READ

Invoking R_ticketserv/R_GenSec triggers a security call for PTKTDATA(IRRPTAUTH.application.target_userid) to ensure the caller is authorized to evaluate/generate a PassTicket. If the calling ACID (typically the region ACID) is authorized, the PassTicket operation can occur. If the PTKTDATA class is not active, or the required resources are not defined, the PassTicket request fails.

How to authorize applications to generate or evaluate PassTickets:

1. Define the PTKTDATA resource to the Resource Descriptor Table (RDT):
   TSS ADD(RDT) RESCLASS(PTKTDATA)
   A PTKTDATA resource definition now exists in the RDT.
2. Grant ownership of the IRRPTAUT resource:
   TSS ADD(owning_acid) PTKTDATA(IRRPTAUT)
   IRRPTAUT is now owned.
3. Give a target user the permission to have a PassTicket generated/evaluated through an application:
   TSS PER(acid) PTKTDATA(IRRPTAUTH.application.target_userid)
   application --- Specifies the application.
   target_userid --- Specifies the user who receives the permission.
   A permit is added.
4. If control option PTKRESCK(YES) is set, grant additional permissions as follows:
   TSS ADD(owning_acid) PTKTDATA(PTKTGEN.)
   TSS PER(userid) PTKTDATA(PTKTGEN.application.target-userid)
   (See "Authorize Applications to Generate or Evaluate PassTickets," step 4, for additional details on PTKRESCK.)

Top Secret customization for applications to generate or evaluate PassTickets is now complete.

Within TPX, there are two separate aspects of PassTicket support: Users and Applications.
You can implement one or the other or both, depending upon your site requirements. You can specify pass tickets and/or qualified pass tickets for users and applications. When both are specified, CA TPX attempts to use the most secure form of pass ticket available as defined in the external security system.

A. TPX customization to activate PassTickets

This parameter does not impact the actual sign-on to TPX. TPX accepts the userid and password and then makes a security call for validation; TPX is unaware at this point of whether the passcode field contains a password or a pass ticket. It is only after the user is signed on to TPX that this parameter becomes important; the details are outlined in the field-level help.

B. TPX applications sign on with a PassTicket

Session Options requirements:
- Set 'Generate Pass Ticket: Y' in the ACT (Application Characteristics Table), or Profile Session Options, or User Session Options.
- To use qualified pass tickets, set 'Gen Qualified Pass Ticket: Y' in the ACT, or Profile Session Options, or User Session Options.
- Set 'Pass Ticket Prof name', if required, in the ACT. (This parameter is not available at the profile or user level.) The 'Pass Ticket Prof name' will be supplied to the external security system instead of the APPLID during pass ticket generation.
- When 'Pass Ticket Prof name' is NOT specified (field left blank), TPX issues the pass ticket request with the USERID & APPLID.
- When 'Pass Ticket Prof name' is specified, TPX issues the pass ticket request with the USERID & Prof name.

NOTE: To use pass tickets in TPX, the application must be defined in the CA TPX Application Characteristics Table (ACT). In order to trigger a pass ticket session for a selected application, a startup TPX ACL is required to ensure secured signon.

To verify whether or not your application requires a 'Pass Ticket Prof name' to be defined, run the TSS SECTRACE.
For Top Secret (TSS):

Run a SECTRACE against the TPX address space (using the TPX jobname) to verify the generation of a pass ticket in TSS. Repeat the test with a second SECTRACE against the application to verify which entity/element the application is sending to TSS for validation. If it is not the VTAM APPLID, define this entity/element in the TPX ACT 'Pass Ticket Prof name' field to request a pass ticket for this value instead of the actual APPLID. It is important not to run both SECTRACEs at the same time, so that the trace data remains specific to either TPX or the application.

Also ensure the entity/element identified in the trace on the application (the one that you are specifying in 'Pass Ticket Prof name') is defined within TSS PTKTDATA. To verify whether the profile name has been set up for pass tickets:

TSS LIST(profile_name) DATA(ALL)

'Pass Ticket Prof name' is required for TSO and VM systems and will have a value. Other applications requiring a Pass Ticket Prof name, as provided to us by TPX customers (please verify for your environment):

There may be additional applications where this entity/element is required; these can be determined in conjunction with the application vendor and your security administrator.

Additional TPX Setup Requirements:

To ensure that all changes have been implemented, it is recommended to recycle TPX. If you are familiar with the reload command, that may also be used to implement each change.
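As a worked illustration, the Top Secret steps above can be collected into a single sequence. All names here (application TPXPROD, department SECDEPT, server ACID TPXSTC, user USER01, and the session key) are placeholders that must be adapted to your site; the commands themselves are taken from the steps in this document:

```
TSS ADDTO(NDT) PSTKAPPL(TPXPROD) SESSKEY(0123456789ABCDEF) SIGNMULTI
TSS ADD(RDT) RESCLASS(PTKTDATA)
TSS ADD(SECDEPT) PTKTDATA(IRRPTAUT)
TSS PER(TPXSTC) PTKTDATA(IRRPTAUTH.TPXPROD.USER01) ACCESS(UPDATE)
```

After issuing the commands, the SECTRACE procedure described above confirms whether TPX and the target application agree on the entity being validated.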
How to Find Motivation, and Learn as a Remote Junior Developer
By: Kat Connolly
July 30, 2020
Estimated reading time: 3 minutes.

The developer world is quickly changing because of COVID-19, and it seems most of us will be fully remote for a long time to come. I started my first developer job at Neo Financial last fall and, like most of the world, we went remote in March to help flatten the curve. Although I had a head start of a few months on being totally new to life as a developer, I found the adjustment to becoming a fully-remote junior in a fast-paced startup particularly challenging.

There are plenty of lists about how to be productive in your home office - get a desk, take breaks, get dressed, etc. - but I found that these weren't enough. In addition to adjusting to the new norm, I was also learning how to be a real developer. For those of us in a junior role, it's not just a matter of self-discipline. It's about staying motivated to continue when you face never-ending errors, short deadlines, and less (or sometimes no) access to seniors for help. For the first half of self-isolation, I kept pushing things aside saying 'I'll get to this when we're back in the office.' But as days turned to weeks, I realized that I needed a totally new mindset in order to leave this period of time a better developer instead of just surviving and maintaining.

Let Yourself Feel Success

It's easy to see everything you haven't accomplished. When you give yourself a to-do list with items that span multiple components and pages, you set yourself up for disappointment. You begin and end the day with one task, and don't complete it. This is why I started looking at my bigger projects and finding every small task. It's kind of cheesy, but crossing off that to-do list is gratifying. In fact, studies have shown that this motivation actually helps you to complete more tasks.
When splitting these tasks up, give yourself a reasonable deadline -- that way you can walk away from it if you need to, or reach out and find help from a senior or on Stack Overflow after you've passed the allotted time.

Take Time to Invest in Learning; Don't Just Do

I fell into the terrible habit of copying and pasting solutions into my code that I didn't understand. ROOKIE MISTAKE, I know. But isolation made it so much easier to just do it to get it done and move on. When I realized that this was only to my detriment, I made a rule that turned my lack of motivation on its head: if there is something that you don't know how to do, the first place you should look is in the docs. Not Google, not Stack Overflow, but the real, original docs. And while you are in the docs, read the whole page instead of just the small piece that might help you. This seems obvious, but the day I decided to do this, things really started to turn around for me. Doing this gave me a deeper understanding of the technology that I was working with, and my code quality improved overnight.

On a similar note: track the things you learn. Because I'm a junior, so many things that I use day-to-day are totally new to me. I started keeping a list of new concepts and tricks that I had learned (surprise! Most of them came directly from the docs!) and I was shocked to see my progress after every single day. It's like a reverse to-do list.

Teach Others

Feeling isolated can seep into your state of mind and affect your overall mood and productivity. In mid-March, I volunteered to mentor a group of teens for the Technovation Challenge. As a junior I'm so used to there being some senior out there who has seen my issue before, but for this group of teens I was the only experienced developer to help them. This really pushed me to think differently and never take 'no' for an answer.
This was their passion project and I wasn't about to disappoint them because I couldn't find a way to work around their technical issues. Their motivation also gave me a sense of community and an energy that I just hadn't been feeling while working remotely.

You don't have to dive into the deep end of teaching the way I did. Contribute to Stack Overflow's community or jump on some coding subreddits and help beginners learn how to write loops and use regex. In addition to much-needed social interaction, explaining coding concepts will solidify your own understanding and build your confidence as a developer.

Remember Why You're a Developer

I know that a lot of bootcamp grads (myself included) took a chance on this career because we want to feel challenged. At the beginning of self-isolation, I was really dreading some of the tasks that I knew I had to complete for work. But I just needed to be reminded that this is the best job in the world and my potential is as high as the effort that I put in (that's a lot of cheese, I know). Since making these changes in my mindset, I've been feeling so motivated to come up with creative solutions to complex problems--the one true love of every developer.

And, if you're feeling drained from work tasks, learn something new or make something fun. Try teaching yourself to DJ with Sonic Pi or make an app like this one that finds you the fattest cat that is up for adoption at the San Francisco SPCA.

No one becomes a developer because it's easy. I think the best motivation for succeeding as a remote junior is to remind yourself that every roadblock is just another opportunity to become better at what you do.

Kat Connolly is a Lighthouse Labs web development bootcamp graduate and developer at Neo Financial.
Is the Japanese emperor forbidden from eating fugu (puffer fish)?

According to Wikipedia:

Fugu is also the only food the Emperor of Japan is forbidden to eat, for his safety

This information comes from a Forbes article, "Killer Foods":

The fish remains the only delicacy denied the emperor – too risky.

This seems a bit fishy to me, so doing a search in Japanese, the best I could come up with was this anecdote, with my translation interspersed:

昭和天皇が自分だけフグ食べさせてもらえなくて拗ねられた話大好き。
I love this story about Hirohito sulking because he alone couldn't eat fugu
侍従「陛下はだめです」 Chamberlain: Your Majesty is forbidden
陛下「なんで」 Hirohito: Why?
侍従「毒があるので」 C: It's poisonous
陛下「みんな食べてるじゃん」 H: But everyone else is eating it
侍従「毒抜きしましたので」 C: The poison has been removed
陛下「じゃあ私も」 H: OK, I'll have some too
侍従「だめです」 C: It's forbidden
陛下「なんで」 H: Why?

Here is another page (with a horrendous Google translation) that makes similar statements. So, what is the truth about the emperor and fugu?

I have to ask: forbidden by whom? Custom? Law? Terms and conditions signed upon becoming Emperor?

This might be possible; in Belgium, for example, if the king is visiting places around the country he isn't allowed to eat fish, due to the fishbones. But this is all anecdotal of course...

@Oddthinking, I would assume forbidden by his flunkies in the Imperial Household. I hope you can see my concern: a flunkie that forbade their boss from doing something the boss actually wanted to do might find that they are no longer employed as a flunkie.

"This seems a bit fishy to me" - I see what you did there...

This is a nice question and I wondered about it a while ago. All I found useful is this paper, which outlines the story's background: http://digitalcommons.law.wustl.edu/cgi/viewcontent.cgi?article=1086&context=globalstudies It has no mention of this particular issue though, and the author was unable to track any of the related stories to a credible source, implying it is a word-of-mouth story.
About the emperor claim, I remember tracking it down to an early-'80s Reader's Digest issue which I can't find now, but that does say something about the credibility of the story (I mean in a negative way).

Formerly the emperors had to stay in their palace in Kyoto and did not have the chance to visit Shimonoseki, where most fugu is prepared. More concretely, fugu was illegal in Kyoto and Tokyo from about 1600 to about 1945. It's fairly probable that no emperor before Hirohito was given the option. (source)

In 1964, Hirohito went to Shimonoseki, but his doctor forbade him from eating fugu, which made him upset. At another time, his son presented him with fugu. Again his doctor intervened, and this time Hirohito argued about it for a long time, possibly around two hours, since it was a gift being offered to him. In the end, his wife the empress got him to calm down. Hirohito was never permitted to eat fugu. (source)

This custom has simply changed in the present day. The current Emperor Emeritus (Hirohito's son) was brought up occasionally eating non-Japanese food, so he got in the habit of choosing his own menu. He has never picked fugu himself, but Shimonoseki City was aware of his broader diet and offered him fugu as a gift, and he was allowed to eat it, so we can say the custom has changed. (source)

Presumably the present emperor eats fugu as well.

Looks like a great answer; any chance you could add some translated quotes and provenance for each source (e.g. a simple line explaining what each source is), so it's not wholly reliant on those links staying fresh?

I've found references to fugu (or poisonous blowfish) being forbidden "to the Emperor", "the Emperor and his family", "to the Emperor and the Empress", "to the Emperor and the royal family" in Forbes, NYMag, The Guardian, the Chicago Tribune, etc.
In this article in the LA Times, they talk about it being an ancient law forbidding the Emperor to eat fugu; The blowfish, known here as fugu, carries a deadly neurotoxin with no known antidote. An average-sized fugu is chock-full of the poison tetrodotoxin -- in its blood, liver and even its sex organs, Sasaki says. But he scoffs at the centuries-old ban on the Japanese monarch eating the delicacy, sought after by many Japanese as daring cuisine. "The prince and other royalty have eaten fugu, so why not the emperor?" he says. "It would set a good example." Frankly I've seen so many re-hashings of the same phrase (all without any shred of sourcing) that I'm utterly convinced that this is an urban myth, potentially one taught to qualified fugu chefs during their training since they seem to be taken in by it as well.
[BUG] Failed to set parking heater mode to off

Environment
skodaconnect release with the issue: v1.0.30-rc7
Last working homeassistant-skodaconnect release (if known): v1.0.30-rc5
Home Assistant Core release with the issue: core-2021.1.5
Operating environment (Home Assistant/Supervised/Docker/venv): Home Assistant OS 5.10
Car model and year: Superb 2018
Valid We Connect subscription: Yes
Debug logs enabled: Yes

Describe the bug
When attempting to deactivate the preheater, an error occurs stating that there is an invalid or no response, and then a failure to execute.

Steps to Reproduce
Toggle the switch for disabling the preheater.

Expected behavior
It should be deactivated.

Screenshots
Some additional log occurrences too

Traceback/Error logs
DEBUG (MainThread) [skodaconnect.connection] HTTP POST "https://msg.volkswagen.de/fs-car/bs/rs/v1/skoda/CZ/vehicles/vin-goes-here/action"
ERROR (MainThread) [skodaconnect.connection] Failure to execute:
WARNING (MainThread) [skodaconnect.vehicle] Failed to set parking heater mode to off - Invalid or no response
File "/config/custom_components/skodaconnect/switch.py", line 38, in async_turn_off
File "/usr/local/lib/python3.8/site-packages/skodaconnect/dashboard.py", line 530, in turn_off
File "/usr/local/lib/python3.8/site-packages/skodaconnect/vehicle.py", line 407, in set_pheater

Additional context

This was definitely working in RC5? And start preheater works in RC7? Then it's a simple diff to see what I changed that might have botched it. Always tricky with the preheater since it's a "legacy" API and I can't test it myself.

@Farfar Just reinstalled RC5 and it doesn't work there either, which makes me wonder if it really did work before.

I'm pretty sure that it did work then since I wrote that in a comment before the weekend.
https://github.com/lendy007/homeassistant-skodaconnect/issues/8#issuecomment-769163737

The error handling changed in the last couple of RC releases, since it was a hassle trying to get exceptions carried correctly and informatively. Could be some typo somewhere. This part is interesting though:

ERROR (MainThread) [skodaconnect.connection] Failure to execute:

Since the error is blank, there's an exception but no message, which is weird. It might be that an HTTP error occurred, since the error handling for set functions isn't (and doesn't need to be) as extensive as for get functions; it's enough to know whether a set succeeded or failed. What might be an issue is that the SPIN token is generated and sent for both start and stop preheater. It was previously only sent for start, but preventing it from being generated for 'stop' would need more checks and I didn't think it was necessary.

Do you have access to the skodaconnect/connection.py file? If so, you can change line 829 from:

self._session_headers['x-mbbSecToken'] = await self.get_sec_token(vin = vin, spin = spin, action = 'heating')

to:

if not 'quickstop' in data:
    self._session_headers['x-mbbSecToken'] = await self.get_sec_token(vin = vin, spin = spin, action = 'heating')

and report back if it works?

That seems to work! While trying I somehow made too many requests and ended up getting 429 - too many requests. The app basically told me to take the car for a ride before any more requests would be allowed.
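The logic of the suggested change can be sketched in isolation like this. Note that `build_headers` and the stubbed `get_sec_token` below are hypothetical stand-ins for illustration, not actual skodaconnect code; in the library the token is fetched asynchronously and attached to the session headers.

```python
def get_sec_token(vin, spin, action):
    # Stub standing in for the real SPIN-challenge request.
    return f"token-{action}-{vin}"

def build_headers(data, vin, spin):
    """Return extra headers for a pre-heater action request.

    Only attach the SPIN-based security token when the request body
    is NOT a 'quickstop' (stop preheater) - per the fix in the issue,
    stop requests were failing when the token was generated.
    """
    headers = {}
    if 'quickstop' not in data:
        headers['x-mbbSecToken'] = get_sec_token(vin=vin, spin=spin, action='heating')
    return headers
```

A start request (`quickstart`) still gets the `x-mbbSecToken` header, while a stop request (`quickstop`) is sent without it.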
ERROR (MainThread) [skodaconnect.connection] Unhandled HTTP exception: 429, message='Too Many Requests', url=URL('https://msg.volkswagen.de/fs-car/bs/vsr/v1/skoda/CZ/vehicles/vin-goes-here/requests')
WARNING (MainThread) [skodaconnect.vehicle] Failed to execute data refresh - Invalid or no response
File "/config/custom_components/skodaconnect/switch.py", line 32, in async_turn_on
File "/usr/local/lib/python3.8/site-packages/skodaconnect/dashboard.py", line 370, in turn_on
File "/usr/local/lib/python3.8/site-packages/skodaconnect/vehicle.py", line 486, in set_refresh

Maybe we should also declare request_state = '' on line 843 of connection.py, or remove the variable altogether from the dict on line 845. HA was complaining when trying to access the non-existent variable.

Excellent. Yes, that variable is a remnant I've forgotten to remove. I will incorporate this in the next release, which will probably be 1.0.30 final.

Closing this issue as it is solved with 1.0.30.
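A 429 like the one above is the server telling the client to slow down. The usual generic pattern is to retry with exponential backoff rather than fail immediately; a minimal sketch (illustrative only, not skodaconnect code - `do_request` is a placeholder for whatever performs the HTTP call):

```python
import time

def request_with_backoff(do_request, max_retries=3, base_delay=1.0):
    """Call do_request(); on a 429-style rate limit, wait and retry
    with exponentially growing delays instead of failing immediately.

    do_request is expected to return a (status, body) tuple.
    """
    for attempt in range(max_retries + 1):
        status, body = do_request()
        if status != 429:
            return status, body
        if attempt < max_retries:
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    return status, body
```

Whether this is appropriate here depends on the API's rate-limit policy; in this case the server apparently blocks further requests until the car is driven, so backing off would only help with transient limits.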
Send later not functioning predictably

I scheduled a couple of emails to send later, but for some reason the scheduling is all messed up. When I set a specific snooze time, that works fine, but send later doesn't seem to work right. It might be that it is snoozing according to some other time zone. I'm based in Israel, so ET +7hrs. Not clear. In the image below, you can see the time on my computer's clock (top right, 8:48 PM), the snooze later time (6:03 PM) and the time it is actually planning to send on the message window (in 3 hours). I'd love for this feature to be predictable and reliable. Thanks!

OSX El Capitan, Version 0.4.38-27d21d1

hey @osakkers, thanks for reporting this! This is no good. Can you tell me what time you scheduled it for specifically?

6:03 PM - it's the second message in the drafts folder which you can see in the background, and you can see the timestamp on the far right.

Update: I checked my drafts folder again and the messages still haven't sent (even though previously it said they were due to send in 3hrs). It seems that it is simply failing to send, so it seems it's not a time zone/clock issue. See latest image:

thanks for the update @osakkers, still looking into this

@osakkers could you also tell me at what time you scheduled it?

Around noon if I recall correctly...

great, thanks! I think we know what is causing this, we should ship a fix soon. Thanks!

Awesome. You guys are the best!
Hey folks—we've dedicated more server resources to running the Send Later queue and it's resolved this issue. Cheers!

Hi, I scheduled a number of messages last night and not one of them sent this morning, despite my computer being on. I also didn't get an alert or anything. Do I need to set reminders to check every time I send later? It's pretty disastrous if messages don't send! Is there anything in place to ensure they get sent? Help much appreciated.

Send later doesn't work at all! The messages stay in the drafts section then disappear; no one gets them, and they're not in the sent folder. I really liked the Nylas app, but these basic things are not working as advertised. I really tried hard to love this app, I even subscribed, but I think I have to cancel my subscription now :(
In the ever-evolving landscape of programming languages, Python stands tall as a versatile and high-level programming language. Its simplicity, readability, and extensive libraries have catapulted it into the limelight, making it a favorite among developers and organizations worldwide. In this exploration of Python’s prowess, we delve into the essence of Python, understanding why it is regarded as a high-level language and its implications in the world of technology. Python is a dynamically typed, interpreted, and high-level programming language known for its readability and clean syntax. Created by Guido van Rossum and first released in 1991, Python has since become one of the most popular programming languages globally. It is widely used for web development, data science, artificial intelligence, automation, and more. The Python Training in Hyderabad course by Kelly Technologies helps to build the skills needed to become an expert in this domain. High-Level Programming Language: The term “high-level” refers to the abstraction of complex details, enabling developers to focus on solving problems rather than dealing with machine-specific intricacies. Python achieves this abstraction through several features: - Readability: Python’s syntax is designed to be clear and readable, making it easier for developers to express concepts in fewer lines of code. This readability enhances collaboration among programmers and facilitates the maintenance of code. - Abstraction: Python provides a high level of abstraction, allowing developers to work with abstract data types and structures without concerning themselves with low-level details. This simplifies the programming process and accelerates development. - Dynamic Typing: Python is dynamically typed, meaning variable types are interpreted during runtime. This flexibility enables developers to write code more quickly and focus on the logic rather than worrying about data types. 
- Memory Management: Python features automatic memory management, known as garbage collection. This relieves developers from manual memory allocation and deallocation, reducing the chances of memory-related errors. Python Training in Hyderabad – Empowering Developers: As Python’s popularity soars, the demand for proficient Python developers is at an all-time high. In Hyderabad, a city known for its vibrant IT ecosystem, Kelly Technologies has emerged as a leader in Python training. The institute offers comprehensive Python training programs designed to equip aspiring developers with the skills and knowledge needed to excel in the competitive tech industry. Kelly Technologies: Nurturing Python Talent: Kelly Technologies, with its commitment to excellence, provides a structured and hands-on Python training curriculum. The institute’s experienced trainers ensure that participants not only grasp theoretical concepts but also gain practical insights through real-world projects. The training modules cover: - Fundamentals of Python: Understanding the basics of Python, including data types, control structures, and functions, lays a solid foundation for aspiring developers. - Web Development with Django: Kelly Technologies’ Python training extends to web development using Django, a high-level web framework. Participants learn to build robust and scalable web applications. - Data Science and Machine Learning: Python’s extensive libraries, such as NumPy, Pandas, and TensorFlow, make it a powerhouse for data science and machine learning. Kelly Technologies’ training programs delve into these domains, empowering participants with practical skills. - Automation with Python: Automating repetitive tasks is a key strength of Python. Kelly Technologies ensures that participants gain expertise in automation, enhancing their efficiency as developers. 
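Two of the language features described above, dynamic typing and automatic memory management, can be seen directly in a few lines of Python:

```python
import sys

# Dynamic typing: the same name can be rebound to values of different
# types; types are checked at runtime, not declared up front.
x = 42
print(type(x).__name__)   # int
x = "forty-two"
print(type(x).__name__)   # str

# Automatic memory management: objects are reclaimed once no references
# remain; sys.getrefcount lets us peek at an object's reference count.
data = [1, 2, 3]
print(sys.getrefcount(data) >= 2)  # True (our name plus getrefcount's argument)
```

No type declarations or manual `free` calls appear anywhere, which is exactly the high-level abstraction the article describes.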
In conclusion, Python’s status as a high-level programming language is rooted in its readability, abstraction, dynamic typing, and memory management. As technology continues to advance, Python remains a pivotal force, driving innovation across various domains. For those aspiring to master Python in Hyderabad, Kelly Technologies stands as a beacon, providing top-notch training that paves the way for a successful career in the dynamic world of programming. Embark on your Python journey with Kelly Technologies, where expertise meets excellence in training. Master Python and unlock a world of possibilities in the realm of technology.
A while ago a friend of mine asked for some help in getting maps for his upcoming Te Araroa hike on his smartphone. I decided to create an offline map file for his use, and I am now sharing it with anyone else who might be interested. I started out by researching the services offered by Landcare Research, and ended up using their TMS tile servers for the Topo Base Map and a Text layer on top of it, to create a useful map of the trail. I wanted to use my own little app (OMFG) to download the tiles and create the offline map file, but unfortunately it cannot handle creating a single map file from two tile sources (topo + text). So I ended up using the excellent MOBAC app, after making some minor code improvements. At smaller zoom levels, the map seems a bit empty, but at the zoom levels most people would normally navigate (13-15), it shows all the elevation contours, streams and rivers, roads and place names. It seems like a good map to have handy on such a long trail. I later bumped into a post on Facebook that directed me to this site. The site offers offline maps for what I assume is the whole of New Zealand, in two separate files, one for each island. Each file is around 1GB in size. That’s pretty big, though it’s still very useful. I looked at the data source for those maps, and found the Land Information New Zealand web site and web services, which offered a different set of map tiles than the ones I used for my first file. This time, I was able to use my own app to create an offline map file that uses their Topo250 data for zoom levels 0-11, and the Topo50 data for zoom levels 12-15 (I chose the grid-less option, to remove some clutter from the map). Here is a view on the data I used for my files. You can switch between the two sources by hovering over that blue button at the top right of the map. Notice that this interactive map will show all of New Zealand, so you can zoom in and out anywhere you’d like, on either island. 
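For anyone curious how tile downloads like this work: TMS/slippy-map servers address tiles by zoom/x/y, and the standard Web Mercator conversion from latitude/longitude to tile indices is only a few lines. This is a generic sketch of that well-known formula, not code from OMFG or MOBAC:

```python
import math

def latlon_to_tile(lat_deg, lon_deg, zoom):
    """Convert WGS84 lat/lon to slippy-map tile (x, y) indices at a
    given zoom level, using the standard Web Mercator tiling scheme."""
    lat_rad = math.radians(lat_deg)
    n = 2 ** zoom  # number of tiles along each axis at this zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# Example: tile containing Wellington, NZ (approx. -41.29, 174.78) at zoom 13
x, y = latlon_to_tile(-41.29, 174.78, 13)
```

A downloader walks every (x, y) pair inside the trail corridor's bounding boxes for each zoom level and fetches the corresponding tile URL.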
But the offline files only contain the trail corridor itself (and ~1.5Km around it). You can switch the layers to see the difference in the maps themselves. Make sure you check out the higher zoom levels, to really decide which one you prefer. In my opinion, the LINZ map looks a bit better than the Landcare one, but if I had to choose, I’d take both of them with me, and alternate according to my needs. Each file is under 500MB, so I was able to get a good size for the entire TA. The only thing that is bothering me with those maps is that at lower zoom levels (check out around 8-11), the trail corridor seems a bit too narrow. I thought about maybe adding the whole of New Zealand at those levels, just to get a better understanding of your whereabouts in comparison to places a bit further away from you. If there is a need for such an improvement, I can give it a shot. Don’t hesitate to ask. Please double check that the files contain all the mapping data you might need while on the trail. Mistakes might have been made (and probably were made) during the files’ creation, so don’t rely only on them on your hike. Have backup maps in case your phone’s power runs out, or the files don’t contain a required section. Use your own judgement. Here are the files: Both versions follow the trail corridor defined by the TeAraroaTrail_asTrack.gpx file from here. I am sure you can use the other versions on that site, as they should all follow the same route. - Land Information (BCNav) (474MB) - Landcare Research (BCNav) (472MB) - Land Information (Orux) (474MB) - Landcare Research (Orux) (390MB)
RadioButton is broken

I know there is now a RadioButtons control, but this is about using multiple RadioButton controls, which is still necessary to support certain layouts (e.g. RadioButton controls in a WrapPanel).

Bug 1 - can't set IsChecked in XAML

Consider the following XAML.

<StackPanel>
    <StackPanel BorderBrush="Black" BorderThickness="1">
        <RadioButton Content="Radio button 1" IsChecked="True"></RadioButton>
        <RadioButton Content="Radio button 2"></RadioButton>
        <RadioButton Content="Radio button 3"></RadioButton>
    </StackPanel>
    <StackPanel BorderBrush="Black" BorderThickness="1">
        <RadioButton Content="Radio button 1"></RadioButton>
        <RadioButton Content="Radio button 2"></RadioButton>
        <RadioButton Content="Radio button 3" IsChecked="True"></RadioButton>
    </StackPanel>
</StackPanel>

RadioButton controls without a GroupName are scoped to their parent, so this should produce two independent sets of radio buttons. Want to guess what it initially looks like when you load a page containing this markup? Only the final radio button is checked. The first one is not checked. It appears that the radio buttons are put into their respective groups after the IsChecked property is set in XAML. Note that the radio buttons do subsequently work as two separate groups. The workaround is that you have to set IsChecked in code behind after the InitializeComponent of the page, which is annoying.

Bug 2 - GroupName is scoped to the entire visual tree rather than the naming container

I'm not sure if naming container is the right word - I think this is what it was called at some point, but maybe it's now called definition scope or something like that. You'd think you'd get two independent sets of 3 radio buttons. In fact, you will get a single set of 6 radio buttons. Now it's possible this behavior is deliberate, but I really don't think so - would anyone really expect it to work like this? This effectively makes setting the GroupName property to a fixed value useless.
The workaround is to create a new GUID after InitializeComponent in the user control, then set the GroupName of all the radio buttons to that, and then set the IsChecked property of the one you want. Technically, fixing this so the GroupName is scoped to the current naming container (or definition scope, or whatever it's called) would be a breaking change. If you think people are really relying on the existing behavior (which I find extremely unlikely), then please consider adding a ScopedGroupName property to RadioButton which is scoped to the current naming container. These bugs make working with RadioButton controls very awkward (and these bugs have been there from the beginning - I did report them on UserVoice in the past but that didn't get any attention).

Windows 10 version: November 2019 Update (18363) - saw the problem? Yes
Device form factor: Desktop - saw the problem? Yes

Those seem to be valid issues. If the only reason the RadioButtons control isn't an option is the fact that you can't change the layout, you might want to retemplate the RadioButtons control and use a different layout internally. We might also want to make the layout of the internal ItemsRepeater a property, so that doesn't require retemplating. @YuliKl @StephenLPeters as FYI.

Exposing the Layout property on RadioButtons is an option.

@ranjeshj Should we open a proposal for that (adding a layout property to RadioButtons)?

Yes. Please.

Thanks for the comments. This issue was intended to be mainly about fixing bugs with the individual RadioButton control - I hope it can stay focused on that. I think there will always be situations (possibly quite rare) where individual RadioButton controls have to be used, and since frameworks like HTML have this control, I think it should continue to be supported. Regarding the comments about the WinUI RadioButtons control.
In my case I have RadioButton controls inside a custom WrapPanel control. If the RadioButtons template were happy with an ItemsControl replacing the ItemsRepeater, then I could re-template it to make use of my custom WrapPanel, using it as the ItemsPanel in the ItemsControl (I haven't tested whether this works or not). Exposing the Layout property would be another solution, but then I'd have to refactor the layout logic of my control so that it could be used as the Layout property, which would be a bit awkward.

@benstevens48 Yes. I am following up to understand the issues with RadioButton. I believe one of the reasons for the RadioButtons control in the first place is the automation experience when using GroupName. RadioButtons provides a better automation experience (Narrator) for that scenario. ItemsRepeater and ItemsControl do similar things, so if you are writing a custom WrapPanel, it is possible to do the same for a WrapLayout (deriving from NonVirtualizingLayout). There is also a LayoutPanel in the preview which can get you the same behavior as WrapPanel (LayoutPanel has a Layout property that you can feed the same layout object).

Unfortunately these RadioButton issues won't be able to be fixed without WinUI 3.

@benstevens48 Since we will not be able to fix these issues for a while, you might be able to work around this issue by setting the checked radio button from code behind in the OnApplyTemplate callback.

Thanks, yes I have already been using this workaround (setting after InitializeComponent) for several years; I just wanted to report it so that hopefully it can get fixed eventually. Thank you for that!

I think this is the same bug as #1299, but this one has more info so I'm going to close the other as a dupe.

I ran across this one too. Needs to be addressed ASAP. Radio buttons with implicit groups are broken for use in flyouts and controls.
These types of bugs are a real pain with UWP and need to be addressed quickly in WinUI 3.0 if it isn't going to suffer the same fate.

Has this been addressed in WinUI 3 yet? Would also be good to fix it in system XAML.

It looks like we are going to be able to address the first item in this issue, where setting the radio button IsChecked in XAML markup was getting lost. This was happening because when the initial IsChecked value is set during load of the markup, the control has not yet been added to the XAML tree; consequently, it goes into a group with a null parent. Every radio button on the page follows this same pattern, so as far as the framework is concerned, they are all in the same group and only the last IsChecked is retained. We have a change working through the process that will delay this evaluation until the radio button is actually added to the tree, so the proper parent can be determined.

The second item is more complex because, depending on the needs of the application, we can see both behaviors being desirable. Plus, changing the behavior would be a breaking change. And the proposed API seems like it might be confusing, as we would have multiple properties doing the same thing in most scenarios. And finally, WPF does not support this behavior, which all goes to making this item much more of a feature request and probably something we are not going to get to in the near future. However, if you feel strongly about this (and I can see the advantages of it), it might be helpful to split this off into an actual feature request and attempt to get community support to rally around it to get it added to our priority list.

OK I've added a feature proposal here - https://github.com/microsoft/microsoft-ui-xaml/issues/9297

The issue with being able to set checked boxes in XAML has been addressed and is scheduled for the next preview release of 1.6.
We also spent a fair amount of time trying to provide a solution to the second issue, but were unable to come up with one that didn't involve a significant breaking change or require a new API (and we couldn't come up with a clean definition for one). I think ultimately the correct approach will be to require the explicit specification of a radio button container via an attached property (e.g. RadioButton.IsGroupNameScope="true"). But if we tackle this, it will be down the road, and we can track it with the feature proposal.
Why do I have to wait a certain number of hours before I can delete my own question? I am trying to delete one of my own questions, but this message pops up: "To allow for possible reopening, you may delete in 18 hours". If I asked the question, why can't I delete it at my discretion? Why should I lose that right once it's posted? The question was meant to benefit me and people I know. If it no longer benefits us and even causes risk to us, why should it be allowed that time frame? The best thing I was able to do was edit it to prevent consequences and repercussions to everyone the question would affect, and flag it. Any help answering this is greatly appreciated.

Well, someone migrated the post for you. Stack Exchange is made up of several different sites, each geared toward a specific function. Go to the very bottom of the page and you will see links to all of the various sites. At the top of each page is an FAQ link that explains the site you are on.

For future questions, use an SSCCE and don't tell your life's story in the question.

I am very new to everything and I appreciate that, but are you referring to another question? I think I would have to put more in to be doing that. Personally, I think I just clarified my circumstances and brought up important concerns. I now know how things work at Stack Exchange, but that doesn't make this specific and entire question all fluff and no substance. If you are referring to my old question on Stack Overflow, I was very tired and wrote a lot very fast. I am normally not so verbose. Though I used to be, it has improved considerably over the years. Did I intend to write my life story? No. Did I have a lot I wanted to say? Yeah, in my head. Unfortunately it came out in type and on the site. Not my intention at all, and I immediately realized it upon posting. At least I can own up to my mistakes. I wanted answers that help inform me and others in my situation, not suggestions that border on attacking me. In the future, I'll be better.
amanaP gave you the right approach. I'll try to explain why it's needed. Strictly speaking, you licensed your content to Stack Exchange (as per the Terms of Service, section 3, Subscriber Content). So you don't have a "right" (in the legal sense) to get the content deleted. That doesn't mean that it won't be deleted if you have a good reason, but it means that you can't "demand" it. More importantly: questions and answers on SO (and the entire Stack Exchange network) are not just for the benefit of the asker. In fact, the people who benefit the most are not usually the ones who ask questions. The people who benefit the most are the ones who find the questions others have written (via Google) and can read and profit from the answers that were given. If every asker deleted their question after they got the answer, that value would be severely reduced.

I certainly wasn't demanding it, and if it actually causes harm outside of Stack Overflow in the real world, I don't see any reason they'd want to let it remain.

@JasonSobel: I understand, and I also understand your situation. I just tried to explain (in general) why that rule is in place.

I know questions can benefit other people, and that is one of the greatest benefits of these sites, but at this point it benefits very few people, if anybody, I assure you. I know it's not up to me to decide the validity of that statement, though. Aren't limits relaxed and the process sped up in certain circumstances, though, whatever the end result might be, deletion or not? I would think that's how it works. But I will be patient and accept whatever the consequences are, whether they be good or bad.

The question was meant to benefit me and people I know

That's your problem; it's not what the site exists for. It's not just to help each person answer their own question. The purpose of the site is to benefit the entire world through a repository of useful questions and answers that can be indexed and searched.
The guidelines of the site are designed with that goal in mind, rather than just helping one person get their question answered. To help reach that goal, obstacles are placed in the way of deleting questions. You may not have anything left to gain from the question staying around, but others might, and so the site will work to ensure that valuable content isn't deleted just because you don't need it anymore.

I am perfectly aware of that, but there is literally no benefit to anyone else. That question only harms me and the people I attempted to help. It can affect someone's standing in a class! It has now been heavily edited anyway, to the point that even if there was some small benefit, it is no longer in the updated question.

@JasonSobel The whole point is that it's not up to you to decide whether or not it's of benefit to anyone else. Moderators, or users with over 20k reputation, can vote to delete the post, and users with 10k can vote 2 days after closure, when you can. (It has indeed been deleted now.) I wasn't saying that it was a high quality post that shouldn't be deleted, just that it's not your decision to make; it's the community's.

@JasonSobel Next, there is a revision history of all past edits, so editing a post doesn't remove the information, and even deleted posts can be viewed by mods and high-rep users. Editing your question to remove all valuable content and say it should be deleted is inappropriate, and such edits will be rolled back to previous revisions, as they vandalize the post.

Most of the edits were clarifications to make the question more concise and easier to understand. The original question was unintelligible, hence the heavy down-votes and a surprisingly condescending attitude from one person (most were helpful and understanding). Once I realized there was a problem and was away from my computer, I was no longer editing it.
It certainly was inappropriate for the community, and he had every right to vent his anger in that last edit, even if it was directed at the wrong audience. I am also aware certain people can look, but those I want to prevent cannot.

Most of the edits were clarifications to make the question more concise and easier to understand.

I wasn't referring to those edits; I was referring to the last two edits, where you removed all valuable content and replaced it with "please delete this".

As I already stated, that wasn't me. They were from the person I was trying to help. I was away from my computer when the issue was brought up, and attempting to go through the process with a smartphone was too ineffective. So I gave him access to my account, whereupon he chose to go to extreme measures to ensure deletion and avoid any chance of an issue. I wish he had been a little more proper, remained calm, realized he was probably blowing the issue out of proportion, and researched the process as I am doing, but I can't predict how people will react all the time. Again, I apologize.

First of all, you should not post a question that might cause risk to you. If you want to know what type of question you should ask on the site, you can refer to the FAQ section of any Stack Exchange site (in this case Stack Overflow). The delay is, as explained: "To allow for possible reopening, you may delete in 18 hours". If the question isn't pertinent or doesn't belong here, it won't be reopened, and therefore you can delete it in 18 hours. As you pointed out yourself, you can edit it to prevent damage and possibly flag it to get a moderator's attention. Explain in detail why you want it to be deleted right away, and they might delete it for you before the delay.

I did not realize it might cause risk until it was brought up to me by the person I wanted to help. That was careless on my part for not thinking it through, but I am still new to this site, school guidelines, and programming itself.
I think leeway should be made for beginners and people in such circumstances. I am willing to wait for the process to take its course, but I find it very one-sided. I am not a spoiled brat who wants it his way or the highway. I am not demanding. I simply want an explanation and a quick resolution to all of this. I'll certainly learn from the experience if nothing else.

@JasonSobel You can always try the chat section if you have a question like this and need immediate attention. But I understand your concern. Although I don't remember how much rep you need to access the chat.

The other answers cover why this timeout is in place, but you have two avenues to get personal data deleted:
- Flag the post for moderator attention - explain what exactly you want removed.
- Contact Stack Exchange directly - https://stackoverflow.com/help/other

As the moderator team is made of volunteers and the site users far outnumber the staff, you might need to wait until the 18 hours have passed, so an appeal should be for something really personal or clearly in error. In either case, explain what happened and they might assist in deleting it for you.

There was no personal data on the post; he simply posted a homework question and didn't want the professor finding out that he got help online doing the work.
When it comes to offering data services, selling data assets, and entering into partnerships and data-sharing agreements, agreeing on the value of data is critical. What are the methods to discover the value of data? What are the strategic implications? As with company valuation, several methods can be employed; they differ, and each reflects a different perspective on value. Also as with company valuation, it's the agreement of a buyer and a seller that sets the value, and sometimes methods are used as bargaining tools for setting a price on a difficult-to-value asset. However, the growth of data as an asset class has triggered efforts to harmonize practices and institutionalize the way data value is captured, in accounting for example.

Why is it difficult to assess the value of data?

In economics, one stream of theories sees value as the cumulative effort (human and financial) to produce a good or a service. Another stream defines value as a function of utility, whatever the effort required. The first is based on the supply side of the market (how much does it cost?) and the second on the demand side (how much are clients willing to pay?). From a particular angle, data is no exception to these two approaches: it requires some effort to be produced and accessed (sensors, connectivity, storage, analytics software, ...) and it can be assigned a utility: making a decision, automating tasks, ... However, data has specific characteristics which sometimes limit the application of the traditional valuation methods:

- data is a non-rivalrous good: consumption of data by one user does not prevent others from using the same data. Data can be used by several entities at the same time.
- data is non-depletable: usage of data does not lead to scarcity.
- data value is time-dependent, but not as a linear function: real-time data is valuable as it generates immediate insights.
However, accumulating it over a long period of time is also valuable, as it provides long-term insights.
- data value increases when combined with other data: combining one type of data with new information can lead to insights that cannot be extracted from a single source.
- data value increases when it's transformed: cleaning, preparation and aggregation operations increase the value of a dataset

The 4 approaches to assess the value of data

Given the specificities we just mentioned, it's clear that no single standard has emerged. Value is agreed upon on a case-by-case basis, and the methods we present capture the different cost and benefit approaches used by parties when entering into a trade.
- cost-based: the value comes from the efforts required to create, store, analyse and transport data. It can be measured by computing the real historical costs or estimating the costs to rebuild or acquire the data.
- utility-based: the value comes from the cash flow generated when using the data, for example the discounted future cost reductions associated with lower return ratios in e-commerce.
- market: the value comes either from a market price already accepted (e.g. the price of an email address is now pretty standard) or from a balance between the willingness-to-pay of the buyer and the willingness-to-accept of the seller.
- externalities: the value is based on the externalities created by the use of data. For example, traffic reduction in a city associated with mobility data.

What does it mean for strategy making

As noted above, the question is not about deciding which method surpasses the others. However, the growing topic of data valuation raises interesting strategic questions:
- As the value of data becomes more institutionalized, datasets are becoming valuable assets to manage with the same discipline as other asset classes.
- As datasets grow bigger, the opportunities to trade them increase; hence the argumentation on value becomes as important as it is for mergers and acquisitions. Developing capabilities around data valuation sets the ground for competitive advantage.
- Markets are institutions, and companies which (individually or collectively) define a standard approach in their environment will have an opportunity to set the terms in their own interest.
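To make the utility-based method concrete, here is a minimal sketch of a discounted cash-flow calculation applied to a dataset. The function name, the cash flows and the discount rate are illustrative assumptions, not figures from the article:

```python
# Hypothetical sketch of the utility-based approach: value a dataset as the
# discounted sum of the yearly cost reductions it is expected to generate.
# All figures and the discount rate below are illustrative assumptions.

def discounted_data_value(yearly_benefits, discount_rate):
    """Net present value of the cash flows attributed to using the data."""
    return sum(
        benefit / (1 + discount_rate) ** year
        for year, benefit in enumerate(yearly_benefits, start=1)
    )

# e.g. an e-commerce dataset expected to cut return-handling costs
# by 100k, 80k and 60k over the next three years, discounted at 10%
value = discounted_data_value([100_000, 80_000, 60_000], 0.10)
print(round(value, 2))
```

The same skeleton applies to the cost-based method by discounting rebuild or acquisition costs instead of expected benefits.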
❤️ It's great to see you here! I'm currently available on evenings and weekends for consulting and freelance work. Let's chat about how I can help you achieve your goals.

OK, so that's maybe not 100% accurate, but we are looking for students who want to get paid to work with us. How can you get paid to work on MetaCPAN? There are currently two really great options: the Outreach Program for Women (OPfW) and the Google Summer of Code (GSoC).

What MetaCPAN needs from you is help spreading the word to interested students who may want to participate. Here's the pitch:

There are lots of things you can learn by working on the MetaCPAN stack. Our stack includes Catalyst, Plack, ElasticSearch, jQuery, Bootstrap and nginx. We also use Puppet for deployment and Vagrant + VirtualBox for development VMs. We have integration with Twitter, GitHub, PAUSE, Facebook and Google. You will have help and guidance in working with these technologies from people who are quite familiar with them. Your code will deploy on robust hardware (think 30+ GB of RAM). Your code will often deploy within hours or even minutes of being submitted. Your work will immediately be put to the test by our many users on our very busy services. You'll gain experience in NoSQL (ElasticSearch), git, and in participating in an Open Source project which functions as a highly available web service. You will gain experience not just in writing code, but in participating in the full cycle of code deployment, skills which are quite valuable in the real world.

More info on how to get involved is listed on our development page. Maybe you're thinking "So, get to the important stuff already. How much does it pay?" Well, OPfW pays $5,500 and I'm under the impression that GSoC will pay the same amount this year. Both programs are being run in parallel. If your gender identification is in line with the requirements of OPfW then you are actually encouraged to apply for both programs.
The important thing to remember is that, at the end of the day, gender will not limit you from participating. There's a program for everybody. The deadlines for both programs are rapidly approaching. OPfW has an application deadline of March 19, 2014 and GSoC has an application window of March 10 - 21. (Don't be fooled if the Google page appears to be blank at the top. Someone should get a GSoC slot just to fix the google-melange site.) However, please don't take my word for anything. Do go to the respective sites to double-check dates, requirements etc.

Essentially, what you need to do in order to apply for either of these programs is to get involved in the MetaCPAN project. Send us your resume/CV so that we can get to know you a little better. Then, with our guidance, you can start shipping some code. (This is actually quite easy.) Once you've become formally involved with the project, we can work with you on your application. There's not a lot of time, though, so let's get started on those pull requests!

We want students to leave this experience with some real-world skills, but we'd also like for students to love the project so much that they continue contributing after this summer. We have lots of interesting problems to solve. Please help us find the right students to continue moving MetaCPAN forward. If you know someone who might be interested, please share this link. My contact info is listed at https://metacpan.org/author/OALDERS, but essentially anyone can get help getting started by joining #metacpan on irc.perl.org and just asking for help. The channel is very friendly and there's generally someone around who can offer some guidance.
In the previous parts, we saw an introduction to ClickHouse and its features, and we learned about its different table-engine families and their most-used members. In this part, I will walk through the special keys and indexes in ClickHouse, which can help reduce query latency and database load significantly. Note that these concepts only apply to the default table-engine family: MergeTree.

ClickHouse indexes are based on sparse indexing, an alternative to the B-tree index utilized by traditional DBMSs. In a B-tree, every row is indexed, which is suitable for locating and updating a single row - the point queries common in OLTP workloads. This comes at the cost of poor performance on high-volume inserts and high memory and storage consumption. The sparse index, on the contrary, splits data into multiple parts, each grouped into fixed-size blocks of rows called granules. ClickHouse keeps an index entry for every granule (group of rows) instead of every row, and that's where the term sparse index comes from. Given a query filtered on the primary keys, ClickHouse looks for the matching granules and loads them into memory in parallel. That brings notable performance gains on the range queries common in OLAP workloads. Additionally, as data is stored by column in multiple files, it can be compressed, resulting in much less storage consumption. The sparse index is based on LSM trees, which allow high-volume inserts per second. All of this comes at the cost of being unsuitable for point queries, which is not what ClickHouse is designed for.

In the figure below, we can see how ClickHouse stores data:
- Data is split into multiple parts (by the ClickHouse default or a user-defined partition key)
- Parts are split into granules, which are a logical concept: ClickHouse doesn't physically split the data into them. Instead, it can locate the granules via the marks.
- Granules' locations (start and end) are defined in the mark files (with the .mrk extension).
- Index values are stored in the primary.idx file, which contains one row per granule.
- Columns are stored as compressed blocks in .bin files: one file for every column in the Wide format and a single file for all columns in the Compact format. Whether a part is Wide or Compact is determined by ClickHouse based on the size of the columns.

Now let's see how ClickHouse finds the matching rows using primary keys:
- ClickHouse finds the matching granule marks using the primary.idx file via binary search.
- It looks into the mark files to find the granules' locations in the .bin files.
- It loads the matching granules from the .bin files into memory in parallel and looks for the matching rows in those granules using binary search.

To clarify the flow mentioned above, let's create a table and insert data into it:

CREATE TABLE default.projects ( `project_id` UInt32, `name` String, `created_date` Date ) ENGINE = MergeTree ORDER BY (project_id, created_date)

INSERT INTO projects SELECT * FROM generateRandom('project_id Int32, name String, created_date Date', 10, 10, 1) LIMIT 10000000;

First, if you don't specify primary keys separately, ClickHouse will consider the sort keys (the ORDER BY columns) as primary keys. Hence, in this table, project_id and created_date are the primary keys. Every time you insert data into this table, it will sort the data first by project_id and then by created_date.

If we look at the data structure stored on disk, we see five parts; one of them is all_1_1_0. You can visit this link if you're curious about the naming convention. As you can see, columns are stored in .bin files, and the mark files sit alongside the primary.idx file.

Now let's filter on project_id, which is the first primary key, and EXPLAIN its indexes. As you can see, the system has detected project_id as a primary key and ruled out 1224 granules out of 1225 using it!
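The lookup flow above can be sketched in a few lines. This is a toy model I wrote to illustrate the idea, not ClickHouse's actual implementation: a sorted key column, one index mark per granule, and a binary search over the marks to decide which granules to scan:

```python
import bisect

# Toy model of ClickHouse's sparse index (illustration only, not the real
# implementation). The table is sorted by the primary key, and only the
# first key of each granule is kept in primary.idx.
GRANULE_SIZE = 8192          # ClickHouse's default index_granularity
N_ROWS = 1_000_000           # sorted key column: 0, 1, ..., 999_999

# One index mark per granule: the first key stored in that granule.
index_marks = list(range(0, N_ROWS, GRANULE_SIZE))

def candidate_granules(key):
    """Granules that may contain `key`, found by binary search on the marks."""
    # With a strictly sorted, duplicate-free key column, at most one granule
    # can contain the key: the last one whose first mark is <= key.
    g = bisect.bisect_right(index_marks, key) - 1
    return [g] if 0 <= g and key < N_ROWS else []

print(f"scan {len(candidate_granules(700))} of {len(index_marks)} granules")
```

The point of the sketch is the ratio: a point filter on the first primary key touches one granule out of 123 instead of the whole column, which mirrors the 1-of-1225 result in the EXPLAIN output above.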
What if we filter on created_date, the second primary key?

EXPLAIN indexes=1 SELECT * FROM projects WHERE created_date=today()

The database has detected created_date as a primary key, but it hasn't been able to filter out any granules. Why? Because ClickHouse uses binary search only for the first key and a generic exclusion search for the other keys, which is much less efficient than the former. So how can we make it more efficient? If we move created_date to the front of the sort keys when defining the table, we achieve better results when filtering on the non-first keys, since created_date has lower cardinality (fewer unique values) than project_id.

CREATE TABLE default.projects ( `project_id` UInt32, `name` String, `created_date` Date ) ENGINE = MergeTree ORDER BY (created_date, project_id)

EXPLAIN indexes=1 SELECT * FROM projects WHERE project_id=700

If we filter on project_id, now the second key, ClickHouse uses only 909 granules instead of the whole table. So, to summarize: always try to order the primary keys from low to high cardinality.

I mentioned earlier that if you don't specify the PRIMARY KEY option, ClickHouse considers the sort keys as the primary keys. However, if you want to set primary keys separately, they must be a prefix of the sort keys. As a result, additional columns specified in the sort keys are only utilized for sorting purposes and don't play any role in indexing.

CREATE TABLE default.projects ( `project_id` UInt32, `name` String, `created_date` Date ) ENGINE = MergeTree PRIMARY KEY (created_date, project_id) ORDER BY (created_date, project_id, name)

In this example, the created_date and project_id columns are utilized in the sparse index and for sorting, and the name column is used only as the last sorting item. Use this option if you often order query results by such a column, since it eliminates the database's sorting effort at query time.

A partition is a logical grouping of parts in ClickHouse. By default, it considers all parts to be under one unspecified partition.
To find out more, look into the system.parts table for the projects table defined in the previous section:

SELECT name, partition FROM system.parts WHERE table = 'projects';

You can see that the projects table has no particular partition. However, you can customize it using the PARTITION BY option:

CREATE TABLE default.projects_partitioned ( `project_id` UInt32, `name` String, `created_date` Date ) ENGINE = MergeTree PARTITION BY toYYYYMM(created_date) PRIMARY KEY (created_date, project_id) ORDER BY (created_date, project_id, name)

In the above table, ClickHouse partitions the data based on the month of created_date. ClickHouse creates a min-max index for the partition key and uses it as the first filter layer when running a query. Let's see what happens when we filter the data by a column present in the partition key:

EXPLAIN indexes=1 SELECT * FROM projects_partitioned WHERE created_date='2020-02-01'

You can see that the database has chosen one part out of 16 using the min-max index of the partition key.

Partitioning in ClickHouse aims to bring data manipulation capabilities to the table. For instance, you can delete or move parts belonging to partitions older than a year. This is far more efficient than on an unpartitioned table, since ClickHouse has physically split the data by month on storage; consequently, such operations can be performed easily. Although ClickHouse creates an additional index for the partition key, it should never be considered a query performance improvement method, because it loses the performance battle to defining the column in the sort keys. So if you wish to enhance query performance, put those columns in the sort keys, and use a column as the partition key only if you have particular plans for data manipulation based on that column. Finally, don't confuse partitions in ClickHouse with the same term in distributed systems, where data is split across different nodes.
You should use shards and distributed tables if you want to achieve that.

You may have recognized that defining a column among the last items of the sort key is not very helpful, especially if you filter only on that column without the leading sort keys. What should you do in those cases? Consider a dictionary you want to read. You can find words using the table of contents, sorted alphabetically; those entries are the table's sort keys. You can easily find a word starting with W, but how can you find the pages containing words related to wars? You can put marks or sticky notes on those pages, reducing your effort next time. That's how a skip index works: it helps the database skip granules that cannot contain the desired values of a column by creating an additional index.

Consider the projects table defined in the Order By section, where created_date and project_id were defined as primary keys. If we filter on the name column, we'll encounter this:

EXPLAIN indexes=1 SELECT * FROM projects WHERE name='hamed'

The result was expected. Now what if we define a skip index on it?

ALTER TABLE projects ADD INDEX name_index name TYPE bloom_filter GRANULARITY 1;

The above command creates a skip index on the name column. I've used the bloom filter type because the column is a string. You can find more about the other kinds here. This command only builds the index for new data. To build it for already-inserted data, you can use this:

ALTER TABLE projects MATERIALIZE INDEX name_index;

Let's see the query analysis this time. As you can see, the skip index greatly improved granule pruning and performance. While the skip index performed efficiently in this example, it can show poor performance in other cases. It depends on the correlation between your specified column and the sort keys, and on settings like the index granularity and its type.

In conclusion, understanding and utilizing ClickHouse's primary keys, order keys, partition keys, and skip indexes is crucial for optimizing query performance and scalability.
Choosing appropriate primary keys, order keys, and partitioning strategies can enhance data distribution, improve query execution speed, and prevent overloading. Additionally, leveraging the skip index feature intelligently helps minimize disk I/O and reduce query execution time. By considering these factors in your ClickHouse schema design, you can unlock the full potential of ClickHouse for efficient and performant data solutions.
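To illustrate how a bloom_filter skip index can rule out granules, here is a minimal bloom filter sketch. This is my own simplified illustration, not ClickHouse's implementation; the bit-array size and hash count are arbitrary:

```python
import hashlib

class BloomFilter:
    """Minimal bloom filter: no false negatives, occasional false positives."""
    def __init__(self, size=1024, hashes=3):
        self.size, self.hashes, self.bits = size, hashes, 0

    def _positions(self, item):
        # Derive several bit positions per item from salted SHA-256 digests.
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def might_contain(self, item):
        # False -> definitely absent (skip the granule); True -> must scan it.
        return all(self.bits >> p & 1 for p in self._positions(item))

# One filter per granule: the engine skips any granule whose filter
# answers "definitely not" for the filtered value.
granule = BloomFilter()
for name in ["alice", "bob", "carol"]:
    granule.add(name)

print(granule.might_contain("alice"))   # present: always True
print(granule.might_contain("hamed"))   # absent: almost certainly False
```

The "no false negatives" property is what makes this safe as an index: a granule is only skipped when the value provably cannot be there, while rare false positives merely cause an unnecessary scan.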
Moved from WillOpenSourceUndermineTheAmericanEconomy

The Economist Magazine: http://economist.com/business/displayStory.cfm?Story_ID=620445 published the following observation about the security of open source software: "Most important, it tends to be more robust and secure, because the source code can be scrutinised by anyone, which makes it more likely that programming errors and security holes will be found."

The point they are making is that for software systems, security has everything to do with peer review. Peer review is how all scientific and engineering communities improve the quality of their work. BruceSchneier wrote: "Good cryptographers know that nothing substitutes for extensive peer review and years of analysis." Schneier's own algorithms (like Blowfish and Twofish) are open for peer review.

Several security experts recently discussed and refined a summary statement of rationale on OpenSourceSecurityStrategy, for circulation at the "Open Source Lab" featured at the 2003 Government Technology Conference and Exhibition http://www.gtecweek.com in Ottawa, Canada.

Following are some additional points...

Cryptography and system security are two totally separate issues. In cryptography you can protect security algorithmically. In system software, you must anticipate what people who write applications will do, which is always going to be a losing battle against new security holes. Arguably the most secure OS is MVS, which along with having really tight system security is also rather unappealing to hackers and is fading rapidly into obscurity.

The point I was trying to make is that the article implies that having security holes found means that the platform is more secure. If no one finds the security hole, then is there a security hole? If someone does find a security hole and broadcasts it, but most system admins don't care, then there is actually a loss of security for the platform as a whole.
--TimBurns

The problem with that point of view is that it ignores the fact that the black-hat crackers are motivated to find security flaws, either for tangible gain or just to prove how "l33t" they are. MSFT software is a huge target for folks who just want to find the holes for that reason, and they continue to bang away at it and every new version and extension. But they will not have any interest or motivation in notifying you, the IT manager, that there is a new exploit for the software you are running; they'll just use the exploit to make your life more hellish than it already is. So the white hats, those hackers who get paid to sift through closed-source systems to find the holes and then tell the people who care about securing them, are your only defense, and at any given time you never know if the white hats have found all the same exploits the black hats know about and are actively exploiting. On the other hand, with OpenSource, the white hats and the black hats are on an equal footing from the start. In fact, any developer who is familiar with how to write secure code can satisfy herself that the program is clean, without needing to get into the black arts of decompiling, reverse-engineering and the other convolutions necessary to root out the holes in closed-source executables. -- StevenNewton

(Is this related to TestsCantProveTheAbsenceOfBugs?)

has extremely good security, from a combination of OpenSource code reviews. Just enabling peer review isn't enough; you have to make sure that someone actually does it. Indeed, the "more eyeballs make all bugs shallow" take on peer review is only partly true for security. It depends on what your peers are looking for. In open source projects, code is seen by many people. But they only look at a small part, or only to hack in a new feature. Very little of the 'peer review' is security related (and of quality). -- PieterVerbaarschott

It is incorrect IMHO to try to analyze security out of social context.
Most systems can be hacked in some manner (look at military history). The comment that "security by obscurity" is no security at all doesn't hold water - it all depends on who the hacker is. Also, AFAIK there is no OS/Server software in common usage which has not had several security holes in it due to coding errors, either open source or not. And since a miss is as good as a mile (i.e. it only takes one bug to be compromised) all systems are potentially insecure. It would be unwise to rely on any single system for security. -- PeterForeman See also OpenSourceIsLessSecure
Encryption… it's one of the oldest methodologies, dating back thousands of years to the time of the Greeks. With advances in engineering and technology in the 21st century, and with larger and more powerful supercomputers, there is a need for stronger encryption algorithms, with a more mathematical approach to the subject than a purely digital one. Elliptic Curve Cryptography is the most recent advancement in this field. The topic of encryption as a whole is too vast to cover and is out of the scope of this article; I will leave it to the reader to do further reading on the latest encryption methodologies.

Encryption in SQL Server

Encryption in SQL Server comes in two forms:
- TDE (Transparent Data Encryption)
- Column-level encryption

TDE encrypts the entire database, including the database backups, whereas cell-level encryption works at a far more granular level, i.e. the column level. Both methods are primarily certificate based. In this article we will focus on cell-level (column-level) encryption, where you may want to, for example, encrypt a column in a SQL Server table that stores credit card numbers, SSNs or other types of sensitive data. The biggest advantage of column-level encryption is that you don't have to develop your own encryption algorithm to apply it; on the flip side, the biggest disadvantage is that the column's data type must be changed to varbinary irrespective of the source data type, and this change hurts performance when querying the encrypted column. In SQL Server, encryption of the data occurs at the page level on disk; however, once these pages are moved into the memory buffer pool they are decrypted and held as clear text. The supported algorithms for column-level encryption and TDE are AES with 128-, 192- and 256-bit keys, and three-key Triple DES.
Implementing column-level encryption in SQL Server is a simple four-step process:
- Create a master key
- Create a certificate
- Create a symmetric key and secure it with the certificate created earlier
- Encrypt the column with the symmetric key

In the following example we will encrypt the single value 1 using the above steps, with the built-in AES_256 encryption algorithm.

1. Create a master key with a password:

CREATE MASTER KEY ENCRYPTION BY PASSWORD='#123$'

2. Create a certificate with a subject:

CREATE CERTIFICATE EncryptCer WITH SUBJECT='This is my encryption'

3. Create a symmetric key and secure it with the certificate created earlier, using the AES_256 encryption algorithm:

CREATE SYMMETRIC KEY EncryptKey WITH ALGORITHM=AES_256 ENCRYPTION BY CERTIFICATE EncryptCer

Note: You can query the sys.certificates and sys.symmetric_keys system catalogs to verify the creation of the certificate and symmetric key.

4. Open the symmetric key using the certificate:

OPEN SYMMETRIC KEY EncryptKey DECRYPTION BY CERTIFICATE EncryptCer

Once the above four steps are done, use the ENCRYPTBYKEY function, which takes the GUID of the symmetric key (obtained with KEY_GUID, for the key created in step 3) and the value to be encrypted. In this case we are encrypting the value 1:

DECLARE @MyVar UNIQUEIDENTIFIER SET @MyVar=KEY_GUID('EncryptKey') SELECT ENCRYPTBYKEY(@MyVar, CONVERT(VARBINARY(256),1))

The output of the above command is a strongly encrypted value. Decryption is done using the built-in DECRYPTBYKEY function, which takes the ciphertext produced with the key; the result is then converted back to the original data type of the encrypted value.

SELECT CONVERT(INT, DECRYPTBYKEY(ENCRYPTBYKEY(KEY_GUID('EncryptKey'), CONVERT(VARBINARY(256),1))))

The output of the above command is 1, which is the original value that was encrypted.
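The encrypt-then-decrypt roundtrip above can be modeled outside SQL Server with a toy symmetric cipher. The sketch below uses a keyed XOR stream purely as an illustration; it is nothing like AES_256 and must not be used for real security:

```python
import hashlib
from itertools import cycle

# Toy symmetric cipher (illustration only -- NOT real cryptography, and
# nothing like SQL Server's AES_256). It only demonstrates the roundtrip
# property: decrypt(encrypt(x)) == x when both sides use the same key.

def keystream(key: bytes):
    # Repeat a key-derived byte sequence indefinitely.
    return cycle(hashlib.sha256(key).digest())

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so the same function encrypts and decrypts.
    return bytes(b ^ k for b, k in zip(data, keystream(key)))

key = b"EncryptKey"
plaintext = (1).to_bytes(4, "little")       # the value 1, as in the example
ciphertext = xor_crypt(plaintext, key)

assert ciphertext != plaintext              # stored form is unreadable
recovered = int.from_bytes(xor_crypt(ciphertext, key), "little")
print(recovered)  # 1
```

The shape mirrors the T-SQL flow: the same key object drives both ENCRYPTBYKEY and DECRYPTBYKEY, and the final CONVERT back to INT corresponds to the int.from_bytes step.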
Always make sure that the symmetric key is closed. It can be done using the following command:

CLOSE SYMMETRIC KEY EncryptKey

In a production environment you would not want to grant rights on the encryption certificates to every user in the database. To grant full rights on the key and certificate to a privileged user, use the following commands:

GRANT CONTROL ON CERTIFICATE::[EncryptCer] TO [User] GRANT VIEW DEFINITION ON SYMMETRIC KEY::[EncryptKey] TO [User]

Performance takes a big hit on tables that store data in encrypted form. For example, if a column stores SSNs and has a clustered index on it, querying this column will return data quite quickly thanks to the clustered index (provided there is not much fragmentation on it). But if you want to encrypt that column, its data type has to be changed to varbinary, which invalidates the index. A simple query on the column would change from this:

SELECT * FROM YourTable WHERE SSN='XXX-XX-XXXX'

to this:

SELECT * FROM YourTable WHERE CONVERT(NVARCHAR(30), DECRYPTBYKEY(SSN))='XXX-XX-XXXX'

In one performance test, a degradation of as much as 20% was observed on a very simple query executed against an encrypted column. Performance scaled inversely with increased workload and became even worse when the whole database was encrypted. One way to get past the performance issue is to create an additional computed column that stores hash values for the encrypted column and, while querying, use the CHECKSUM function to compare the input value against that computed column to return the matching result sets. Cell-level encryption offers a rather more granular approach than TDE because it doesn't incur the overhead of maintaining security at the database level.
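The hash-column workaround described above can be sketched outside SQL Server. The following is an illustrative Python model with hypothetical helpers (lookup_hash, fake_encrypt); SHA-256 stands in for CHECKSUM, and fake_encrypt is a placeholder for the real cell encryption:

```python
import hashlib

def lookup_hash(value: str) -> bytes:
    """Deterministic digest stored in the computed column (CHECKSUM stand-in)."""
    return hashlib.sha256(value.encode()).digest()

def fake_encrypt(value: str) -> bytes:
    # Placeholder for real cell encryption -- NOT cryptography.
    return value.encode()[::-1]

# Toy table: each row keeps the ciphertext plus a hash of the plaintext.
table = []
for ssn in ["123-45-6789", "987-65-4321"]:
    table.append({"ssn_enc": fake_encrypt(ssn), "ssn_hash": lookup_hash(ssn)})

# Query path: filter on the (indexable) hash column first, then decrypt only
# the few matching rows to confirm -- instead of decrypting every row.
target = lookup_hash("123-45-6789")
matches = [row for row in table if row["ssn_hash"] == target]
print(len(matches))  # 1
```

The design trade-off: the hash column is deterministic, so equality lookups stay indexable, but that same determinism leaks whether two rows hold the same plaintext, which is why this technique suits low-sensitivity equality filters rather than range queries.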
If performance is of secondary importance, then SQL Server's built-in cell-level encryption is one of the best choices and can be enforced at a much broader scope. Additionally, cell-level encryption is available in all editions of SQL Server, whereas TDE is available only in the Enterprise and Developer editions.

Sachin Nandanwar, Senior SQL Developer – Avitas Technologies
This complete, helpful guide that I put together may have your security cameras up and running in no time!

By this time, the dream of a computer operated by a single person was closer than many had anticipated. For example, an operator enters data via the keyboard, and the processor manipulates the data for display or storage, depending on the intended needs and/or uses. Thin client networking is about using a computer to access and run files, applications, and the operating system off a server instead of on your actual computer. It involves social issues, such as access rights, workplace monitoring, censorship and spam; professional issues such as professional accountability and codes of conduct; and legal issues such as legal obligations, data protection, computer misuse and software piracy. Predating USB, these two schemes were initially designed to support greater flexibility in adapting hard disk drives to a variety of different computer makers. There is no comparison when it comes to zooming in for the headshot, and all of a sudden the cable snags on the corner of your monitor stand. However, to start with, let us take a look at the components that we will require to assemble a computer. Adding more RAM, upgrading the CPU, video card or motherboard, and switching to a solid state drive or faster hard drive for your Windows or Linux operating system drive will make your computer faster. Windows XP Pro, Windows XP …

In this hub, computer hardware components explained, we're going to take a look at some of the hardware components that make up a computer. Usually, when frozen, the computer will not let you do anything for a few seconds and then it will resume. The minor resistance from the cord is noticeable, and you can pretty much move it anywhere; I have my computer under the desk, and there are a ton of cables in the way.
That is to say, computer hardware cannot function without software, and neither can software operate without hardware. Or just the resistance of the cable even when only rubbing against the corner of your desk. When I bring up the slow movement of technology into schools, people like to bring up successes like Khan Academy. After you clear the computer you need to drive up to one hundred miles to give it a chance to monitor all of the sensors and register the results. You can look up the beep codes for your particular computer to determine its specific problem. The parts of the computer that are used to store information in whatever form are classified as storage devices. Common complaints found among children obsessed with games are eye strain; wrist, neck and back pains; and so forth. When I try to switch on my computer and press the power button, the monitor does not give any sign of switching on. It simply blacks out and doesn't …

Whether you're looking for a laptop, tablet or desktop computer, you will find a range of computing technology to suit every budget. I always liked the R.U.S.E. games although they often got less praise than they deserved (like many games before them). Learn about activities and events to increase interest and knowledge in computer science for your students and entire school. I was working for a personal computer retailer in the early 80s when the first Mac was released. I have turned my old desktop computer into a file server using FreeNAS, to back up my home laptop and gaming/work/Squidoo PC, and have written a guide to building a file server on my new website, Build My Own Computer. A smartboard also ships with an electronic pen and eraser which can be used to input, edit and erase graphics, and the final work can be saved onto the computer that is connected to it.
When you withdraw money from an ATM, scan groceries at the store, or use a calculator, you're using a type of computer. High-fidelity sound systems are another example of output devices usually classified as computer peripherals. Consumer Reports' computer reviews will give you honest buying advice that you can trust. A computer is a sophisticated electronic device that takes raw data as input from the user, processes those data under the control of a set of instructions (referred to as a program), gives the result (output) and saves it …

Guidance and expertise for the computer science degree program is supplied by lecturers from top universities and business leaders from global companies, who make up our Business Administration Dean's Office and advisory board to create a high-quality, competitive degree program. The only value I see in gut feelings in science is that they can provide the motivation and the direction to make advances. I thought to myself: televisions built today are also thin and they have great audio with built-in speakers. Notable milestones in computerization included production of the Ferranti Mark I (1948), the first commercially produced digital computer. Used with ViewSonic's LED backlighting technology, this monitor is able to produce more accurate grays and blacks. In one way or another it set the tone for what the future computer was to look like. When I try to switch on my computer and press the power button, the monitor doesn't give any sign of switching on. It just blacks out and doesn't display anything at all. Along with large computer systems for institutions, there are personal computers, laptops, notebooks and cellphones with computer capabilities numbering in the hundreds of millions. This means that your car doesn't support that status monitor and you don't need to be concerned about it.
There are also computer vacuum cleaners and blowers which are designed to blow or vacuum the dust out of your computer without damaging it the way an ordinary vacuum would. The Impact Factor measures the average number of citations received in a particular year by papers published in the journal during the two preceding years. Total Miner (Total Miner: Forge) is published by Greenstone Games and was released in late 2011 on Xbox Live (indie game section). A hybrid computer is a combination of both analog and digital computers, i.e. part of the processing is done on an analog computer and part on a digital computer. Those who play games on PCs and require screens with response times of 5 ms or better will be glad to know that most modern HDTVs are now in that range. Arithmetic games, meanwhile, stimulated brain activity in both the left and right hemispheres of the frontal lobe. The built-in high-quality speakers in an HDTV eliminated the need for having another item on the desk. Reprinted with permission of the Department of Computer Science, University of Manchester, Eng. I recommend you find a computer which has 512 MB of RAM, a 1 GHz CPU and a 20 GB or larger hard drive if you choose to run Windows XP or Ubuntu, since they both can use a lot of RAM and CPU. (Over time, the programs you install on your computer and the files you make or download will eat up a lot of the free space on your hard drive, so you should at least get a 20 GB drive or use a USB drive to store your files.)
Techtronic has been replaced by Not Just Gaming (http://notjustgaming.com). NJG features all of the great posts from Techtronic, as well as articles, blogs, community, social profiles, forums, downloads, videos, and more! My thoughts: Less and less do I find myself paying any attention (or money) to developers' latest projects, as I feel I have wasted enough interest and money already on poor technologies. But this from Tweak Guides: Microsoft has officially announced details of DirectX 11.0, the successor to the current DirectX 10 API. DirectX 11.0 will be backwards compatible with DX 10/10.1 hardware, will be Vista-only, and will allow for a range of additional features including support for tessellation, multi-threaded resource handling improvements and use of the GPU as a parallel processor. A release date has not yet been provided. At yesterday's Wedbush Morgan Securities conference, Atari founder Nolan Bushnell claimed that a stealth encryption chip will "absolutely stop piracy of [PC] gameplay." "There is a stealth encryption chip called a TPM that is going on the motherboards of most of the computers that are coming out now," explained Bushnell, according to a GamesIndustry report. "What that says is that in the games business we will be able to encrypt with an absolutely verifiable private key in the encryption world–which is uncrackable by people on the internet and by giving away passwords–which will allow for a huge market to develop in some of the areas where piracy has been a real problem." Piracy has been a hot-button issue in the PC gaming industry for some time now, with renowned PC developers such as Crytek, id, and Epic claiming that the high rate of pirated PC software forced them to put games on other platforms. Read more about this story at ShackNews. In general, the world is evolving, and leading developers are opening up to small-time developers big time in the midst of the recession. The web and applications are becoming more open source every day.
On February 5, MySpace will open its system to developers so that they can begin building applications (similar to Facebook applications). MySpace intends to offer advertising-revenue sharing to developers while avoiding the feed/request pollution that Facebook has. If you want to write apps for MySpace, you can pre-register on their developer site now. Valve reveals Steamworks for developers. Steamworks will provide game services, development tools, and retail backend services for free. Steamworks can be used whether a game is distributed digitally via Steam or in traditional retail stores. This will open up revenue share for amateur developers wishing to distribute their games and applications via the Steam network.
I am fairly new to electronics/hardware and have a few questions to further my understanding of how the emonTx works at the circuit level. The schematic I am looking at, from the emonTx GitHub, is below for reference. Each input for CTs 1-3 has two 22 Ohm burden resistors placed in parallel. Is this to account for the power dissipation ratings of the resistors, or is it perhaps to help the accuracy or noise of the voltage signal in some way? The ADC inputs for the CTs and the AC-AC adapter all have diodes which, if I understand correctly, are there to protect the ADC by providing a path to ground if the voltage rises above the diode breakdown voltage. Are the 1K resistors (R19, R20, R25, R26, R28) placed before each ADC in order to prevent a short circuit in case of this breakdown, or do they serve another purpose? And under normal operation, can we assume that the ADC inputs draw little enough current that the voltage drop these resistors would introduce is negligible for measurement purposes? Why do we need cap C17 in the AC voltage measurement section? I know theoretically that a capacitor acts like a short circuit when an instantaneous voltage change is applied, and in this case we are expecting AC input, so the voltage is fluctuating consistently. Does this mean we assume that C17 acts mostly like a short circuit? The Power section notes "for AC-DC circuit 20mA max". What sets this limit? Is that the max draw from the MCP1754 LDO? Any tips/explanations are very much appreciated! Thanks! I'll have a crack at the first two: 1) A true CT outputs a current proportional to the primary current it's wrapped around. In order to turn that into a voltage that you can measure with an ADC (or voltmeter) you need to put it across a burden resistor. Some "CTs" have an internal burden resistor so they output a voltage proportional to the primary current, but the ones used here don't, so the burden resistor is installed on the emonTx.
If you open-circuit the output of a true CT (one without an internal burden) then it will still try to maintain the current it's expected to, but into an open circuit, which results in very high (potentially dangerous) voltages if there is no other protection mechanism. That's why it's always recommended that you open up the CT and remove it from the primary cable (or ensure there is no current flowing in the primary cable) before you unplug the CT output from its burden R. 2) The diodes look to me as though they're trying to add protection in the case where the ADC input swings negative (i.e. below GND). The 1K series resistor limits the current into the ADC. Normally that current is used to charge a small internal sample-and-hold capacitor. The amount of source impedance determines whether or not that cap gets charged in time before the conversion starts. The AVR ADC inputs are designed for signals with a source impedance of less than 10K, so 1K is plenty low enough not to impact normal operation, although you'd have to add it to all the other series impedances in the signal. You can read more details about the ADC input circuitry in the Atmel datasheet. But when things go wrong (too big a swing out of the CT) it's possible for the ADC input to exceed Vcc (or drop below GND). The AVR has internal protection diodes that start conducting at Vcc+0.5V (from memory) but they're only good for 1mA. The 1K resistor, in conjunction with the other series impedances, helps to limit that current during such a fault condition. I read the first question to be asking something different, so to add to dBC's info: not actually true, two are shown on the schematic and there is a place for a through-hole resistor (of any viable value) on the board, but only the SMT 22R resistor is fitted.
You can remove the SMT burden (or keep it in parallel) and fit another value to change the sensitivity and range of the CT channel (i.e. if both 22R were actually fitted it would effectively be an 11R burden). And the "20mA max" limit is the limit imposed to avoid distorting the AC sampled waveform when powering the emonTx via the same 9V AC adapter. You could probably draw considerably more than that without any "power supply issues", but the voltage measurement would not be accurate, and therefore neither would the power values. I believe the C17 cap is also there to help reduce that effect, but I couldn't explain the theory behind it. EDIT - see next post! C17 is providing d.c. isolation between the measurement input (referenced to mid-rail) and the power supply (referenced to GND). Now that you highlight it, I concur. Apologies Cole for telling you stuff about burden resistors you probably already knew. Oh no you can't - the 3.3 V rail will collapse on every mains cycle as the reservoir capacitor empties, most probably resetting the processor. The current is strictly limited; the RFM69 alone is too much at worst-case mains voltage and component tolerances, without any additional current drawn by temperature sensors etc. The series resistor that imposes that limit is there to restrict the dip that the a.c. adapter's internal impedance generates as the reservoir capacitor charges on the rising peak of each cycle. Here, Question about the half wave rectifier | Archived Forum, I posted a screenshot from a simulation showing the 'dent' in the a.c. measured wave that the current spike creates. Yes - to a.c. The value is quite high - much higher than necessary for amplitude accuracy - to minimise the phase shift between the mains and the measured waveform, which is important in real power calculations. Thanks all for your responses, they were extremely helpful! I think I've got a good understanding of all those components now.
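To put numbers on the burden and protection discussion above, here is a minimal sketch. The 2000:1 turns ratio is an assumption (typical of the SCT-013-000 CT commonly used with the emonTx; check your own CT's datasheet), while the 22R burden, 1K series resistor and Vcc+0.5V clamp threshold are the values mentioned in the thread:

```python
# Sketch of the CT burden and ADC protection maths discussed above.
# Assumption: 2000:1 turns ratio (typical of the SCT-013-000 CT).
def parallel(*resistors):
    """Equivalent resistance of resistors wired in parallel."""
    return 1.0 / sum(1.0 / r for r in resistors)

def burden_voltage_peak(i_primary_rms, turns=2000, r_burden=22.0):
    """Peak voltage across the burden for a given primary RMS current."""
    i_secondary_rms = i_primary_rms / turns
    return i_secondary_rms * r_burden * 2 ** 0.5

def clamp_current(v_in, vcc=3.3, r_series=1000.0):
    """Current forced through the AVR's internal clamp diode (which starts
    conducting around Vcc + 0.5 V) for a fault voltage v_in; the 1K series
    resistor is what keeps this below the ~1 mA rating."""
    return max(0.0, (v_in - (vcc + 0.5)) / r_series)

r_eff = parallel(22.0, 22.0)         # two 22R burdens in parallel -> 11R
v_peak = burden_voltage_peak(100.0)  # ~1.56 V peak at 100 A primary
i_fault = clamp_current(4.8)         # 1 V over the clamp threshold -> 1 mA
```

As noted above, fitting both burden positions halves the effective burden (and thus the sensitivity), and the 1K series resistor keeps a modest over-voltage fault within the clamp diode's rating.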
I don't doubt what you are saying in the slightest, but this excerpt from the emonTx v3.4 wiki suggests up to 60mA can be drawn from the AC:AC without "damage" but with "impaired operation", which I would read to mean the voltage sample will be distorted/invalid when drawing between 10mA and 60mA, not that things would start coming apart at the seams. Important note regarding powering with AC: powering via AC is only recommended for standard emonTx operation without auxiliary sensors (apart from up to 4 DS18B20 temperature sensors) or equipment (e.g. relay modules) connected. Correct operation via the AC supply is critically dependent upon using the correct AC-AC adapter. If you are using the recommended AC-AC adapter and the current draw exceeds 10 mA and the mains supply is below the minimum allowable, then the circuit operation will be impaired, adversely affecting the reading accuracy of the emonTx. To avoid damage to the emonTx V3's circuits, the current drawn from the AC circuit should never exceed 60mA - see the technical wiki for more info. If more than 10 mA of current is required, it is recommended to remove jumper 2 (JP2) and power the emonTx via 5V USB. When JP2 is removed, the AC-AC adapter, if connected, will only be used to provide an AC sample. It will not power the emonTx. I rarely power even a basic emonTx from the AC:AC so have no experience to call upon; I just recalled reading this in the wiki. Maybe it needs clarifying further?
package adc

import (
	"testing"

	"github.com/stretchr/testify/require"
)

func Test_User_GetStringAttribute(t *testing.T) {
	u := &User{
		Attributes: map[string]interface{}{
			"one":   "string",
			"two":   2,
			"three": []byte("bytedata"),
		},
	}
	require.NotEmpty(t, u.GetStringAttribute("one"))
	require.Equal(t, "string", u.GetStringAttribute("one"))
	require.Empty(t, u.GetStringAttribute("two"))
	require.Empty(t, u.GetStringAttribute("three"))
	require.Empty(t, u.GetStringAttribute("nonexists"))
}

func Test_GetUserArgs_Validate(t *testing.T) {
	var req GetUserArgs
	err := req.Validate()
	require.Error(t, err)

	req = GetUserArgs{}
	err1 := req.Validate()
	require.Error(t, err1)

	req = GetUserArgs{Id: "fake"}
	errOk := req.Validate()
	require.NoError(t, errOk)
}

func Test_Client_GetUser(t *testing.T) {
	cl := New(&Config{}, withMock())
	err := cl.Connect()
	require.NoError(t, err)

	var badArgs GetUserArgs
	_, badReqErr := cl.GetUser(badArgs)
	require.Error(t, badReqErr)

	args := GetUserArgs{Id: "entryForErr", SkipGroupsSearch: true}
	_, err = cl.GetUser(args)
	require.Error(t, err)

	args = GetUserArgs{Id: "userFake", SkipGroupsSearch: true}
	user, err := cl.GetUser(args)
	require.NoError(t, err)
	require.Nil(t, user)

	// Too many entries error
	user, err = cl.GetUser(GetUserArgs{
		Id:               "notUniq",
		SkipGroupsSearch: true,
		Attributes:       []string{"sAMAccountName"},
	})
	require.Error(t, err)
	require.Nil(t, user)

	dnReq := GetUserArgs{Dn: "OU=user1,DC=company,DC=com", SkipGroupsSearch: true}
	groupByDn, err := cl.GetUser(dnReq)
	require.NoError(t, err)
	require.NotNil(t, groupByDn)
	require.Equal(t, dnReq.Dn, groupByDn.DN)

	args = GetUserArgs{Id: "user1", SkipGroupsSearch: true}
	user, err = cl.GetUser(args)
	require.NoError(t, err)
	require.NotNil(t, user)
	require.Equal(t, args.Id, user.Id)
	require.Nil(t, user.Groups)

	args.Attributes = []string{"something"}
	user, err = cl.GetUser(args)
	require.NoError(t, err)
	require.NotNil(t, user)
	require.Equal(t, args.Id, user.Id)
	require.Nil(t, user.Groups)

	args.SkipGroupsSearch = false
	user, err = cl.GetUser(args)
	require.NoError(t, err)
	require.NotNil(t, user)
	require.Equal(t, args.Id, user.Id)
	require.NotNil(t, user.Groups)
	require.Len(t, user.Groups, 1)
}

func Test_User_IsGroupMember(t *testing.T) {
	u := &User{}
	require.Equal(t, false, u.IsGroupMember("group1"))

	u.Groups = []UserGroup{
		{Id: "group1"},
		{Id: "group2"},
	}
	require.Equal(t, false, u.IsGroupMember("group3"))
	require.Equal(t, true, u.IsGroupMember("group1"))
	require.Equal(t, true, u.IsGroupMember("group2"))
}

func Test_User_GroupsDn(t *testing.T) {
	u := &User{
		Groups: []UserGroup{},
	}
	require.Nil(t, u.GroupsDn())

	newGroup := UserGroup{Id: "someId", DN: "someDn"}
	u.Groups = append(u.Groups, newGroup)
	require.NotNil(t, u.GroupsDn())
	require.Contains(t, u.GroupsDn(), newGroup.DN)
}

func Test_User_GroupsId(t *testing.T) {
	u := &User{
		Groups: []UserGroup{},
	}
	require.Nil(t, u.GroupsId())

	newGroup := UserGroup{Id: "someId", DN: "someDn"}
	u.Groups = append(u.Groups, newGroup)
	require.NotNil(t, u.GroupsId())
	require.Contains(t, u.GroupsId(), newGroup.Id)
}
Classroom as Coffee Shop? I see it all the time on Twitter. “You should <INSERT NAME OF FAMOUS COFFEE SHOP HERE> your classroom." Good idea? Or bad idea? The real answer is that it depends. The focus on redesigning classrooms should be to create the spatial conditions that support the realization of the student experience you want kids to have at school. This can mean many things depending on the school, but jumping to an immediate solution that the classroom space should be a “coffeehouse” is probably not a viable or realistic first step. On the other hand, considering how cafes, coffeehouses and other spaces beyond school can inform the design of classrooms, and schools themselves, is an interesting question to explore. Design Based on Student Experience My provocation to you: Design intellectual spaces for learning based on a community-held set of beliefs associated with what you want kids to experience as learners while at school. Start there. And when you unpack that, I’m guessing the design that you’ll arrive at is not a coffee shop. On the other hand, students work more and more in these types of spaces. As a freelancer that employs many different types of spaces in the work that I do, I see it all the time. Coffee shops are on-demand, highly social, filled with technology (and food and drink), very fluid, loud, and interactive. And kids find a way to make all of that work, and have come to expect that these types of spaces will be available to them. They are part of the equation for students today. So, it does make sense to understand these spaces in the context of student learning and what they might offer with regards to learning space design. What Makes Coffee Shops So Popular? What is interesting about coffee shops is the human dynamic of interaction present there, and how that could potentially inform spatial design in schools. 
So it’s not about the things of a coffee shop, which most people mention when they talk or write about coffee shopping, but understanding the patterns of interactions that occur there. I’m not interested in recreating the “coffee shop as classroom” at all, but I am interested in understanding the conditions that make such spaces compelling for human beings, and how these conditions could inform a school spatial solution. That also includes understanding other types of spaces, such as coworking locations, incubators, start-ups, and makerspaces - spaces beyond school - that present unique conditions that students will eventually encounter and will be required to understand and employ as part of their career or life. While focusing on a coffee shop classroom is trendy and a blog post about it will get retweeted wildly, think beyond trying to create a space not really intended for education as the primary space for education. Real space design focuses intimately on identifying the student experience first and uses those expectations as a framework for pivoting to a spatial response that can support those expectations. If components of spaces beyond school can contribute to and inform that response, good. If not, that’s ok too. Visiting such spaces with a careful eye focused on understanding human interactions in those spaces is the key, not the cafe table, or the other things in the space. Getting ideas is always good, and having new ideas from other professions and locations can be a valuable and healthy way to initiate and direct educational change. David Jakes - A recognized leader in the educational technology field, David Jakes focuses on using the design process to support the organizational growth, development and change required to create relevant and meaningful conditions for student learning in schools. 
David's thought leadership includes addressing the increased need to develop agile, connected, and personalized learning environments that support a contemporary education, and how the use of technology can be reimagined to create boundless opportunities for learning. Before his current position as Chief Design Officer of David Jakes Designs, David spent almost three decades in education as a teacher, technologist, and administrator. David's design experience includes working as a Digital Designer and Strategist for CannonDesign and The Third Teacher+, a leading architecture firm and learning space consultancy. David is a frequent presenter at national and international educational conferences where he speaks about the power and promise of a new expedition for learning, and the roles that all educators have in shaping that journey.
using System;
using System.Collections.Generic;
using System.IO;
using System.Runtime.CompilerServices;

namespace FixedPacker
{
    public interface IPacker<TDefinition>
    {
        void Pack(IReadOnlyList<TDefinition> data, Stream stream);
    }

    public static class PackerExtensions
    {
        public static byte[] Pack<TDefinition>(this IPacker<TDefinition> packer, IReadOnlyList<TDefinition> data)
        {
            using (var m = new MemoryStream())
            {
                packer.Pack(data, m);
                return m.ToArray();
            }
        }
    }

    public struct Packer<TDefinition, TGenerated> : IPacker<TDefinition>
    {
        public delegate void NewFunction(TDefinition d, Func<int, int> getIndex, ref TGenerated g);

        // ReadOnlySpan<byte> is a ref struct and cannot be used as a generic
        // type argument, so the binaries are exposed as ReadOnlyMemory<byte>.
        public delegate IEnumerable<ReadOnlyMemory<byte>> GetBinariesFunction(TDefinition d);

        private readonly NewFunction _new;
        private readonly GetBinariesFunction _getBinaries;

        public Packer(NewFunction @new, GetBinariesFunction getBinaries)
        {
            _new = @new;
            _getBinaries = getBinaries;
        }

        public unsafe void Pack(IReadOnlyList<TDefinition> data, Stream stream)
        {
            var size = Unsafe.SizeOf<TGenerated>();
            byte* buffer = stackalloc byte[size];
            ref var p = ref Unsafe.AsRef<TGenerated>((void*)buffer);

            // leaveOpen: true so disposing the writer does not close the caller's stream.
            using (var w = new BinaryWriter(stream, System.Text.Encoding.UTF8, leaveOpen: true))
            {
                w.Write((long)data.Count);
                var binaryIndex = 0;
                foreach (var d in data)
                {
                    // Fill the stack buffer with the fixed-size record, handing the
                    // callback an index allocator for variable-length binary payloads.
                    _new(d, len =>
                    {
                        var i = binaryIndex;
                        binaryIndex += len + 4;
                        return i;
                    }, ref p);

                    for (int i = 0; i < size; i++)
                    {
                        w.Write(buffer[i]);
                    }
                }

                if (binaryIndex != 0)
                {
                    // Append the variable-length payloads after all fixed records.
                    foreach (var d in data)
                    {
                        foreach (var b in _getBinaries(d))
                        {
                            w.Write(b.Length);
                            w.Write(b.ToArray());
                        }
                    }
                }
            }
        }
    }
}
In the early ages of computing, disc space was scarce; many people did not have a hard drive and had to save their projects on slow floppy discs. This is the reason why the original AMOS saved applications as one single file, packing all the graphics, sounds and code into one single, simple-to-manipulate file. Things have changed, and today everyone has a hard drive. Modern editing tools deal with single files (text, images, sounds, videos etc.) and the original concept of 'all in one file' would not have been practical for AMOS 2. AMOS 2 applications are therefore contained in a folder, and each component of the application is a separate file, organised in a logical directory structure. In a future version, you will be able to directly compile .amos files. The compiler will extract all the elements of the application (images, sounds, source code etc.) and create the resulting directory structure for you. As a result, it will transform your old games that were only editable on the Amiga into real modern projects editable with modern tools on today's machines. The basic structure of an AMOS 2 application is as follows: Root_Directory (name of the application) bankname_banknumber (example menu_5, picpac_10, tracker_7, data_11...). This file contains the compiler and runtime properties of the application. A detailed explanation of this file is given in the next chapter. This file contains the source code of the application, in normal UTF-8 format. Any source code editor can be used to modify it (such as Visual Studio Code, Notepad++, the JetBrains line of products etc.). The compiler understands all carriage return formats and will compile applications saved on Linux, Windows or macOS. This directory contains the 'memory banks' of the application, the content of which is saved as individual files. This directory represents memory bank #1, the 'Sprites' bank. The images of the bank should be PNG or JPG (formats that browsers can recognize).
The number of each element is indicated in the filename itself; for example the file '1.png' will be available within the AMOS 2 program as sprite #1 in the bank, and the following instruction will display it as a bob: bob 1, 100, 100, 1 Also note (not yet completely debugged in the current version) that you can save pictures with real filenames, for example 'mysprite.png', and the bob and sprite commands will allow you to call the images by their original names (example: bob 1, 100, 100, "mysprite.png"). The goal of this enhancement is to make AMOS 2 programs clearer and simpler to write when you have a large number of sprites. This folder contains the elements of memory bank #2 in the original AMOS, the 'Icons' bank. As for the sprites, each icon is a numbered image file. Files in the icon bank can only be PNG, and any other file extension will be ignored by the compiler. This directory contains the elements of bank #3 of the original AMOS, the 'Music' bank. Music is not yet implemented in the compiler. This directory contains the elements of the AMAL bank, bank #4 of the original AMOS application. AMAL should be implemented in the next weeks; more information will be added to this documentation soon. This directory contains the elements of the Sample bank, bank #5 of the original AMOS application. As for sprites and icons, each sound should be a numbered file. The formats supported will be the ones allowed in a browser: WAV, MP3 and OGG. The first five banks were 'reserved' memory banks in AMOS, but you could also define your own banks and save them with your application. AMOS 2 improves this functionality, and allows you to define any kind of bank and include any file in it. The principle is simple: a bank is defined by the name of its folder, 'bankname_banknumber'. For example, a folder named 'data_8' will be listed within the AMOS 2 application as a 'data' bank and will bear the number 8.
The underscore indicates the separation between the name of the bank and its number. The compiler will generate an error if there is more than one underscore in the folder name, or if the characters after the underscore are not parsable into an integer number. It will also generate an error if the number is below 5 or already used in another folder. The CONTENT of the bank is built from the files found within it. When compiling, AMOS 2 scans each directory and lists the files, then sorts them alphabetically. It then includes the binary content of each file in the bank. Each element will be available with new functions (yet to be implemented), = Start Bankitem( bank, element ) and = Length Bankitem( bank, element ), that will be complementary to the Start and Length functions. Note that if the bank only contains one element, the Start and Length functions will be 100% compatible with the original AMOS. Some names of banks are reserved, and can only contain specific data related to their use:
- "picpac": contains images packed with the original AMOS picture compactor
- "menu": contains menus to be displayed with the AMOS menu instructions
- "resource": contains the definitions of dialogs and buttons for the AMOS interface instructions
- "tracker": contains .mod files and other compatible formats
This directory allows you to include Amiga Fonts and/or Google Fonts in your application. Including an Amiga Font AMOS 2 is directly compatible with Amiga Fonts. All you have to do to enjoy them is to copy, from an original Amiga disc, the folder containing the font definitions for the various sizes as well as the .font file associated with the font. Including a Google Font AMOS 2 supports the more modern and free fonts from Google, the famous Google Fonts. Google Fonts (like Amiga Fonts) are available to the programmer through the 'Text' commands of screens.
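The folder-name rules above can be sketched as follows. This is an illustrative helper, not the compiler's actual code; the function name and error messages are invented for the example, and since banks 1 to 5 are the reserved AMOS banks, the sketch rejects any number up to 5:

```python
def parse_bank_folder(folder_name, used_numbers):
    """Validate a 'bankname_banknumber' folder name per the rules above.

    Hypothetical helper, not part of the AMOS 2 compiler itself."""
    parts = folder_name.split("_")
    if len(parts) != 2:
        # more (or fewer) than one underscore is a compile error
        raise ValueError("expected exactly one underscore: %r" % folder_name)
    bank_name, number_text = parts
    if not number_text.isdigit():
        # the characters after the underscore must parse as an integer
        raise ValueError("bank number is not an integer: %r" % folder_name)
    number = int(number_text)
    if number <= 5:
        # banks 1-5 are the reserved AMOS banks
        raise ValueError("bank numbers 1-5 are reserved: %r" % folder_name)
    if number in used_numbers:
        raise ValueError("bank number already used: %r" % folder_name)
    return bank_name, number

parse_bank_folder("data_8", set())   # -> ("data", 8)
```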
To include a Google Font:
- In your favorite text editor, open the file named 'template.googlefont' that you will find in the 'amos-2/runtime' folder.
- Go to the Google Fonts website, choose your font and its characteristics, and click on the + sign at the top right of the font.
- Open the bottom-right panel to display the information about the selected font.
- Select and copy the "link" line and paste it into the template at the INSERT_LINK location.
- Select and copy the "CSS" part and paste it into the template at the INSERT_CSS location.
- Save the template as "name_of_font.googlefont" in your application's 'resources/fonts' directory.
- Compile your application; the new Google Font will now be present within your application and be listed as a 'disc' font.
The filesystem folder allows the AMOS 2 compiler to emulate the Amiga filesystem within your application, and ensures compatibility with applications that loaded elements from the disc. It also represents a very simple way to append data to your application.
- The root filesystem folder contains subfolders; each sub-folder represents a 'drive' in the Amiga sense of the term. Files are forbidden at this level, and will generate compilation errors. The name of each folder defines the name of the drive when the application runs. Examples of names:
- "DH0": your application will have a "DH0:" drive mounted when it runs
- "DF1": your application will have a "DF1:" drive mounted when it runs
- "application": even if it is not defined, an "application:" drive is available to each application, and this drive is the default path of the application. Creating the folder in the "filesystem" folder allows you to populate it with files and folders.
- Files and folders located within each "drive" directory will be reflected in your application as if they were present on the real disc. AMOS 2 instructions work as if they were using the real original filesystem.
You can list the files with the Dir command, change the directory with the PATH$ reserved variable, go to the parent directory with the Parent instruction, etc. Load, Bload and Load Iff are of course also supported. Please refer to the "filesystem" chapter for more information.
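As a rough illustration of the drive-mounting rules above (a hypothetical helper, not the compiler's real implementation): each sub-folder of 'filesystem' becomes a drive, plain files at that level are an error, and an 'application:' drive is always present even when no folder defines it:

```python
import os

def mounted_drives(filesystem_root, application_path="."):
    """Map the sub-folders of the 'filesystem' directory to drive names.

    Sketch only; the real AMOS 2 compiler is not shown in this document."""
    # The 'application:' drive exists even when no folder defines it.
    drives = {"application:": application_path}
    for entry in sorted(os.listdir(filesystem_root)):
        path = os.path.join(filesystem_root, entry)
        if not os.path.isdir(path):
            # files are forbidden at the drive level -> compile error
            raise ValueError("not a drive folder: %r" % entry)
        drives[entry + ":"] = path
    return drives
```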
I highly recommend that you install Visual Studio 2012 Update 3 if you still haven't done so. Start Visual Studio 2012, select TOOLS | Extensions and Updates… and then click Updates | Product Updates. The Extensions and Updates dialog box will display Visual Studio 2012 Update 3 with an Update button on the right-hand side (Figure 5). Click on Update and you will have Visual Studio 2012 with Windows Phone SDK 8.0 and the latest update. I'll assume you have this Update installed. Figure 5: Visual Studio 2012 Update 3 displayed in the Extensions and Updates dialog box. Windows Phone 8 App Types When you developed apps for Windows Phone 7.x, there were two different worlds: Silverlight and XNA. When you target Windows Phone 8, you have the following choices for the UI framework: - Mixed mode XAML and Direct3D You can think of XAML apps as the new version of Silverlight for Windows Phone apps. In fact, you will be able to reuse most of your existing knowledge of Silverlight for Windows Phone in XAML apps that target Windows Phone 8. If you feel more comfortable with HTML5 and CSS, there is a Windows Phone HTML5 app project template that enables you to easily create an app that displays HTML5 pages. The project template is available for both Visual Basic and C# languages, but it includes XAML pages and managed code. Thus, you cannot consider the project a pure HTML5 solution. You can use both C# and Visual Basic to create Windows Phone 8 apps and interact with .NET for Windows Phone, the Windows Phone runtime, and Direct3D in mixed mode with XAML. Windows Phone 8 has discontinued the XNA app model, so you won't be able to create XNA games if you want to target Windows Phone 8. Direct3D has replaced XNA. However, Windows Phone 8 can run existing XNA games and the project templates include XNA for backward compatibility with Windows Phone 7.x. Windows Phone 8 includes support for native code development with C++ (Figure 6).
Native code development is very useful for games that require the best performance and use Direct3D. However, as happens on other mobile platforms, it is also possible to create runtime components and libraries that take advantage of the higher performance of native code but can be consumed in managed code apps. The latest Visual Studio 2012 updates added the Windows Phone Unit Test App to simplify testing. Figure 6: Windows Phone 8 project templates for C++ (native code). Windows Phone 8 uses the Windows Phone Application Store as the main mechanism for distributing apps. The Store was formerly known as the "Windows Phone Application Marketplace." However, the Store isn't an appropriate source for distributing enterprise apps, so Windows Phone 8 includes additional ways of distributing apps to solve this issue: - Internet or Intranet downloads through Internet Explorer on Windows Phone 8. - Email attachments. - Loaded on a microSD card that is then inserted into the phone. Performance Improvements for Apps and Background Tasks Windows Phone 8 made a big change to the process of submitting an app to the Windows Phone Application Store. When you submit, the process precompiles the app for the ARM CPU that powers Windows Phone devices. This way, when the end user downloads the app from the Store, the app package includes ARM code that doesn't require the Just-In-Time (JIT) compiler, thus removing overhead, which translates into faster startup times for the apps and better performance. In addition, the removal of the JIT compiler overhead improves battery life. Background execution is one of the biggest problems for mobile devices because background execution often reduces battery life. The Windows Phone 8 UX motivates apps to provide live tiles, and users usually expect apps to continue providing information even when they aren't interacting with them.
An app running unconstrained tasks in the background can detrimentally affect the performance of the other apps running on the phone and reduce battery life. So Windows Phone 8 provides background services that permit developers to perform common tasks on behalf of apps in an efficient way. The Background Transfer Service enables apps to perform HTTP transfers while keeping the necessary balance to avoid having a negative impact on network traffic for the main foreground app. The Alarms API, for example, allows apps to create reminders without having to write code that runs in the background to keep track of time. This way, apps that require reminders use the services provided by the Alarms API and don't need to reinvent the wheel to create efficient background mechanisms.

The Windows Phone 8 project templates for managed code include the following three scenario-based agents that you can use for common background tasks:

- Windows Phone Audio Playback Agent
- Windows Phone Audio Streaming Agent
- Windows Phone Scheduled Task Agent

If your app requires audio playback, you create an agent based on the Windows Phone Audio Playback Agent and benefit from reusing the existing infrastructure for playing and controlling background audio. This way, Windows Phone 8 makes sure your app doesn't consume unnecessary additional resources simply to play audio. You will use the Windows Phone Scheduled Task Agent when you need to perform either a periodic or a resource-intensive task. For example, when you have an app that needs to regularly fetch the latest news to update the app's tiles, you just need to create and consume a Windows Phone Scheduled Task Agent to perform that task.

As happens with Windows 8 apps, Windows Phone 8 managed code apps with .NET for Windows Phone and the Windows Phone runtime also rely on the new asynchronous programming model to avoid blocking the UI. Thus, asynchronous methods are an essential part of Windows Phone 8 managed code apps.
If you aren't familiar with the async and await keywords that are part of C# 5.0, have a look at "Using Asynchronous Methods in ASP.NET 4.5 and in MVC 4" before attempting Windows Phone 8 managed code app development with XAML and .NET. The article talks about ASP.NET 4.5 and MVC 4, but the explanations of asynchronous methods are also useful for Windows Phone 8 app development. In the next article in this tutorial, I will be using asynchronous methods to interact with the different speech APIs.

Portable Class Libraries and Windows Phone 8

One last point before wrapping up: If you want to share code between Windows Phone 8 apps and other Microsoft platforms, after you have installed the Windows Phone SDK 8.0, you will be able to add Windows Phone 8 as one of the target frameworks for Portable Class Libraries (see Figure 7).

Figure 7: Windows Phone 8 as one of the targets for a Portable Class Library.

If you want more information about the benefits of Portable Class Libraries for sharing code between multiple Microsoft platforms, you can read "Access Data with REST in Windows 8 Apps." Note that you can also consume Portable Class Libraries in Windows Phone 8 apps.

In this article, I've put in place the resources you need to develop apps for Windows Phone 8. I've provided a brief overview of some of the most important things that you must consider before starting your adventures in Windows Phone 8. In the next article, I'll explain how to develop apps that take advantage of the new features related to the speech APIs: Voice Commands, Speech Synthesis, and Speech Recognition.

Gaston Hillar is a frequent contributor to Dr. Dobb's.
I am back at work full-time this summer at ImageX Media, where I have been over the past two summers as well. It's a Mac shop. I don't use Macs. And since I always seem to forget the process of setting up an ideal development environment on a Mac, I have recorded it here.

Install the wonderful MAMP (the free version works just fine) in your Applications folder. If you want to have your local web server on port 80 instead of the default 8888, there is a small extra step. Open up the /Applications/MAMP/conf/apache/httpd.conf configuration file with your favourite editor (yes, I said favourite and not favorite). Look for a line that says Listen 8888 (for me it is on line 219) and change the port to 80. Now you can go ahead and start up the server. If you didn't change the port, feel free to use the Dashboard widget included in the MAMP directory (just double-click on it to install). If not, open up a terminal, change directories to

If you're like me and use the command line a lot, you'll want to create a symlink to the MySQL binary so that you don't have to type in the full path every time you want to use it. Run these commands: sudo ln -s /Applications/MAMP/Library/bin/mysql /usr/local/bin/mysql and sudo ln -s /Applications/MAMP/Library/bin/mysqldump /usr/local/bin/mysqldump.

I found that the easiest way to develop multiple sites is to keep each one in a separate directory, and access each directory with a separate hostname. Add this line to the bottom of your

Create directories in your vhosts location. For example, if I have three Drupal sites (site1, site2, site3) I would

For each directory you create in your Sites directory, make sure the hostname is pointing to localhost. Edit your /etc/hosts file to read: 127.0.0.1 localhost site1 site2 site3.

Drupal uses a lot of memory. Especially with imagecache.
Open up your php.ini file (/Applications/MAMP/conf/php5/php.ini), find the line that reads memory_limit = 8M (for me, line 232), and change the 8M to 96M to ensure you never receive any out-of-memory errors.

One last tweak. Unzip your Drupal site in ~/Sites/site1. For pretty URLs to work you are going to need to make one little change. Open up the .htaccess file, and uncomment the line that reads RewriteBase / (line 103) if Drupal is found in your site's root directory, or change the path appropriately.

And you're done! Now you can access your sites by going to http://site1/, which points to localhost, gets accepted by Apache, which looks up /Users/yourname/Sites/site1, finds that it exists, and runs the index.php file. Capiche? Virtual hosts are powerful indeed.

Final details:

- By default, the MySQL username and password are both root.
- To add a new site, create the directory, and add a corresponding entry in the

Now it's back to development for me.
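One common way to wire up the per-directory virtual hosts described above is Apache's mod_vhost_alias module. This is only a sketch: the username in the path is a placeholder, and it assumes mod_vhost_alias is enabled in MAMP's httpd.conf:

```apache
# Map any hostname in /etc/hosts (site1, site2, site3, ...) to a matching
# directory under ~/Sites: http://site1/ serves /Users/yourname/Sites/site1.
NameVirtualHost *:80

<VirtualHost *:80>
    ServerAlias *
    # %0 expands to the full hostname of the request
    VirtualDocumentRoot /Users/yourname/Sites/%0
</VirtualHost>
```

With this in place, adding a new site really is just "create the directory, add the hostname to /etc/hosts" with no further Apache edits.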
Pointers don't work (render or interact) in the latest build.

Source of VRTK (Unity Asset Store or GitHub): GitHub, latest version as of yesterday.
Version of Unity3D (e.g. Unity 5.4.4f1): Unity 2018.1.0b4 (64-bit)
Hardware used (e.g. Vive/Oculus): Oculus Rift CV1
SDK used (e.g. OpenVR/SteamVR/Oculus Utilities): Oculus Utilities were installed

Steps to reproduce: Load the 003_Controller_SimplePointer scene or build a pointer sample yourself.

Expected behaviour: I expect the pointers to interact with the scene and render a line from the controller.

Current behaviour: Nothing is happening; when moving the thumbstick or any controls, nothing renders or happens. I also followed a tutorial from YouTube bit by bit, and where they got a line rendering on their pointer, I got nothing.

Unity 2018 is a beta, we don't support betas. Unity 2017 is only supported by VRTK 3.3.0, WIP branch is here. I suggest to join if you need more help. GitHub issues aren't good for troubleshooting.

I'll join the Slack and see if I can get some help there. I just tried it all again with Unity 2017.3 and the same issue still happens. Using the Unity Asset Store version of VRTK.

Let me repeat myself here :) Unity 2017 is only supported by VRTK 3.3.0, WIP branch is here.

Same problem here. Unity 2018.1.1f1 (non-beta) is not working with VRTK and Oculus (Oculus plugin 1.14). The pointer is sitting static somewhere. The hands (avatar) and controller move OK... but no pointer is attached.

Hi, did you find any solution? I still have the same problem.. but why?

Hi @ahmadsay I made it work, yes. Here is a test scene with just the SDKManager setup for Vive and Oculus.
ViveAndOculusDemo.zip

Unity version is 2018.1.1f1 and I forgot which Oculus version I managed to get it to work with... but here is my Assets/Oculus folder zipped (it could be the latest but I'm not sure): Oculus.zip

Good luck

Thanks! I will test. For me, this problem appears very often: either my lasers aren't visible in the scene at all, or the pointer is frozen somewhere.
Microsoft announced today that the second preview version of its Visual Studio 17.10 software developer tools is now available to download. It includes some additional GitHub Copilot features.

Microsoft's GitHub has released GitHub Copilot Chat for JetBrains IDEs such as IntelliJ IDEA. It's available for everyone to use now following a private beta but requires a subscription.

GitHub has launched a new Learning Pathway for GitHub Copilot. It's for business leaders who'll choose whether to deploy this tool for their developers. GitHub is also working on an upcoming pathway.

Microsoft-owned GitHub has made GitHub Copilot Enterprise generally available. It costs $39 a month per user and will provide businesses and organizations AI code creation and assistance.

Meta releases its new AI-assisted code-writing tool, Code Llama 70B. It is trained on 500B code tokens to generate longer sequences. It understands code structures with a special technique.

Microsoft will end support for Visual Studio 2013 on April 9, meaning that it will no longer receive security updates or fixes. Developers using older versions will see reduced support over the

GitHub Copilot Chat, the generative AI-based chatbot for software developers, is now generally available for both individual developers as well as organizations after a long beta preview period.

Visual Studio Preview users with GitHub Copilot can now get smart suggestions for variable, method, and class names. This should help users make their code easier to understand for others.
Google has announced the general availability of Duet AI and Gemini Pro, its AI-powered code completion tools for developers. The tools now leverage partner datasets to help developers.

Microsoft-owned GitHub announced that its GitHub Copilot Chat will become generally available in December 2023, and will offer GitHub Copilot Enterprise in February 2024 for $39 a person per month.

Free to download report: find out how two major AI-powered code generation tools - GitHub Copilot and Amazon CodeWhisperer - compare, along with the benefits and drawbacks for developers.

A new report on the growing number of AI tools claims Microsoft is losing lots of money per user for its GitHub Copilot service. Some users are reportedly costing the company as much as $80 per month.

GitHub Copilot Chat was launched earlier this year for business customers, but now the beta version of the generative AI code assistant is available to everyone across Visual Studio and VS Code.

Microsoft says that if someone who uses its Copilot generative AI products gets sued by someone who claims it violated its copyrighted content, Microsoft will help defend that person legally.

GitHub's Copilot Chat AI is now available in public beta. It gives developers a ChatGPT-like experience when coding. The tool will initially be available to businesses and organisations.

GitHub has said that generative AI tools for developers will lead to a huge $1.5 trillion boost to global GDP by the year 2030. It said people are still learning what it can do to help them.

Microsoft has updated Visual Studio Code. It comes with several updates of interest to developers including GitHub Copilot Chat improvements and read-only mode for specific files and folders.

Apple has told its employees to stop using generative AI tools like ChatGPT and GitHub Copilot because it's worried about confidential data leaking out to other companies that could abuse it.
GitHub Copilot X will add a chat interface to the editor that suggests code and examines it, proposes edits to fix broken code, and more. Sign-ups are available for its technical preview. Microsoft has announced GitHub Copilot for Business with the promise of transforming developers' productivity and helping them code faster. It has also announced "greater benefits" for organizations. This week's edition of Microsoft Weekly is jam-packed with news about Windows 11, including items related to its performance and bugs, growth, as well as some new features that were recently added. A programmer has filed a lawsuit against Microsoft, GitHub, and OpenAI for their GitHub Copilot tool, alleging that it basically allows Microsoft to profit off of the work of others, without consent. Microsoft has announced the general availability of GitHub Copilot, its AI-powered development assistant. If you're looking for an AI to help you write code, you can give it a go at $10/month. Microsoft has announced Dev Box, a Windows 365 solution that enables developers to access powerful workstations in the cloud through any device with access to a web browser, including Android and iOS. Microsoft's GitHub has collaborated with OpenAI to develop an extension for Visual Studio Code that will write code for developers as long as they use meaningful comments and function names.
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.IntPoint;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.*;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

import java.io.IOException;
import java.nio.file.Paths;

public class Searcher {

    public static void main(String[] args) {
        try {
            // Load the previously generated index (DONE)
            IndexReader reader = getIndexReader();
            assert reader != null;

            // Construct index searcher (DONE)
            IndexSearcher indexSearcher = new IndexSearcher(reader);

            // Standard analyzer - might be helpful
            Analyzer analyzer = new StandardAnalyzer();

            // TODO your task is to construct several queries and seek for relevant documents

            // TERM QUERY
            // A Query that matches documents containing a term.
            // This may be combined with other terms with a BooleanQuery.

            // TODO seek for documents that contain the word "mammal".
            // As you may notice, this word is not normalized, but it should be
            // normalized in the same way as all documents were normalized when
            // constructing the index. For that reason you can use the analyzer
            // object (utf8ToString()!). Then, build a Term object (seek in the
            // content field - Constants.content) and a TermQuery.
            // Lastly, invoke printResultsForQuery.
            String queryMammal = "MaMMal";
            QueryParser contentQueryParser = new QueryParser(Constants.content, analyzer);
            // Lower-casing stands in here for running the raw word through the same
            // analysis chain that was used when building the index.
            TermQuery tq1 = new TermQuery(new Term(Constants.content, queryMammal.toLowerCase()));
            {
                // --------------------------------------
                System.out.println("1) term query: mammal (CONTENT)");
                printResultsForQuery(indexSearcher, tq1);
                // --------------------------------------
            }

            // Repeat the previous step for the word "bird" and compare the
            // results for "mammal" and "bird".
            String queryBird = "bird";
            TermQuery tq2 = new TermQuery(new Term(Constants.content, queryBird));
            {
                // --------------------------------------
                System.out.println("2) term query bird (CONTENT)");
                printResultsForQuery(indexSearcher, tq2);
                // --------------------------------------
            }

            // Seek documents that contain "mammal" or "bird": two SHOULD clauses
            // combined with setMinimumNumberShouldMatch(1) give OR semantics.
            {
                // --------------------------------------
                System.out.println("3) boolean query (CONTENT): mammal or bird");
                BooleanQuery bq = new BooleanQuery.Builder()
                        .add(new BooleanClause(tq1, BooleanClause.Occur.SHOULD))
                        .add(new BooleanClause(tq2, BooleanClause.Occur.SHOULD))
                        .setMinimumNumberShouldMatch(1)
                        .build();
                printResultsForQuery(indexSearcher, bq);
                // --------------------------------------
            }

            // Find all documents whose size is smaller than 1000 bytes with a
            // range query. This assumes the file size was indexed as an IntPoint
            // under the Constants.filesize field.
            {
                // --------------------------------------
                System.out.println("4) range query: file size in [0b, 1000b]");
                Query rangeQuery = IntPoint.newRangeQuery(Constants.filesize, 0, 1000);
                printResultsForQuery(indexSearcher, rangeQuery);
                // --------------------------------------
            }

            // Find all documents whose name starts with "ant" using a PrefixQuery.
            {
                // --------------------------------------
                System.out.println("5) Prefix query (FILENAME): ant");
                printResultsForQuery(indexSearcher,
                        new PrefixQuery(new Term(Constants.filename, "ant")));
                // --------------------------------------
            }

            // Build a wildcard query: construct a WildcardQuery object.
            // Look for documents which contain the term "eat?" - "?" stands for
            // any single letter (* for a sequence of letters).
            {
                // --------------------------------------
                System.out.println("6) Wildcard query (CONTENT): eat?");
                printResultsForQuery(indexSearcher,
                        new WildcardQuery(new Term(Constants.content, "eat?")));
                // --------------------------------------
            }

            // Build a fuzzy query for the word "mamml" (FuzzyQuery): find all
            // documents that contain words similar to "mamml".
            // Which documents have been found?
            {
                // --------------------------------------
                System.out.println("7) Fuzzy query (CONTENT): mamml?");
                printResultsForQuery(indexSearcher,
                        new FuzzyQuery(new Term(Constants.content, "mamml")));
                // --------------------------------------
            }

            // Now, use QueryParser to parse human-entered query strings and
            // generate Query objects:
            // - use AND, OR, NOT, (, ), + (must), and - (must not) to construct boolean queries
            // - use * and ? to construct wildcard queries
            // - use ~ to construct fuzzy (one word, similarity) or proximity (at least two words) queries
            // - use \ as an escape character for: + - && || ! ( ) { } [ ] ^ " ~ * ? : \
            // Consider the following 5 example queries:
            String queryP1 = "MaMMal AND bat";
            String queryP2 = "ante*";
            String queryP3 = "brd~ ";
            String queryP4 = "(\"nocturnal life\"~10) OR bat";
            String queryP5 = "(\"nocturnal life\"~10) OR (\"are nocturnal\"~10)";

            // Select some query:
            String selectedQuery = queryP1;

            // Build the query parser object, parse the selected query into a
            // Query object, and find relevant documents. Analyze the outcomes.
            {
                // --------------------------------------
                System.out.println("8) query parser = " + selectedQuery);
                printResultsForQuery(indexSearcher, contentQueryParser.parse(selectedQuery));
                // --------------------------------------
            }

            reader.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private static void printResultsForQuery(IndexSearcher indexSearcher, Query q) {
        // For each relevant document (up to Constants.top_docs), print on one
        // line: filename, score, doc id, and file size.
        try {
            TopDocs topDocs = indexSearcher.search(q, Constants.top_docs);
            for (ScoreDoc score : topDocs.scoreDocs) {
                Document document = indexSearcher.doc(score.doc);
                System.out.println(document.get(Constants.filename) + " [" + score.score + "]"
                        + " [ID: " + score.doc + "] [SIZE: " + document.get(Constants.filesize) + "]");
            }
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }

    private static IndexReader getIndexReader() {
        try {
            Directory dir = FSDirectory.open(Paths.get(Constants.index_dir));
            return DirectoryReader.open(dir);
        } catch (IOException e) {
            e.printStackTrace();
        }
        return null;
    }
}
Exchangelib: ItemAttachment appears to strip the file extension from the attachment name

While trying to handle a couple of attachments of types ItemAttachment and FileAttachment from an inbound email, I noticed that the ItemAttachment (representing an email attachment "HELLO WORLD.eml") strips the extension .eml from the name, so I lose that info downstream in my flow. The other attachments, of type FileAttachment, are all fine and keep their extensions. Not sure whether I am missing something or whether this is a defect in the way the ItemAttachment is initialized. Thoughts?

Note 1: These attachments are right off the bat, like: attachments = message_item.attachments
Note 2: exchangelib==3.2.0

** ATTACHMENT 1
NAME: HELLO WORLD, <== Supposed to have .eml extension
TYPE: <class 'exchangelib.attachments.ItemAttachment'>
content_type='message/rfc822', <EMAIL_ADDRESS> size=31367, last_modified_time=EWSDateTime(2020, 7, 20, 22, 25, 2, tzinfo=<UTC>), is_inline=False

** ATTACHMENT 2
NAME: Daily Sync-up call.ics
TYPE: <class 'exchangelib.attachments.FileAttachment'>:
content_type='text/calendar', <EMAIL_ADDRESS> size=76875, last_modified_time=EWSDateTime(2020, 7, 20, 22, 25, 2, tzinfo=<UTC>), is_inline=False, is_contact_photo=False)
(some content redacted)

Item attachments in EWS are different in that they are not actually files, but references to other items in the Exchange database. So the .ics extension you probably see in e.g. Outlook is a .eml file that Outlook creates from the referenced item and offers for download. But EWS does not know about it. In exchangelib, ItemAttachment.item is an ordinary Item, and you can use it as such. If you need the attachment as a file, you can create a .eml file from the information contained in the item attachment, but you'll have to do that yourself or use a library to help you out.

Yes, the .ics file becomes a FileAttachment and is easy to download.
The .eml file coming as the ItemAttachment is also not a problem. But the latter simply loses the extension which came with the file originally - it kind of alters the state of the incoming file (i.e., its .eml extension) for no reason. Like you suggested, I decided to handle that myself by getting the content and renaming the downloaded file to include the .eml extension. I will try posting that approach as an additional answer as well in case anyone faces the same.

Taking into account the accepted answer for my question, to counter the loss of the .eml extension I was facing with ItemAttachment, I have adopted an explicit renaming scheme as follows:

import re
from django.core.files.base import ContentFile  # ContentFile comes from Django in my environment

if isinstance(a, ItemAttachment):
    attach_name = a.name
    regex_pat = re.compile(r'.*\.eml$')  # regex for explicit .eml extension
    if not regex_pat.match(a.name) and a.content_type == "message/rfc822":
        attach_name += ".eml"
    attachment_file = ContentFile(a.item.mime_content, name=attach_name)

An obvious gotcha is my assumption that a "message/rfc822" type file has .eml as the extension and not others. But this works for my purposes in my environment as a workaround to reinstate the missing .eml extension. Leaving this approach here for compare/contrast in case anyone comes across this issue.
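The renaming logic above can be factored into a small helper. This is a sketch: the helper name and the assumption that message/rfc822 items should map to .eml are mine, not exchangelib's:

```python
def ensure_eml_extension(name, content_type):
    """Append '.eml' to an ItemAttachment name when the payload is an
    RFC 822 message and the extension was stripped by EWS.

    Assumes message/rfc822 is the only content type that should map to
    .eml (hypothetical simplification); other types return unchanged.
    """
    if content_type == "message/rfc822" and not name.lower().endswith(".eml"):
        return name + ".eml"
    return name
```

With this in place, the workaround becomes `attach_name = ensure_eml_extension(a.name, a.content_type)` before building the ContentFile.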
tliu at ict.ac.cn Wed Nov 24 19:25:03 EST 1999

I have a bootrom of vxWorks and I want to use it to download any files that I need through FTP or TFTP. When I try to do it, I found:

- If I use it to download the vxWorks image through FTP, that's all right.
- If I use it to download the vxWorks image through TFTP, there is an error.
- If I use it to download a test image (ELF file), there is an error too.

I think there must be some requirements on the download files, right? Maybe the format of the ELF is not correct? I also want to make a bootrom for Linux; can you tell me how to do it? I am a newbie, so if you can, please tell me something in detail.

Jim Chapman wrote:
> Re: using a vxworks bootrom to load zImage
> I am using a standard vxworks bootrom to load a zImage, but I had to
> make a few modifications to the zImage startup code to make it work. For
> us, it is useful to share the same target hardware between vxworks and
> linux developers, without having to reblow the flash bootrom each time
> we switch. And by building BOOTP into the vxworks bootrom, we simply
> change the BOOTP server entry to have the target boot vxWorks or zImage
> without changing the bootrom. However, once we're rid of vxworks
> altogether, then a linux-centric bootrom would be a much better
> It turns out that the vxworks bootrom ELF support doesn't handle named
> ELF sections (it's yet another undocumented feature of Wind River code
> -- it silently ignores sections that aren't ".text" or ".data"...), and
> since the compressed vmlinux image is objcopy'd into a special "image"
> section by arch/ppc/mbxboot/Makefile, I had to find a way to put the
> image section inside the text segment so that the image data would be
> copied by the vxworks bootrom. There may be a clever way to do that
> using ld scripts, but I ended up converting the image data to assembly,
> and used a couple of public symbols at the start/end of the data so that
> (a modified) decompress_kernel() could find the image.
The > binary-to-assembly convertor is a simple perl script which does almost > the same thing as vxWorks' binToAsm tool. > The initrd stuff would need similar treatment, but since I don't use > initrd, I haven't implemented it. > If you want more details (and the binToAsm perl script) let me know. ** Sent via the linuxppc-dev mail list. See http://lists.linuxppc.org/ More information about the Linuxppc-dev
IBM Watson Assistant - exclude a specific entity value so as not to match it ever This could be a simple one that I haven't been able to find but I'm trying to exclude a single value ("girlfriend") from being picked up as an entity in a chatbot I'm building. The entity list is currently "dog, cat, pet, mum, horse" with relevant synonyms for each of those entities as well. Watson keeps picking up "girlfriend" and matching it as an entity despite it not being in there which is stuffing up the logic in the conversation. Is there a way to stop Watson identifying similar words in an entity list beyond what is in the list? I have tried turning off fuzzy matching but that just misses spelling mistakes. Please note this is not an intent training issue, it is specifically asking about entity identification. Any help appreciated. -T- Do you have fuzzy matching turned on for your entity? Try to turn it off. Your question is not entirely clear, but likely you want to take a look at how to improve a skill. Because Watson Assistant is built on AI technology, a key part is about learning. You can "teach" Watson Assistant by going back to conversations and correct wrong matches with the right ones. Watson Assistant is going to pick this up and then retrain the dialog. This should result in excluding "girlfriend". I've developed 100s of NLP solutions across a career of 30 years but this is my first time on Watson so I figure I'm missing a simple (not as simple as you're suggesting - retraining) thing. It's not the intent recognition it's getting wrong, it's the addition of a word not in an entity list that it is tagging as that entity. The word isn't in the entiry list, I don't want it to be in the entity list, how do I exclude it from the entity list? Retraining it is not possible as it is an entity, not an intent. Could you add details to your question? 
Or join this Slack community for discussion with Watson developers: http://wdc-slack-inviter.mybluemix.net/

I had a similar problem. My bot kept picking its own name as a username, and I wanted it to ignore its own name even if the user typed it (e.g. "Hello Robot, I am Jill" - I wanted it to respond to 'Jill' and not 'Robot', but it kept missing it). I later realized the context variables I created had similar values to user names. So what I did was create a variable @bot-name and gave it only one value (Robot): no synonyms, no fuzzy matching, no annotations. Then I tried it again and the bot recognized its own name, ignored that, and picked the second name correctly as the user name. So when I repeated the sentence "Hello Robot, I am Jill" it recognized @entity:bot-name and @entity:user-name and then responded only to the username. You can try something similar.

Which platform were you using? It's not clear how you created your entity list. If it was via contextual entities, then Watson may be taking "girlfriend" as being in the same "family" as the other entities and adding it to the entity list. If the entity list was hard-coded, along with the synonyms, then I would guess that one of your synonyms shares some of the spelling of girlfriend, girl, or friend, which via fuzzy logic would match an entity, but with a lower confidence level. To fix this you could create a second entity list and have a condition that looks to match entity list one, but not entity list two (girlfriend). Or you could set your condition on the entity list and an entity confidence level > 0.8 - but you may then miss some spelling mistakes. (Select a confidence level that's just above the one reported for girlfriend.)

I can't say if this is a solution; however, I'd prefer to call it a workaround, as it worked for me in my case.

Non-Contextual Case: Create a new entity and add girlfriend as a value. Thus it would never interfere with your current entity in a dialog flow.
Contextual Case: Train an intent with examples which include girlfriend and annotate it with the new entity.
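The confidence-threshold workaround suggested above can also be applied in client code after calling the message API. A sketch in Python follows; the entity dict shape mirrors the entries Watson Assistant returns in its `entities` array, and the blocklist and 0.8 cut-off are illustrative assumptions, not part of the product:

```python
def filter_entities(entities, blocked_values, min_confidence=0.8):
    """Drop entity matches that are either explicitly blocked (e.g. a
    decoy entity value holding 'girlfriend') or below a confidence cut-off.

    Each entity is assumed to be a dict like:
    {"entity": "pet", "value": "dog", "confidence": 0.95}
    """
    return [
        e for e in entities
        if e["value"] not in blocked_values
        and e.get("confidence", 1.0) >= min_confidence
    ]
```

This keeps the dialog logic from firing on low-confidence fuzzy matches while leaving genuine entity hits untouched.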
cannot get data from response when using HttpClient.GetJsonAsync in AspNetCore MVC

When using HttpClient.GetJsonAsync, docList can't get the JSON data as a List<Doc>. I set a breakpoint in LogsController.cs; it's obvious that the ActionResult already has a Value, but docList is null. The following code snippet is part of the web application. How can I get the data and use it? Thank you for your support...

QueryLogs.cshtml

<TextEdit Placeholder="PlayerGuid" bind-text="@account" />
<SimpleButton Clicked="@(async () => await GetLogs(account))" Class="form-control" Color="Color.Primary"> Query </SimpleButton>
<table>
@if (docList == null)
{
    <p>there are no doc to display</p>
}
else
{
    @foreach (var doc in docList)
    {
        <tbody>
            <tr>
                <EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
            </tr>
        </tbody>
    }
}
</table>

@functions{
    List<Doc> docList = new List<Doc>();
    string account;
    string document;

    public async Task GetLogs(string acc)
    {
        docList = await Http.GetJsonAsync<List<Doc>>("api/value/" + acc);
    }
}

LogsController.cs

private readonly ElasticClient _client;

[HttpGet("{param}")]
[EnableCors("CorsPolicy")]
public async Task<ActionResult<List<Doc>>> Search(string param)
{
    return _client.Search<Doc>(s => s
        .From(0)
        .Size(10)
        .Query(q => q.Match(m => m.Field(f => f.account).Query(param)))).Documents.ToList<Doc>();
}

Thanks for contacting us, @qwertylb. It's not clear from your example what you're experiencing. Are you getting an error? What is it?

There is a project that uses ASP.NET MVC and Elasticsearch to query logs and present them.
In QueryLogs.cshtml, when i input account and click Query button, it will execute GetLogs method and it can get return value of account from LogsController's method Search. But the return value can not pass to the QueryLogs.cshtml by Http.GetJsonAsync<List<Doc>>("api/value/" + acc). Thanks for contacting us, @qwertylb. It's not clear from your example what you're experiencing. Are you getting an error? What is it? There is a project that use ASP.NET MVC and elasticsearch to query logs and represent it. In QueryLogs.cshtml, when i input account and click Query button, it will execute GetLogs method and it can get return value of account from LogsController's method Search. But the return value can not pass to the QueryLogs.cshtml by Http.GetJsonAsync<List>("api/value/" + acc). docList is null. @qwertylb, the result of the query in the controller action you're referring to is not null, because it's represented by an asynchornousely executing task. So that's the Task object you see. Most probably the task's result ends up being null when evaluated, hence you get null when you call it.
GITHUB_ARCHIVE
It is possible to silently install Redstor Online Backup. Please note that this function is supported from version 7 onwards, as there were known issues with it before version 7. Whilst Redstor Online Backup can be installed silently on a single device using the command line, this is not necessarily the most efficient way of doing so. We recommend the use of a deployment tool like CentraStage to support the mass deployment of Redstor Online Backup silently in a few simple and repeatable steps. For more information, please review the information below, which describes the silent installation process in more detail. There is also a video showing a practical demonstration of how deployment can be achieved using CentraStage.

Part One: Command Line Switches For Use With the Redstor Online Backup .MSI File

The Redstor Online Backup MSI installer enables you to remotely deploy Server Edition and Desktop & Laptop Edition Backup Clients using your preferred desktop management solution, e.g. CentraStage. Use the Deployment Wizard to include the AccountServer and Group settings in the MSI. With these settings populated, you can specify that the Backup Account must be created automatically during the install process. Use the following command to run the installer (ensure that you have administrator privileges):

If you do not wish to use the preconfigured AccountServer and Backup Group details, you can use additional parameters to override the default settings:
- SERVERIP – IP address of the AccountServer
- ACCOUNTNAME – Backup Account name
- ACCOUNTPASSWORD – Backup Account password. Note: If you do not specify an Account password, a random one will be used.
- CREATEKEY – Backup Group Create Key
- ACCOUNTKEY – Backup Account encryption key. Note: If you do not specify an encryption key, a random one will be used.
- GROUP – Backup Group name

Tips for using Environment Variables:
- You can use Environment Variables to supply the parameters listed above.
- Type “set” in the command prompt to view a list of variables you can use.
- You can use these properties as fixed values or use templates based on the variables (e.g. ./BackupClientFileName.msi PREPACCOUNT=Yes ACCOUNTPASSWORD=%USERDOMAIN%).
- You can also use a combination of variables (e.g. ./BackupClientFileName.msi PREPACCOUNT=Yes ACCOUNTPASSWORD=%USERDOMAIN%\%USERNAME%).

The standard MSI parameters are also available. A few examples are:
- /help – Help information
- /quiet – Quiet mode, no user interaction
- /passive – Unattended mode, progress bar only

Example: To deploy the Server Edition Backup Client, run the following with administrator privileges:

...\A5BPSE5.0.msi PREPACCOUNT=YES SERVERIP=SERVERNAME GROUP=COLLECTION01\GROUP01 CREATEKEY=KEY021 /passive /quiet

- Redstor Online Backup will use the Windows Computer Name as the Backup Account name.
- The password and encryption keys are randomly generated during the Backup Account creation process.
- You are advised to install a Group Certificate for the specific Groups, as the encryption keys are random. Without this certificate you will not be able to connect to a Backup Account to restore any data, should the computer crash. The password can be changed in the Storage Platform Console.

For more information, please refer to the attached user manual regarding creating deployment files and use of the "Account Prep" function.

Part Two: Silent Deployment Using CentraStage
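Putting the switches above together, a full silent install can be assembled like this. Everything below is a placeholder sketch: the MSI file name, server address and group are made up, and msiexec (the standard Windows Installer front-end, which accepts /quiet and PROPERTY=value pairs) is used here rather than invoking the .msi directly as in the example above.

```shell
# All values below are placeholders -- substitute your own MSI file,
# AccountServer address and Backup Group details.
MSI="BackupClientFileName.msi"
SERVERIP="192.0.2.10"
GROUP="COLLECTION01\\GROUP01"

# Assemble the silent-install command from the documented switches.
CMD="msiexec /i ${MSI} PREPACCOUNT=Yes SERVERIP=${SERVERIP} GROUP=${GROUP} /quiet"
echo "$CMD"
```

Because the password and encryption key are omitted, random ones will be used, so remember the Group Certificate advice above.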
OPCFW_CODE
/**
 * verify.js
 * Written by: Connor Taylor
 */
import {
  validateTemplate,
  getTemplatePackage,
} from 'cyto-core';

/* `verify` validates the contents of a template so the user can inspect it */
export default function verify(program) {
  program
    .command('verify <templateId>')
    .description('Verify that a cyto template is valid')
    .action((templateId) => {
      const templatePackage = getTemplatePackage(templateId);
      const config = require(templatePackage);

      try {
        validateTemplate(config, templateId);
      } catch (e) {
        // Report the failure and bail out so we don't also claim the template is valid
        console.log(e.stack);
        return;
      }
      console.log(`Template ${templateId} is valid :)`);
    });
}
STACK_EDU
VT Federated Identity Providers Assessment Task 1: Switzerland

|EGI Activity groups||Special Interest groups||Policy groups||Virtual teams||Distributed Competence Centres|
|EGI Virtual teams:||Main •||Active Projects •||Closed Projects •||Guidelines|

- Are personal e-Science certificates available through the Terena Certificate Service in your country?
No, the national AAI (SWITCHaai) is currently not connected to the TCS service.
- If yes, contact the NREN/institute/company that provides TCS in your country and check that the information about the available certificate types is up to date on the Terena webpage. If the information in the list is incorrect, what needs to be fixed?
- If no, are there any plans to introduce the service (including timelines, obstacles identified, etc.)?
There are no concrete plans at the moment.
- In order to obtain a personal e-Science certificate from TCS, a user has to be affiliated with an institute that is part of the national identity federation and that has established an appropriate Subscriber Agreement. Please collect information about the institutes from which your NGI expects users (e.g. universities, research institutes) and indicate whether:
- those institutes are members of your country's identity federation;
All of the Swiss institutions using EGI today participate in the Swiss identity federation (SWITCHaai). Private research institutions, which could become interested in EGI in the future, cannot currently join SWITCHaai. A process is underway with the goal of opening up SWITCHaai to participants outside of publicly funded higher education and research.
- those institutions have signed the Subscriber Agreement with the NREN, i.e. whether they allow TCS personal e-Science certificates to be issued to their members.
Not applicable yet. If TCS were to be made available in Switzerland, institutions would have to opt in twice: first, to the inter-federation (e.g. EduGain) that would allow their SWITCHaai identities to be used abroad; second, to the TCS personal e-Science certificate service. Also, an institution's IDP would have to be configured to add "entitlements" to those users who should be entitled to use the TCS personal e-Science certificate service.
- What is the process to get a personal e-Science certificate from TCS in your country?
None yet. Our users can get both long-lived and short-lived Grid certificates from us, using SWITCH's PKI and SLCS services.
- What are the rules for an institution in your country to join the identity federation and TCS?
- Is there any special fee that an institution pays for joining TCS and/or the identity federation?
Only partly applicable today. SWITCHaai participation is covered by the general SWITCH membership fee. In all likelihood, participation in TCS, if offered, would be charged for separately. The consequences for organizations are unclear, because the typical prospective TCS user already pays SWITCH for long-lived certificates today.
- Does your NGI or NREN provide any service similar to the TCS? Please choose zero or more from the following and provide a brief description:
Couldn't find "following", but see above; we offer both long-lived and short-lived personal certificates for use with Grids.
- Any comments you have on TCS utilization in your NGI
We have discussed this internally. TCS seems like an attractive long-term option. But we have working systems today, and unless and until our customers demand TCS support, we cannot easily justify the effort to set up TCS. This may be reevaluated in the future.
OPCFW_CODE
Many companies are planning to migrate to Internet Explorer 11, which promises better performance and improvements in stability.

- 188.8.131.52 - 03 Apr 2019 - Initial release

This pack requires some categories contained in the Shared Categories content pack; please make sure it is installed in your environment before installing this pack.

Migrate successfully to Internet Explorer 11 using end-user analytics:
- Planning: assess compatibility with IE 11 against OS requirements and your web applications
- Roll out: track migration status and user adoption of IE 11, as well as other browsers (if any)
- User experience: establish a benchmark to compare before and after, and detect issues with the new browser at any time

How to use it

A) Import the content
- Import the category pack in Finder
- Import the investigation pack in Finder
- Import the content pack (module and widgets) in Portal

You can discover the browsers that are currently in use in your organization. The following categories are predefined and can be optionally updated:
- "NXT - Web browsers" (used to tag all the applications that correspond to a browser)
- "NXT - Internet Explorer versions" (used to tag all IE versions)

Assess your environment against IE 11 requirements. The following category is predefined but can optionally be updated:
- "NXT - IE 11 support" (used to tag all OSs supporting IE 11)

Supported Windows OS: Windows 8.1 / Windows 7 / Windows Server 2008 R2 / Windows Server 2012 R2
Unsupported Windows OS: Windows 8 / Windows Server 2012 / Windows Server 2008 / older versions of Windows

You should identify your critical web applications and decide which ones have to be fully supported.
- The category "NXT - Web applications" is used to classify the web applications.

Case 1: fully supported
Create a keyword (e.g. "SAP", "Office 365", "SharePoint") that will identify the web application and tag the corresponding domains with these keywords. The generic keyword "others - supported" can also be used.

Case 2: not supported
If the application does not need to be monitored, tag the corresponding domain(s) with the keyword "others - NOT supported". For the rest of the analysis, these domains/web applications will generally be excluded. Additionally, only devices supporting IE 11 are taken into account.

You can then visualize the usage of your web applications with all defined web browsers. The previously discovered and supported web applications should be individually tested for IE 11 compatibility.
- The category "NXT - IE compatibility" should be used to tag the domain(s) with "yes" if compatible, or "no" otherwise.

D) Roll out
Follow the migration of devices to IE 11. Here you can check that the migration does not have any unforeseen side effects, such as users adopting other non-compliant browsers.

E) User experience
Here you can benchmark browser crashes and freezes. Verify that IE 11's resource consumption is acceptable by your standards, to ensure that the user experience is preserved. The data can be compared by version and by browser. Monitor the user experience to proactively discover any critical issues and preserve the quality of service. You may want to configure and fine-tune the thresholds on the issue widgets.
OPCFW_CODE
So, today Microsoft released their long awaited "World Wide Telescope". I've been eagerly awaiting this release. I love a lot of the free astronomy applications that are out there, including AstroPlanner, Google Sky and Stellarium, to name but a few. But this software release promised to deliver quite a lot, and I think it does.

The software is online. You download an executable that is stored locally, and it then hooks up to the net to get the data to drive its display. You actually have access to a very nice range of telescope data, including data from the Sloan Digital Sky Survey, different infra-red views, the US Naval Observatory and various radio observatories. The list goes on and on. It's quite exhaustive and could be bewildering to newcomers (to them, I recommend starting off with the Digital Sky Survey).

The interface is very user friendly and quite intuitive. You can explore different types of objects fairly easily, including galaxies, nebulae and even the Messier Catalog. You can search for specific objects, say a cluster or nebula, by their scientific/catalog designation or even their common name (e.g. "Orion Nebula"). For more advanced users, you can also plug in RA and DEC coordinates and let the software take you there.

Another cool feature is the Guided Tour feature. Just select it, wait for the software to communicate with home base over the Net, then take a tour of galaxies, planets, surveys, etc. My favorite so far is the "Interesting Objects" tour, which takes you for a very cool spin around the universe, checking out the more exotic features that are out there.

I was very impressed, however, with the Telescope function. Although I haven't tested this yet, it entails being able to drive your GOTO scope using the World Wide Telescope software. You need to download an additional piece of software, but once done, you could really set yourself up for a fun astronomical experience.
You could select a guided tour, let the software run through multiple objects based on the tour you select, and let it slew your scope to each object. To me, that's a brilliant idea. Provided you can set up your scope and laptop somewhere where you have Net access (WiFi would be best!), you're in for a pretty nifty tour of the heavens! I think this functionality could also serve as a very good teaching tool in the field for public star parties. I will try this next time I am out at Canyon of the Eagles.

One word of caution: this can be a bit of a resource hog. When running it on my laptop, other applications do come to a standstill from time to time. It's somewhat minimal though, so not too bad.

I guess this has a lot of serious scientific applications for more advanced astronomers. For the serious amateur astronomer, this free software presents amazing value through its rich dataset, intuitive interface and telescope control functionality. I also sat down with my kids tonight and showed it to them, and they just loved it. A big thumbs up from Phil!
OPCFW_CODE
Snoopy is an open-source tool that gives you immediate visual feedback on your React components. Focus on delighting your users. We're fast, so you don't break your flow. Preview both in isolation and in context. Whatever your tooling preferences, we support them.

To give it a quick try, you can run it with

When you're serious, just add it as a dependency to your project (so it'll run immediately, every time):

npm install --save-dev @prodo-ai/snoopy

The docs are pretty lean right now, as things are changing fast, but they should get you up and running.

Snoopy gets things done. When you hit 💾 Save, everything gets updated instantly. No more waiting for Webpack to get around to it. And because you're looking at every component, you don't need to click around to get things into the correct state. They're already there.

Computers were made to compute. Snoopy renders everything you've got every time you change anything, giving you all the information you need to make quick decisions.

Everything works out of the box. You don't need to configure anything to get Snoopy running and rendering your components. All you need to do is make a few examples. Soon, you won't even need to do that.

Snoopy already finds your components for you. Soon it'll be able to analyse the way you interact with them in order to infer the kinds of properties those components need, and provide those properties automatically. This means you won't have to design your examples up-front. You'll just let Snoopy think a few things up, and then tweak to suit your needs.

Once you're happy with your design, let the machine double-check it for you. When it comes to design and front-end development, we're not so much looking for "this is right" as "this feels right". Typical development best practices often don't apply. In the near future, Snoopy will take snapshots of your component examples, and make sure that whenever your design diverges from what you expect, you'll be notified.
And because it can check each component individually, it can work quickly and effectively. If this intrigues you, tell us on Spectrum and help us design the next phase of Snoopy. Your UI has so much state that bugs are inevitable. Figuring out what's going on at any given moment is nigh-on impossible. And diagnosing bugs found by your users is even worse. In the future, Snoopy will help you with this, making sure that you can always track the series of events that got you into a particular state, without the overhead of conventional approaches. And we're going to make it more fun, too. 😉 If this intrigues you, tell us on Spectrum and help us figure out how we can help you solve your problems with state management. You've scrolled this far. You're intrigued. Try it out and see how it goes. And let us know what you think:
OPCFW_CODE
I remember when I first became the lead of a small development team. This was scary, since I had never really managed time or resources for anyone but myself. Beyond that, these smart and capable developers were now looking to me for career development and people management help, in addition to technical insight. This was all new to me, so I started to look for training, books, coaching, basically anything I thought could be helpful.

Through a little planning and a lot of luck, I ended up attending a brown-bag by an experienced HR manager. One thing that I remembered and used for many years from that talk was that you had to understand the people in your team. She emphasized that the manager then had a good chance of using opportunities for developing, rewarding and guiding the team in ways team members would want and like the most. And for that, the manager had to know the people. She had a method for it that she had used for many years: it's called the Heart-Tree-Star method. I have seen variations of this used by folks, perhaps having attended the same talk and morphed it in their own way. I want to share the way that I used it and matured it over the years, with the hope that it may help others as well.

The method involves you asking three questions of any and all new team members. You may be getting assigned to a new team leadership role, or a new team member may be joining; it works both ways. One way of doing this is to make your first 1:1 a rather informal "getting to know each other" meeting and, at the end, ask the team member to complete an assignment. The assignment is for them to answer the Heart-Tree-Star questions, but to do so in the next meeting.

I find that this part of the first 1:1 meeting can take many forms, but mostly somewhere between two significantly different extremes. One is that the person jumps into answering the questions, or part of them, sometimes even before listening to the descriptions, right there and then.
The second is that the person may ask for more details, descriptions, expectations, the format of delivery when they are ready, etc. There are many variations between these two, of course. If you use this method, you will see the extremes as well as the middle of the spectrum. If nothing else, it will tell you about the people in your team and how to interact with them next time, especially when giving assignments. Let's talk about how you ask the questions and give the assignment, and what you learn from the answers.

The short way of asking the heart question is: "Where is your heart?" When you get that empty look from your team member, which I promise you will sometimes, you can go on to explain that this is the technology area or field that they feel most excited about. The kind of project that they think of when they are on a long drive, in the shower, right before they go to sleep or when they wake up. This is not a conscious, planned career thought, but specifically, where your heart is. What is most exciting to you? What gets your blood boiling? What kind of projects? What kind of technology? What kind of work would you do, if you were to decide only on the type of work... not money, not location, etc.?

The answer to the heart question can change over time, but it rarely fluctuates too far from a theme. Depending on the person, the answer to this has been as specific as "xyz algorithms..." or as vague as "build stuff". In the end, it is a great entry to finding out what excites and motivates an individual. The conversation does not have to be one way; they can hear about where your passion lies too, especially about how it ties to the team. Given that either you or the team member is new to the team when you are having this conversation, they will want to know about you as much as you want to know about them. This answer helps a great deal in finding experts or go-to folks in the team over time.
If you can figure out the interests of the people in the team, it makes it much easier to form focused teams or grow experts too.

The short way of asking the tree question is: "What does growth look like for you?", "What would you like to be in 5-7-10 years?" or "Who would you like to become in a few years? Any role models?"

The answer to the tree question changes over time. People grow, and their priorities change for various reasons; although what they want to work on may not change, how they want to work on it may. It is important to understand this to make sure you have the right expectations of your team. In some cases, it may help you with succession planning too. In others, you may find out that you really need to change assignments for folks in the team, to better match desires, skills and positions. The tree conversation is one that needs to be repeated at least every 2 years.

The answer to the tree question depends a great deal on the company and the team models. If the company values or encourages deeper organization charts with small teams with managers, etc., it is very likely that you will find a lot of folks choosing management as a path. Interestingly, in those environments, team members who would like to grow as individual contributors find this opportunity invaluable for expressing that they want to grow, but not as a manager. In environments with flatter organization charts and less formal managers, you may find team members who like being technical leaders without being managers looking for opportunities to shine, or someone who is contemplating what formal management responsibilities would look like. Either way, it is a great way to explore what team members really want. Answers to this question shattered some of the stereotypes and presumptions I had over the years, making for pleasant surprises.

Star is the hardest of the questions to ask and answer.
It is also the one question that will give you the most insight into what your relationship with a team member will be. The short way to ask this question is: "Aside from financial rewards, what is the best way to reward you? And what is the best way to give you bad news or feedback?"

I promise you will get the boilerplate, obvious answers like "no bad feedback in public" or "recognize and encourage", etc. I have been having a lot more fun with this question since I added the "non-financial" clause to it. You may find some folks consider career guidance and coaching a privilege, and more of it a reward. You may find folks who consider being left alone, as autonomous as possible, to be the greatest reward. The sky is the limit in terms of what you can hear as an answer to this question, and that is normal. Individual interpretation of behaviors multiplied by expectations of recognition creates infinite possibilities.

The harder part of this conversation is, of course, the negative feedback part. It is hard to tell someone how they should tell you that you may not be doing well. Nobody wants to even think about that, but it matters. An open conversation on this matter will help build trust and open communication, even if you get no other benefit or never need this kind of message to be delivered.

There is also the style component in the star question. Of course you will recognize someone, but what is the best method? People have varying preferences. Here are a couple of examples.

One case I ran into was with a great engineer on my team. We did have this conversation, but despite that, I failed him on one occasion. As such, this case became an example I share with team members when I ask the star question. As customary, after completing a milestone we sent a mail announcing completion. I replied all to the team and thanked this person for his extraordinary contribution, only to hear back from the engineer that I should not do that again. He was shy.
He considered his name being mentioned in public, even for this kind of positive topic, a negative event. So I learned and adjusted. I'd like to think that having had the star conversation with him beforehand actually opened him up to being able to tell me this.

Another case is from my own experience. I don't like public recognition for time served. I believe that recognition should be merit based, and time served in a job does not accrue merit, unless you are in the armed forces, a survivor show, or something similar. To take pride in surviving a time period in a job would be to accept either that your contribution is not enough and you made an effort to hide that, or that the job itself by nature creates an elimination structure or threat. When one of my managers wanted to give me an award for seniority in the company at an all-hands meeting, I asked him not to do it. He was surprised. I had to explain myself. To this day, I think that he might have been offended. If he had asked me about the star before, I would have told him.

As I mentioned, these are just methods and tools that we use, and we benefit from them to the extent that we make them ours. You might have heard of this method being used by others; you may even be using it yourself. Yours may be the same, or different in some specific way. It does not matter. It is all about making the workplace more fun, personal and productive. To quote our marketing professor: The phrase "it is not personal, it is just business" is not valid, because business is personal.
OPCFW_CODE
I came across an article on my favorite tech news site, ZDNet, that said Microsoft had predicted 10 years ago that the Internet is the next platform. But Microsoft still spent bazillions of dollars making Windows XP and the new Windows Vista. Meanwhile, under Microsoft's radar, two Stanford students developed something in their dorm room, a search engine, and by 2005, they are big. HUGE. Google.

With Google's way of innovation, their ideas, and having the top minds in the field (except they don't have me yet :p), they are developing a lot of things, and they are not platform specific, but they are for the internet. Gmail, Maps, etc. They have more, and you'll see them by visiting their beta section.

So now Microsoft is feeling the heat. Without having a specific operating system, you can use any of Google's Internet products. Microsoft just wants to take over the world, so they will fight this and start doing their own, or they just don't want Google to get too big, because then Google can go stealing all their smart employees, paying them the big bucks, giving them the Presidential Suites, etc., and using them to develop products specifically targeting Microsoft products, instead of Microsoft doing it to them. The playing field is leveled a bit.

It's an interesting concept, the Internet as a platform. How I picture it, the possibilities are endless. Before I had that vision, though, and before I read that article, I had pictured something a little different, something like Google's platform, the latest desktop search. Plug components into a base platform, and the base provides a lot of the functionality that the components need, allowing quicker development. Think of Mac's Widgets. There's a widget container that can provide lots of functionality to the widgets, and then there are widgets that you can plug in. I always imagined something like an application container. I could have small apps that plug in, and you can open any of them from this container.
I had thought of this before Mac's widgets, but instead turned towards internet applications. My main reason for this thought process was how Java works. I didn't know if you could make a Java program run automatically by double-clicking it; you always have to open them with another program. Of course: have everything run under one program. (I later found out about JNLP, the Java Network Launch Protocol, which launches JAR files containing a Java program.) Sometimes solutions are so obvious for one problem that they aren't even considered for another problem.

What wasn't obvious to me is that this idea had already been done! In fact, everyone is doing it! When you visit a website, you are typically using an application written for the web. An application. Written for the web. My container application, the platform for running every program I write, is in fact your web browser.

This seems like a great platform. Some obvious aspects that you have to watch out for are backing up data, security, limitations of certain web browsers, certain web browsers not following web standards, downtime, scalability, application flow, user experience, and users. Some great benefits of web applications are deployment, updating everyone's version instantaneously, data stored in a central location, and, if you secure the server, being virtually unhackable... if you develop it to be that way.

Having a client application obviously has its benefits. You can access local resources (disk drives) and do stuff that you can't do in a web application, like video games and accessing hardware, and stuff that would kill the resources on a web server if too many people did it at once... intense applications. Basically, it depends on the application whether you should make it a client application or a web application, and whether you can make it a web application. There aren't too many downsides to writing a web application, but they are pretty big downsides. There is another one. HTTP.
HTTP is pretty primordial. HTTP is the protocol by which web servers communicate with the world. It consists of numbered codes and data separated by line breaks. It was developed before XML. However, XML has its obvious downsides. It's heavy, lots of text. Depending on your data, XML can double the size. It's mainly used for text, so you wouldn't normally go storing your images in there. I only bring this up because of client/server applications, or server-to-server communication, which still falls under client/server.

This is why SOAP was invented. SOAP is an XML format that was developed so that multiple applications, infinite applications, could send XML data over HTTP. A standardized format is a good start. HTTP can stay as it is, as long as everyone uses SOAP. This was the advent of web services: small applications written to run on the server and communicate with the client. Usually just a function or two. There's a huge history there (search the internet for RPC or "Remote Procedure Call", and you'll see what I mean), and the idea was to make a standard way, rather than hundreds of developers fending for themselves, all writing a different way to call functions over the internet.

Google has realized this. Maps and Gmail use AJAX extensively. It is the way of the future, and it is important enough that soon every browser will have it. But this isn't just about writing a web application that appears friendly to the user. It's about writing many applications that are all friendly with each other, and that all appear friendly to the user.

Imagine an internet portal, a website that you go to as the first page you visit on the web. It has everything: news, stocks, your email, messages sent to your IM client that you missed, emails from other accounts you have, voice mails from work and from your cell phone, reminders about events in your calendar, and anything else you can think of. This is Google's vision... probably.
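To make the SOAP idea concrete: a SOAP request is just an XML envelope wrapped around a function call, which any client can build and POST over plain HTTP. Here is a minimal sketch in Python (the service namespace, method name and parameter are made up for illustration; a real service would define its own):

```python
import xml.etree.ElementTree as ET

# The SOAP 1.1 envelope namespace.
SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_soap_request(method, params, ns="http://example.com/hypothetical-service"):
    """Wrap a remote procedure call in a minimal SOAP 1.1 envelope."""
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    call = ET.SubElement(body, f"{{{ns}}}{method}")
    for name, value in params.items():
        arg = ET.SubElement(call, f"{{{ns}}}{name}")
        arg.text = str(value)
    return ET.tostring(envelope, encoding="unicode")

# The XML-vs-plain-text overhead mentioned above is easy to see:
# the payload for a one-argument call is dominated by envelope markup.
payload = build_soap_request("GetQuote", {"symbol": "GOOG"})
print(payload)
```

The point is the standardization: as long as both sides agree on this envelope shape, the transport underneath can remain plain old HTTP.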
Imagine having all this personal data on one website, collected from many different web applications, each using SOAP to communicate with the others, sending XML to the user's browser on each AJAX request, and reading all this personal data on the fly to determine which advertisements to show that user. Advertising is still Google's main source of income, besides selling stock. "But Google's also buying up loads and loads of dark fiber and buying wireless internet technologies and WAPs," you say... Yes, they have invested in a company that can triangulate exactly where you are when you connect to a wireless network. So you can search for the closest guitar shop to the exact point on which you are standing. This, on a portal full of all of that other information I mentioned, would just be showing off. This is where I think Google is heading. As with its search technology, I think the Internet can do better. I must emphasize this. I've mentioned this before, here. I think all of Google's web applications will supply their data this way. I quote myself: "Imagine, if Google, instead of just reading all of the HTML through a website url, can just ask a website 'Yo, what's your deal?!' and the website can respond back 'Dude, I am a guitar shop, here are my wares.'" RDF is this for news. Somehow Google is able to extract prices of goods on websites as well, and build a shopping cart around them. But instead of Google just being able to search these results for items you may be looking for, what if there were no website that actually sold this stuff, but Google just read data from a server, through another protocol, and did everything: shopping cart, credit card processing, etc.? Google would be the only online shop. Or, what if someone else did this. Like me! No, there's an "end of the world" scenario in there somewhere. No more online shops, just Google, and fewer jobs, and less money, and more Google.
It could be bad, let's hope that they're only doing the portal mentioned above :)
Customer churn prediction using machine learning helps you understand why customers stop buying or using your services altogether. Understanding customer churn is valuable; every brand faces this problem today. Customer churn is the worst nightmare of every business. Once your customers decide that they are no longer interested in your products or services, it doesn't take long for them to jump ship. Your focus shifts from acquiring customers to retaining your existing customers, and this is where customer churn prediction proves its worth. How to get started with customer churn prediction Customer churn prediction is a business challenge every company faces at some point. It does not matter what size your business is or how you operate; if you sell products or services, you face the problem of customer retention. The first step toward customer churn prediction is to define what it means to keep a customer. The second step is to identify all the factors that affect customer retention, determine which of them impact your business's performance, and estimate how much it costs you when a customer leaves your service or product. Churn Rate refers to the percentage of customers that leave over a specific period (usually monthly). It's calculated by dividing the total number of customers who left during a given period by the total number of active customers at the beginning of that period, then multiplying by 100%. Customer Churn Prediction refers to predicting when customers will leave, so that appropriate actions can be taken before they do. Top reasons affecting customer churn Customer churn is a problem that can affect any business.
If you know how to get started with customer churn prediction, then knowing the main reasons that drive churn will help you predict it and avoid it. Poor customer service is one of the main causes of customer churn. Another factor is price; you must keep comparing your prices with the competition. If your product does not meet the needs of your customers, they will leave you for another brand that does. Customers also churn when you do not meet their expectations; if they do not get what they expect from your product or service, they will go somewhere else where they can get what they need and want. How to work on predicting customer churn using machine learning Customer churn prediction is a critical part of customer success and growth. It helps you understand why customers are leaving and how to prevent it. The best way to predict customer churn is with machine learning, because machine learning can identify at-risk customers and help you understand why they want to leave. It's also an excellent way to improve your marketing campaigns and make them more effective, so your customers stay with you longer. There are many types of machine learning algorithms you can use for churn prediction. Here are some of them: Decision trees: This algorithm identifies the riskiest customers based on their historical data. Decision trees are easy to understand but can become very complex because they have many branches and leaves. If you use this technique, validate the tree's accuracy before relying on any one branch as the final result, because it may not be accurate enough for your needs. Neural networks: This algorithm uses artificial neural networks (ANNs) to find patterns in data sets and then make predictions about future events or outcomes based on those patterns.
ANNs learn from previous experience during training and apply what they have learned when making new predictions, which can make them more accurate than simpler models. There is a five-step process to help in predicting customer churn using machine learning: Knowing the problem and the goal: The first step in any machine learning project is to understand the problem and the main goal of the analysis. That will help in determining the type of machine learning to use. For example, to predict a yes/no outcome such as whether a customer will churn, you might use a classification model (like an SVM or Naive Bayes). If instead you are trying to predict a continuous quantity, such as expected customer lifetime value, you would use a regression model (like linear regression). Once you have determined what type of model you want to use, it is time to collect data and prepare it for training. Data collection: Once you have chosen the type of machine learning, you have to decide on the data sources needed for modeling and forecasting. That includes gathering data from all relevant sources, such as web pages, social media posts, and emails; data cleaning, since data can contain errors and duplicates that must be removed before analysis; and data preparation, since data may need to be aggregated or converted into a uniform format before analysis. Preparation of data: This is the stage where the collected data is converted into the format most useful for machine learning. The main purpose is to ensure that the data has been collected consistently and is in a usable form. Testing and modeling: In this stage, the machine learning model is built. It also involves validation of the model and performance monitoring, to ensure correct customer churn predictions from the historical data. Monitoring and implementation: This is the final stage of machine learning development to help predict customer churn.
At this point, a churn model based on machine learning has been created. Customer churn prediction using machine learning is difficult for companies because churn is not one-dimensional. To accurately predict customer churn, you need data on each customer interaction, and gathering such data is a difficult task. Even with multiple data points like product usage, transaction history, and engagement behavior, the root causes of customer churn are hard to uncover through simple customer profiling alone. The biggest challenge in this process is how organizations segment their customer base into different buckets to measure loyalty and propensity to churn.
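As a toy illustration of the ideas above, here is a minimal Python sketch: the churn-rate formula exactly as stated, plus a single hand-written decision rule standing in for one branch of a decision tree. The `days_since_last_login` feature and the 30-day inactivity threshold are made up for the example; a real project would train a model (e.g. with scikit-learn) on much richer data.

```python
# Illustrative only: churn-rate formula plus a toy one-rule "decision stump".

def churn_rate(customers_lost: int, customers_at_start: int) -> float:
    """Churn rate = customers who left / active customers at start * 100."""
    return customers_lost / customers_at_start * 100.0

def predict_churn(days_since_last_login: int, threshold: int = 30) -> bool:
    """Toy decision rule: flag a customer as a churn risk if they have been
    inactive longer than `threshold` days (a single decision-tree branch)."""
    return days_since_last_login > threshold

# 50 of 1000 customers left this month -> 5.0 % monthly churn
rate = churn_rate(50, 1000)

# Flag at-risk customers by last-login recency (hypothetical data)
at_risk = [predict_churn(d) for d in (3, 45, 90)]
```

A trained decision tree learns many such thresholds and branches from historical data instead of having them hard-coded.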
ScanDisk & Defrag are both utilities that come with Windows. ScanDisk checks the health of your hard drive, while Defrag gathers up all the segments of a program & places them together in one place on your hard drive. Running each of the utilities once a month should help your system. To run the programs open My Computer - then RIGHT CLICK on any hard drive you choose. You will then have a window with 2 to 4 tabs across the top: General - Tools - Sharing. CLICK on the Tools tab. You will then see a window with 2 or 3 sections in it, depending on your configuration. The top section will have the heading "Error-checking status" with a button "Check Now". CLICK this button. In the next window you can select any drive you wish to check, not just the drive you right clicked on. I always do a "Thorough" scan & have "Automatically fix errors" ticked. NOW BE WARNED: this can take some time. If you have a screen saver running, TURN IT OFF. If a fault is found ScanDisk will restart, and if the contents of the hard drive change it will also restart. If ScanDisk restarts 10 times, a notice will pop up saying something like "ScanDisk has restarted 10 times. Would you like to continue to receive notification of errors?" Just click NO & let it get on with it. Defrag is in the bottom section of the Tools window with a "Defragment Now" button. CLICK this & then the "START" button. This will also take some time (especially the first time it is run). ScanDisk & Defrag can also be started from the Start menu, but the exact position of the short-cut can vary. Try this sequence: START - PROGRAMS - ACCESSORIES - SYSTEM TOOLS - then CLICK on DISK DEFRAGMENTER or SCANDISK. Some computers struggle to complete the above tasks. This is usually because of another program running in the background. If this is the case, try running your computer in Safe Mode. Restart your computer & press the (F8) key.
Press it again & again - F8, F8, F8 & so on - until you see a menu with a list of options, with "Safe Mode" being No. 3. Choose this option to start your computer with a minimal set of drivers & run ScanDisk & Defrag in Safe Mode, then restart your computer normally. Win XP, NT & 2000: these can be opened as above - BUT for Win XP, NT & 2000 using the NTFS file system you can schedule a scan for the next time Windows starts. In other words, tick the boxes to schedule the scan, then restart your computer to get it done out of the way.
from collections import Counter

import lemmy
import nltk
from nltk.corpus import stopwords
import spacy


# splits an input doc into its component sentences
def split_sents(text):
    return nltk.sent_tokenize(text)


# splits a doc into sentences and prints them out
def print_sents(text):
    for sent in split_sents(text):  # loop through sentences
        print("\n" + sent)


# prints out the syntactic info for an input doc
def print_syntactic_info(doc, nlp):
    for sent in split_sents(doc.text):  # loop through sentences
        print("")
        for token in nlp(sent):  # for each token, print syntactic info
            print(f"{token.text:{15}} {token.dep_:{20}} {token.pos_:{20}} {token.tag_:{20}}")


# removes the stopwords from an input text
def remove_stopwords(text, nlp):
    # keep a word only if it is not in the stopwords list
    words = [word for word in text.split() if word not in stopwords.words("swedish")]
    return nlp(" ".join(words))  # return a doc without the stopwords


# prints the word frequency for the 5 most common tokens
def print_word_frequency_list(doc, nlp):
    # remove stopwords as these are particularly common and not of importance
    no_stopwords = remove_stopwords(doc.text, nlp)
    # count token texts (not Token objects, so duplicates aggregate),
    # skipping punctuation
    words = [token.text for token in no_stopwords if not token.is_punct]
    freq = Counter(words)
    # output the 5 most common tokens
    print("\n" + str(freq.most_common(5)))


# prints the POS tag frequency list
def print_pos_frequency_list(doc):
    POS_count = doc.count_by(spacy.attrs.POS)  # count number of each POS tag
    print("")
    for i, v in sorted(POS_count.items()):  # output to user
        print(f"{doc.vocab[i].text:{5}}: {v}")


# print the tokens in an input doc
def print_tokens(doc, nlp):
    for sent in split_sents(doc.text):  # for each sentence, loop through tokens
        print("")
        for token in nlp(sent):  # output token and its character offset
            print(token, token.idx)


# print the stopwords in an input doc
def print_stopwords(doc):
    stopwords_list = []
    for token in doc:  # for each token, add stopword to list (no duplicates)
        if token.text in stopwords.words("swedish") and token.text not in stopwords_list:
            stopwords_list.append(token.text)
    print("")
    for stopword in stopwords_list:
        print(stopword)  # output to user


# prints the dependency skeleton for an input doc
def print_dependency_skeleton(doc, nlp):
    for sent in split_sents(doc.text):
        print("")
        # for each sentence, output the token and
        # morphological/syntactic information
        for token in nlp(sent):
            print(f"{token.text:{15}} {token.dep_:{20}} {token.head.text:{20}}")


# prints the lemmatised form of tokens in an input doc
def print_lemmatise_doc(doc, nlp):
    lemmatiser = lemmy.load("sv")  # load up lemmy and loop through sentences
    for sent in split_sents(doc.text):
        print("")
        for token in nlp(sent):  # loop through tokens and print lemmas
            lemma = lemmatiser.lemmatize(token.pos_, token.text)[0]
            print(f"{token.text:{15}} {lemma:{15}}")
Docker is an open-source container virtualization application mainly written in Go (like PhotoPrism). It is ideal for running applications on any computer without extensive installation, configuration or performance overhead. All you need to do is download an image and start it. However, Docker is not commonly used by end users and is more popular among developers / admins. For that reason, PhotoPrism must also be usable without running in a container, at least on Linux and OS X.

Continuous Integration / Deployment

Build and push of an updated container image to Docker Hub is automatically performed by Travis CI whenever develop is merged into master and the tests are all green. For that reason, we don't use semantic versioning for our binaries and container images. A version string might look like 181112-edc7c2f-Darwin-i386-DEBUG instead. Travis CI uses the photoprism/development image for running unit and integration tests on all branches and for pull requests, see Dockerfile. When creating new images, Docker supports so-called multi-stage builds: you can compile an application like PhotoPrism in a container that contains all development dependencies (like source code, debugger, compiler, ...) and later copy the binary to a fresh container. This way we could reduce the compressed container size from ~1 GB to less than 200 MB. Most of that is used by Darktable, TensorFlow and Ubuntu 18.04. Our photoprism binary is smaller than 20 MB.

FROM photoprism/development:20181112 as build

# Build PhotoPrism
WORKDIR "/go/src/github.com/photoprism/photoprism"
COPY . .
RUN make all install

# Same base image as photoprism/development
FROM ubuntu:18.04

WORKDIR /srv/photoprism

# Copy built binaries and assets to this image
COPY --from=build /usr/local/bin/photoprism /usr/local/bin/photoprism
COPY --from=build /srv/photoprism /srv/photoprism

# Expose HTTP port
EXPOSE 80

# Start PhotoPrism server
CMD photoprism start

- https://forge.sh/ - Define and deploy multi-container apps in Kubernetes, from source
- https://www.telepresence.io/ - a local development environment for a remote Kubernetes cluster
- https://hub.docker.com/r/multiarch/qemu-user-static/ - qemu for building multiarch images with Docker
- https://github.com/opencontainers/image-spec - standard labels for Docker image metadata
- https://github.com/Yelp/dumb-init - A minimal init system for Linux containers
import format from 'date-fns/format'
import {
  GROUP,
  Scope,
  SCOPE,
  AuthScopeTree,
  AUTH_API_SCOPE_GROUP_TYPE,
  ScopeGroup,
  AccessFormScope,
  MappedScope,
} from './access.types'

export const DATE_FORMAT = 'dd.MM.yyyy'

export const formatDelegationDate = (dt: string | Date) =>
  format(new Date(dt), DATE_FORMAT)

/**
 * Checks if scope is a scope group type
 */
export const isApiScopeGroup = (scope: Scope): scope is ScopeGroup =>
  scope.__typename === AUTH_API_SCOPE_GROUP_TYPE

/**
 * Gets scopeTree current model index based on order in list and takes into
 * account if scope bears children. This makes sure the model indexes will
 * all be unique and in sequential order, i.e. 0,1,2,3,4,5,6,7,8,9,...
 */
const getScopeTreeCurrentModelIndex = (
  authScopes: AuthScopeTree,
  authScopesEndIndex: number,
) => {
  let index = 0

  for (let i = 0; i < authScopesEndIndex; i++) {
    const scope = authScopes[i]

    // If scope has children, add the number of children to the index
    if (scope.__typename === AUTH_API_SCOPE_GROUP_TYPE) {
      index += scope.children ? scope.children.length - 1 : 0
    }

    index++
  }

  return index
}

/**
 * Extends scope with model property for form state
 * @param scope current scope
 * @param index current scope index in list
 * @param scopes list of scopes
 */
export const extendApiScope = (
  scope: AuthScopeTree[0],
  index: number,
  scopes: AuthScopeTree,
): Scope[] => {
  const currentModelIndex = getScopeTreeCurrentModelIndex(scopes, index)

  if (scope.__typename === AUTH_API_SCOPE_GROUP_TYPE) {
    return [
      // Scope parent of the children
      {
        ...scope,
        model: `${GROUP}.${currentModelIndex}`,
      },
      // Scope children
      ...(scope.children?.map((s, childIndex) => ({
        ...s,
        model: `${SCOPE}.${currentModelIndex + childIndex}`,
      })) || []),
    ]
  }

  return [
    {
      ...scope,
      model: `${SCOPE}.${currentModelIndex}`,
    },
  ]
}

type MapScopeTreeToScope = {
  item: AccessFormScope
  scopeTree?: AuthScopeTree
  validityPeriod: Date | null
}

/**
 * Maps and flattens scope tree to a list of scopes
 */
export const formatScopeTreeToScope = ({
  item,
  scopeTree,
  validityPeriod,
}: MapScopeTreeToScope): MappedScope | null => {
  const flattenScopes = scopeTree
    ?.map((apiScope) => {
      if (apiScope.__typename === AUTH_API_SCOPE_GROUP_TYPE) {
        return [apiScope, ...(apiScope?.children || [])]
      }
      return apiScope
    })
    .flat()

  const authApiScope = flattenScopes?.find(
    (apiScope) => apiScope.name === item.name[0],
  )

  const validTo = validityPeriod ?? item.validTo

  if (!authApiScope || !validTo) {
    return null
  }

  return {
    name: authApiScope.name,
    displayName: authApiScope.displayName,
    // validityPeriod has priority over item.validTo
    validTo,
    description: authApiScope?.description,
  }
}

export const accessMessages = {
  dateValidTo: {
    id: 'sp.settings-access-control:access-item-datepicker-label-mobile',
    defaultMessage: 'Í gildi til',
  },
}
Use github.com/containerd/platforms package

Update the platforms package to alias to the new platforms package. I added cherry-pick labels (for the first commit).

Hm.. we may want to check the go module versions though; it looks like some versions specified in the new module are pretty recent (hcsshim v0.12.0-rc.2). We should check what the minimum version is that works, and pick that (to allow go module MVS to do its thing). I opened a PR to downgrade the minimum required version of hcsshim, so that this module can more easily be consumed by other projects and in other branches (1.6, 1.7); https://github.com/containerd/platforms/pull/5

Perhaps we should wait for https://github.com/containerd/platforms/pull/5 (tag as v0.1.1); that way we can update at least the v1.7 branch to use the aliases. For 1.6 we'd have to update hcsshim to v0.10.0 (to be discussed), but 1.7 should already meet the required version. (currently trying with my PR branch in https://github.com/moby/moby/pull/47142)

The good news: it now builds (v0.1.0 forced a GRPC update, which didn't work well); the "bad" news is that I have 2 tests failing that expected a platform foobar to be invalid, but somehow it now doesn't produce an error; https://github.com/moby/moby/pull/47142#issuecomment-1902446112 Could be some bug on the moby side, but we'd have to dig into that (perhaps @dmcgowan has time to check). I'm sure that can be fixed (just curious what changed in the behavior).

Ah! I think I found the cause; the new module doesn't use containerd's errdefs package. It looks like the change of errors is a breaking change, and the new errors are not exported, so they cannot be detected / typed. We should find a solution for this but that change is already in main and our 2.0 betas. Splitting out errdefs may be the best solution. We could use interfaces to test for errors but that wouldn't help in cases where we use errors.Is.
Yeah, agree; for main / v2.0 (beta) this is probably fine if we already have that errdefs change in current main; let's:

- update this PR to use v0.1.1 (to at least include the changes from https://github.com/containerd/platforms/pull/5)
- work on the errdefs situation; we may need a common implementation of the errdefs definitions across branches, so that error-types can be matched in situations where multiple containerd versions (v1.x and v2.x) are in use, and errors cross boundaries.

"LGTM" after updating the first commit to v0.1.1

/retest
MATH32012 Commutative Algebra - 2012/13, Semester 2 The Online Test is currently accessible via the MATH32012 course content page in Blackboard. You may retake the test for revision purposes (e.g., to practise the computation of Gröbner bases). It will not affect your coursework mark. (The coursework marks have been finalised and are available via the Grade Centre in Blackboard.) The following materials are available: (single file, lectures only) ANSWERS to revision questions Session notes (most recent first) Week 11, examples class * Week 11, revision lecture (slides) * Week 11, lecture 1 Week 10, example sheet * Week 10, examples class (CORRECTED) * Week 10, lecture 2 * Week 10, lecture 1 Week 09, example sheet * Week 09, examples class * Week 09, lecture 2 * Week 09, lecture 1 Week 08, example sheet * Week 08, examples class * Week 08, lecture 2 * Week 08, lecture 1 Week 07, assessed homework 1: model solutions * Week 07, example sheet * Week 07, lecture 2 * Week 07, lecture 1 Week 06, example sheet * Week 06, examples class * Week 06, lecture 2 * Week 06, lecture 1 Week 05, assessed homework 1 * Week 05, example sheet * Week 05, examples class * Week 05, lecture 2 * Week 05, lecture 1 Week 04, example sheet * Week 04, examples class * Week 04, lecture 2 * Week 04, lecture 1 Week 03, example sheet * Week 03, examples class * Week 03, lecture 2 * Week 03, lecture 1 Week 02, example sheet * Week 02, examples class * Week 02, lecture 2 * Week 02, lecture 1 Week 01, example sheet * Week 01, examples class * Week 01, lecture 2 * Week 01, lecture 1 Module description and prerequisites Please make sure that you read the module description. You should have general facility for dealing with algebraic structures: complex numbers, sets, groups, rings, fields. For this reason, MATH20212 Algebraic Structures 2 is a prerequisite. About the course Many find MATH32012 Commutative Algebra the most advanced abstract algebra course they take as part of their degree.
Nevertheless, the content of the course is not just a sequence of theorems and proofs. You are expected to learn methods of algebraic computation relating to polynomials in several variables. Solving equations has been a driving force of algebra at least since the Babylonians learned to solve quadratic equations some 3700 years ago. The subject matter of this course is, however, informed by more recent developments. The work of Hilbert in the late 19th and early 20th centuries was key to the modern treatment of multivariate polynomials and provided a basis for commutative algebra and algebraic geometry. His result that every (consistent) system of polynomial equations over an algebraically closed field has at least one solution is known as the Nullstellensatz. But an efficient method of finding such solutions by elimination was not found until 1965, when Buchberger invented Gröbner bases. In the course, key theorems about the ring of polynomials in several variables will be rigorously proved. Algorithms relating to polynomials will be explained and supported by examples. This includes factorising polynomials into irreducible factors and computing a Gröbner basis of an ideal. Results and methods of Commutative Algebra have applications in various branches of mathematics and computer science. Here are some puzzles which we may use in the course as an illustration of the main content. You are welcome to have a go at solving them! - Question 1 (Fermat, 17th century). Find all integers "sandwiched" between a square and a cube. - Question 2. How many ways are there of placing 8 queens on a chessboard so that no two queens attack each other? What about n queens on an n×n chessboard? - Question 3. How many distinct Sudoku boards are there? (A Sudoku board is a 9×9 square with a number from 1 to 9 in each cell, satisfying the Sudoku constraints.)
images from Wikimedia Commons There will be 2 pieces of assessed coursework: Assessed homework 1 (see the link above): a take-home problem sheet set on Wednesday 27 February (week 5), due on Tuesday 12 March (week 7) at 4pm. Blackboard-based online test: a timed, open-book test which the students complete online; multiple attempts are allowed. Previous years' exams Commutative algebra exam papers from years 2008-2012 are available here - Tuesday 1pm-1:50pm, in Schuster Blackett; Thursday 3pm-3:50pm and 4pm-4:50pm, in Ellen Wilkinson C5.1 - Dr Yuri Bazlov - yuri.bazlov, append AT and manchester.ac.uk - 2.220 Alan Turing Building - office hours: Tuesday 2:30-3:30pm. I intend to be available in my office during the office hours, but students may come to see me at other times as well, or make an appointment by email. What is a Gröbner Basis?, a short expository note by Bernd Sturmfels
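For a quick hands-on taste of what a Gröbner basis computation looks like in practice, here is a small sketch using SymPy (not part of the course software; the ideal is an arbitrary example chosen for illustration):

```python
# Compute a Groebner basis of a small ideal in Q[x, y] using SymPy,
# then use it to test ideal membership by reduction.
from sympy import groebner, symbols

x, y = symbols("x y")

# The ideal generated by x^2 + y and x*y - 1
gb = groebner([x**2 + y, x*y - 1], x, y, order="lex")

# gb is now a Groebner basis with respect to lex order x > y;
# ideal membership can be decided by reducing against it
in_ideal = gb.contains(x**2 + y)
```

Reduction against a Gröbner basis decides ideal membership, which is exactly the kind of computation Buchberger's algorithm makes effective.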
import { parseString, stripComments } from 'strip-comments-strings'
import { type CommentSpecifier } from './CommentSpecifier'

export function extractComments (text: string) {
  const hasFinalNewline = text.endsWith('\n')
  if (!hasFinalNewline) {
    /* For the sake of the comment parser, which otherwise loses the
     * final character of a final comment */
    text += '\n'
  }
  const { comments: rawComments } = parseString(text)
  const comments: CommentSpecifier[] = []
  let stripped = stripComments(text)
  if (!hasFinalNewline) {
    stripped = stripped.slice(0, -1)
  }
  let offset = 0 // accumulates difference of indices from text to stripped
  for (const comment of rawComments) {
    /* Extract much more context for the comment needed to restore it later */
    // Unfortunately, JavaScript lastIndexOf does not have an end parameter:
    const preamble: string = stripped.slice(0, comment.index - offset)
    const lineStart = Math.max(preamble.lastIndexOf('\n'), 0)
    const priorLines = preamble.split('\n')
    let lineNumber = priorLines.length
    let after = ''
    let hasAfter = false
    if (lineNumber === 1) {
      if (preamble.trim().length === 0) {
        lineNumber = 0
      }
    } else {
      after = priorLines[lineNumber - 2]
      hasAfter = true
      if (priorLines[0].trim().length === 0) {
        /* JSON5.stringify will not have a whitespace-only line at the start */
        lineNumber -= 1
      }
    }
    let lineEnd = stripped.indexOf(
      '\n', (lineStart === 0) ? 0 : lineStart + 1)
    if (lineEnd < 0) {
      lineEnd = stripped.length
    }
    const whitespaceMatch = stripped
      .slice(lineStart, comment.index - offset)
      .match(/^\s*/)
    const newComment: CommentSpecifier = {
      type: comment.type,
      content: comment.content,
      lineNumber,
      on: stripped.slice(lineStart, lineEnd),
      whitespace: whitespaceMatch ? whitespaceMatch[0] : '',
    }
    if (hasAfter) {
      newComment.after = after
    }
    const nextLineEnd = stripped.indexOf('\n', lineEnd + 1)
    if (nextLineEnd >= 0) {
      newComment.before = stripped.slice(lineEnd, nextLineEnd)
    }
    comments.push(newComment)
    offset += comment.indexEnd - comment.index
  }
  return {
    text: stripped,
    comments: comments.length ? comments : undefined,
    hasFinalNewline,
  }
}
With Ripple, how is the base fee (transaction fee) calculated? The Ripple wiki says (emphasis added): If the reference transaction should cost 10 millionths of a ripple, the "base fee" should be set to 10. This is the current value. This makes it very simple to adjust fees to keep them sensible -- just figure out how many millionths of a Ripple the reference transaction should cost and set that as the base fee. Yes, and how does the system figure out what a reference transaction would cost? It would be great to know how the transaction fee is determined and how it is adjusted in response to deflation (rising value of XRP or falling supply of the units). Manish, you're flooding a lot of Ripple questions onto here at a higher rate than people can vote on them or answer. Can I ask why? @eMansipater Yes, sir, you may ask, and I can tell you that I was reading the specs and I posted the questions as they popped into my head. Is that bad form? I could just write down my questions in a text editor and post them at intervals of an hour or two, if that's better. I certainly didn't mean to "flood" anything. No real problem. Sometimes flooding a lot of questions is an indicator that someone doesn't quite get how to use Stack Exchange, so I was curious where the questions were coming from. Would you say that the sorts of things you were asking about are missing holes in the existing Ripple documentation, or more just that you were using this as a way to kind of get Ripple into your head? Generally Stack Exchange works well for individual answers where someone will take the time to give an in-depth answer. With the rapid-fire style you might have better luck on the Ripple forums. (You'll note that although David Schwartz was able to keep up with basic answers he didn't really have time to answer all the questions in depth.)
Stack Exchange shines best when someone can take a few specific questions and answer them so effectively that anyone in the future of the internet would prefer that answer to any other treatment of the question. When questions are getting fired off too fast, it's hard for the experts to provide that level of detail and quality. That's all. @eMansipater Frankly, I don't know what you're talking about. If you can point me to one question I posted about Ripple in those few minutes that didn't meet the quality standards of this site, then I'll admit you have a point. If the documentation already contained the answer, then I wouldn't be asking the question. "No real problem." OK, case closed! "point me to one question"--it's not really about that, more about continually improving the quality of the site. One weird thing for people to wrap their heads around with Stack Exchange sites is that even an obscure question will often be used by 50 people who later visit, and more than 20,000 people for a really good one. So the real trick to creating a good Stack Exchange site is working to make questions and answers 20,000 times better than they "need" to be. That's what people like me are always on about--the 20,000 other people who could benefit from an improvement. @eMansipater Thanks, I really appreciate the work you put into the site. I have learnt a lot from it for sure. The base fee is contained in the ledger and can only be changed by a pseudo-transaction that gets into the consensus set. It's managed by consensus the same way the reserve levels are. "It's managed by consensus the same way the reserve levels are" Details on that process are here: https://ripple.com/wiki/Change_Process
Certify working beautifully on dev and test servers. So easy! On production server though, we're getting Validation of the required challenges did not complete successfully. The key authorization file from the server did not match this challenge [long-key-here] != [different-long-key].

Things we have tried:
- two domains, same result
- from outside of firewall, browsed to http://one.example.com/.well-known/acme-challenge/configcheck and get "Extensionless File Config Test - Ok"
- edited one of the configcheck files to ensure pointing to the correct place
- Certify Test runs successfully
- Tried manually setting website root directory, manually setting domain match
- Tried web.config in acme-challenge folder with mimeMap fileExtension="." mimeType="text/json" and allow users="*"

Steps taken:
- Click New Certificate
- Give it a name, select website from select list, ensure domain is checked in list to include
- Set Challenge type to http-01
- Run Test
- Request Certificate

Environment:
- Windows Server 2012 R2
- Server has other domains using traditional Comodo certs
- Certify v. 18.104.22.168

It's so easy that I can't tell what I'm doing wrong. Suggestions?

Hi, thanks for getting in touch. There's a known bug that affects some users related to the Let's Encrypt account id. Can you try setting your account email again under Settings (you can update it to the same email address); this will reset the Let's Encrypt account id internally. By default the app will use the builtin http challenge server, so if that works OK then none of the website directory / web.config settings matter. Plus, the error message you're seeing is that the response doesn't quite match what's expected, not that it doesn't get the response at all.

That worked. Thank you very much! For anyone that finds this on Google, my exact steps were (not sure if all needed):
- In Certify app, select Settings, click New Contact, changed email address. I actually changed to a different address and then back, but apparently not necessary.
- Restarted Certify service, again, might not be necessary.
- Went back to Managed Certificates, selected domain and clicked Request Certificate

All works as expected. So easy. Thanks

@Ted - out of interest did you start on v4.0.10 or were you upgrading from an older version (or had an older version already installed)? I'm keen to track down this accountid bug.

I started with v4.0.10. When comparing the settings to one of the working installations, I noticed that it was 22.214.171.124 so I downgraded the troubled server and got the same result.
I've tried quite a few graphical programming languages over the years, such as Pure Data (pd), but having experience with more traditional text-based languages, I was always left frustrated by the seemingly roundabout way of data entry. The same things that attract me to vim and Dvorak made me long for more convenient methods. At the same time, there are some things which are much better suited to graphical programming environments. It's easier to keep track of variables, since they just sit right in front of you. Controls can model traditional interfaces such as knobs for tweaking values, making real-time manipulation much more friendly. Obviously, both systems have their advantages and disadvantages. While typing 'x = y + z*2' is much quicker and more concise than navigating multilevel menus to create discrete operators and operands in a visual system, finding exactly the right shade of indigo is much nicer with a typical palette finder rather than guessing and checking with RGB triplets. Both systems are equally capable, but some tasks lend themselves more naturally to one system than the other. Being able to pick and choose which to use at any given time led me to attempt a parallel model. In Canvasthesia I'm using Python to implement much of the higher level functionality. It's quite a versatile language, and has the somewhat unusual ability to interpret code from an interactive console. I have a few classes which represent various types of entities, such as a Renderable entity, which, startlingly enough, renders something in a scene. Adding a custom Python descriptor (EntityConnection) to a class derived from one of these base entity classes lets the system know that a particular type of entity can be connected to it.
class Test(vj.Renderable):
    testlink = vj.EntityConnection(vj.EntityType.Renderable)

    def __init__(self):
        vj.Renderable.__init__(self)

    def Render(self):
        pass

This simple renderable entity does nothing of interest, as evidenced by its minimal Render method. However, it does contain a link to another renderable entity via its testlink attribute. Because it is an EntityConnection descriptor, it is automatically added to any instance's corresponding visual control. A few lines typed into the console creates a couple objects and adds them to the scene:

test = Test()
vj.MainScene.Attach(test)
test.testlink = Test()

Although only a single Test instance was directly attached to the scene, a simple depth-first search reveals that a second actually exists. Additionally, it is evident that there is a connection between them via the first's testlink attribute. The patchbay shows this intuitively.

Likewise, a new entity could be created in the visual editor and subsequently accessed by the Python console. They are simply parallel frontends manipulating the same backing objects, allowing whichever is convenient to be used at any time. It's the best of both worlds.
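The descriptor-plus-traversal mechanics described above can be sketched in plain Python. Everything here is illustrative: the real vj module's internals aren't shown in the post, so the class bodies and the depth_first helper are stand-ins, not Canvasthesia code.

```python
class EntityConnection:
    """Descriptor marking an attribute as a link to another entity."""

    def __set_name__(self, owner, name):
        self._attr = "_" + name  # per-instance storage slot

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return getattr(obj, self._attr, None)

    def __set__(self, obj, value):
        setattr(obj, self._attr, value)
        # A real implementation could also notify the visual editor here.

class Renderable:
    testlink = EntityConnection()

    def connections(self):
        # Collect every attribute backed by an EntityConnection descriptor.
        linked = []
        for name, attr in type(self).__dict__.items():
            if isinstance(attr, EntityConnection):
                target = getattr(self, name)
                if target is not None:
                    linked.append(target)
        return linked

def depth_first(entity, seen=None):
    """Yield an entity and everything reachable through its connections."""
    if seen is None:
        seen = set()
    if id(entity) in seen:
        return
    seen.add(id(entity))
    yield entity
    for child in entity.connections():
        yield from depth_first(child, seen)

# Two entities, only one attached "directly", the other via testlink:
test = Renderable()
test.testlink = Renderable()
print(len(list(depth_first(test))))  # 2
```

Because both the console and the patchbay would call the same descriptor protocol, either frontend sees the connection the moment the other creates it.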
It seems that there is no need to join anymore... Test period is over and joining just makes problems...
Last edited by ventrical; March 6th, 2013 at 09:38 AM.

you see those videos that pop up a little ad rectangle? those are the only ones that will require flash, if you do the embed or tv# trick you can watch them just fine...

What I find preposterous is that in Internet Explorer 10 YouTube will play everything without need for tricks or edits. more people should refuse to use flash, otherwise html5 and mp4 that should already be in use everywhere will never start gaining ground.

similar question last September, and your post answers it. Thanks.

it's when you open a video that says "I can't play without flash" like this one: and we change the url into: and it plays without flash also you can use external applications like minitube... there are tricks to speed up youtube streaming and tricks that Germans use to see music videos (they are banned in Germany) and documentaries etc... again what annoys me is that no such tricks are needed in windows where you simply can watch all videos without flash and without having to do little tricks
Last edited by nomenkultur; March 6th, 2013 at 06:40 PM.

by that mindset we could all have just stayed with windows since it sort of 'just works' and never demand better technologies... I can see that you have no option but to use flash with bbc/cbs/etc... but if instead of complaining about adobe flash's security bugs/bad support/glitches people just refused to use it those sites would have no option but to switch to html5. As much as I don't like apple, one thing they got right; flash needs to die

Just some observations... I messed around last night with different browsers, opting in to the 'HTML 5 trial' and disabling Flash.
Ubuntu 13.04 x64: Firefox, Google Chrome, Opera
Win8 Pro x64: Firefox (Nightly), Google Chrome, IE10

Oddly enough, some videos wouldn't play without Flash in YouTube under Ubuntu w/ Firefox & Opera. Whereas the same videos were fine with Google Chrome. The kicker is in Win8, all browsers worked with the videos I tested on YouTube (HTML 5). Obviously this is inconclusive as I couldn't test every YouTube video -- lol! Nonetheless, interesting results so far.
Started in 2018! … and already combining data from 67+ sources. Soon moving to open integrations, mobile app, a new kind of search engine, US & world patents, etc just to mention a few... also Estonian open data to follow

Just getting started!
Rows in database
So far covered
See how the data is distributed.
Annual growth of 100+% for second year in a row!

Bringing data to You

Everything about websites related to Finnish companies. Not limited to .fi domains. Amount of data growing at incredible speed. Speed analysis, enhanced content analysis, etc coming soon!

All possible data about the Finnish public companies. Fully automated incremental company data collection always at your disposal. New features, sources and analytics added all the time.

All EU IPO data (applicants, designs, trademarks) just for you. Full XML files available with objections and other nitty-gritty details. WIPO and USPTO data to follow later.

Finnish tenders. As the original data quality leaves a lot to be desired we have fixed it especially for you. Analytics and original XML files also available. Additional analytics and API coming soon.

EU tenders since 07/2018. For now only used for analytics purposes but in future with advanced search functionalities, analytics and tailored alarms.

Free collected data for all – of course! This is the main distribution domain for all open data collected by all Otacode projects. Ideas are welcome!

A new kind of search engine for all data available in verkkotunnus.eu and yritys.io. (Kookkeli is a tongue-in-cheek name for Google in the Finnish language). No user data collection – built-in privacy.

Not all city and community purchase data can be mapped with public companies, hence a site to provide superior search and analytics. It's your money (=taxes) and we need transparency!

Your idea could be the next big thing. Let's get in touch and create something beautiful! World is far from ready – let's start building!
Helping small businesses thrive

Veterinary services with a big heart. Convenient house calls from Nurmijärvi every day of the week. (Otacode installed and hosts the publication platform. Website design by Otacode)

Comprehensive environmental services, delivered with expertise. (Otacode installed and hosts the publication platform. Helping with the email, DNS, etc services)

ElFys provides photodetectors and related services for various light detection applications. We design, develop and manufacture customized photodetectors as well as standard products to fulfill the demanding requirements of our customers. (Otacode installed and maintains the publication platform. Helping with collaboration tools and site development)
When you click on a content topic, you see a message that says the page cannot be displayed, access is forbidden, or the page is blank. We have had reports of this issue occurring on all browsers and have experienced this issue ourselves. The issue occurs randomly, and it is frustrating. We have found that at least one of the steps suggested below remedies the problem most of the time.
- Check to see that this problem is not isolated to a particular content topic. Select several content topics in more than one course if possible. If this problem occurs for only one content topic or in one course but not others, contact your instructor because this suggests files are missing or links are broken. If this problem occurs for all content topics, delete your browser Temporary Internet Files, Cookies, and Cache. (Need help with this?)
- If the content topic is not a webpage but is a different file-type (Word, RTF, PPT, etc.), Internet Explorer may block the file from being downloaded until you give permission. Look for a light yellow information bar across the top of the page. If you see this information bar, you will need to give your browser permission to download the file. Follow the directions provided in the information bar to allow the file to be downloaded.
- Use a different browser like Firefox. Our Downloads page contains a list of several free browsers to use as alternatives to Internet Explorer. No matter which browser you prefer, it is valuable to have more than one browser installed on your system because you will encounter problems like these from time to time on other websites as well.
- Logout of D2L, close all browser windows, restart your browser, and login again.
- Set your browser security and privacy settings to Medium.
- Run a System Check to ensure that your computer meets the minimum system requirements. The system check ensures that you have the proper core plug-ins, a supported web browser, proper browser settings, and appropriate display settings.
If you fail a component of the system check, you will receive an error message explaining why you failed and what actions to take to meet the requirements.
- If you are using Internet Explorer, add d2l.rose.edu to your trusted sites. See the steps below if you need help with this.
- Click Internet Options in the Tools menu.
- Click the Security tab.
- Click Trusted Sites.
- Click the Sites button.
- Type d2l.rose.edu into the textbox.
- Click the Add button.
- Leave the Require server verification (https://) box checked.
- Click the Close button to exit the Trusted Sites dialog box.
- Click OK to exit the Internet Options dialog box.
- Use the Print/Download link on the Content page, select the topic(s) you wish to view, and click the View printable version icon. This will cause a new window to open displaying the content or links to files, which you must click to open.
Bug: TV Shows audio_language not found

Describe the Bug
Using plex_search to filter audio_languages, and it errors out saying the language isn't found. Creating the same filter in the Plex Web UI works as expected

Relevant Collection Config
collections:
  "Anime":
    item_label: Anime
    genre: Anime
    plex_search:
      any:
        label: Anime
        all:
          genre: Animation
          any:
            audio_language:
              - Japanese
              - Chinese

Plex Meta Manager Info
Version: 1.12.1

Link to logs (required)
[2021-09-07 18:59:55,706] [builder.py:518] [DEBUG] | Validating Method: plex_search |
[2021-09-07 18:59:55,706] [builder.py:519] [DEBUG] | Value: ordereddict([('any', ordereddict([('label', 'Anime'), ('all', ordereddict([('genre', 'Animation'), ('any', ordereddict([('audio_language', ['Japanese', 'Chinese'])]))]))]))]) |
[2021-09-07 18:59:55,799] [util.py:144] [DEBUG] | Traceback (most recent call last):
|   File "//plex_meta_manager.py", line 470, in run_collection
|     builder = CollectionBuilder(config, library, metadata, mapping_name, no_missing, collection_attrs)
|   File "/modules/builder.py", line 549, in __init__
|     elif method_name in plex.builders or method_final in plex.searches: self._plex(method_name, method_data)
|   File "/modules/builder.py", line 903, in _plex
|     new_dictionary = self.build_filter("plex_search", dict_data, type_override=type_override)
|   File "/modules/builder.py", line 1342, in build_filter
|     built_filter, filter_text = _filter(base_dict, is_all=base_all)
|   File "/modules/builder.py", line 1287, in _filter
|     inside_filter, inside_display = _filter(dict_data, is_all=attr == "all", level=level)
|   File "/modules/builder.py", line 1287, in _filter
|     inside_filter, inside_display = _filter(dict_data, is_all=attr == "all", level=level)
|   File "/modules/builder.py", line 1292, in _filter
|     validation = self.validate_attribute(attr, modifier, final_attr, _data, validate, pairs=True)
|   File "/modules/builder.py", line 1405, in validate_attribute
|     raise Failed(error)
| modules.util.Failed: Plex Error:
audio_language: Japanese not found |
[2021-09-07 18:59:55,800] [util.py:141] [ERROR] | Plex Error: audio_language: Japanese not found

In the Plex UI do you have more than one Japanese option? I know in my library I have three: Japanese, japanese, and [jap]japanese

I have Japanese and jap. But it looks like only 1 show is listed as jap, so I'll change that to Japanese later. Japanese is definitely correct, capitalization and spelling is the same as what I use in the Plex Web UI

Can you send me the url of your advance search from plex

&key=%2Flibrary%2Fsections%2F33%2Fall%3Ftype%3D2%26sort%3DtitleSort%26push%3D1%26push%3D1%26push%3D1%26episode.audioLanguage%3Dja%26or%3D1%26episode.audioLanguage%3Dzh%26pop%3D1%26and%3D1%26show.genre%3D4484%26pop%3D1%26or%3D1%26show.label%3D1062041%26pop%3D1&advancedFilters=1

hey try switching to ISO-639 language codes. the 2-letter ones are working very reliably for me. for your example use "zh" for chinese and "ja" for japanese. https://www.loc.gov/standards/iso639-2/php/code_list.php
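Following that suggestion (and matching the `episode.audioLanguage=ja`/`zh` parameters visible in the advanced-search URL above), the collection config with 2-letter language codes would look like this — an untested sketch, with the rest of the config unchanged from the original report:

```yaml
collections:
  "Anime":
    item_label: Anime
    genre: Anime
    plex_search:
      any:
        label: Anime
        all:
          genre: Animation
          any:
            audio_language:
              - ja   # Japanese
              - zh   # Chinese
```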
num_complex users should be allowed to write let mut x = (1.0, 2.0)_c64 ...

It sounds like you want something like C++ user-defined literals (https://en.cppreference.com/w/cpp/language/user_literal), but there's no such mechanism in Rust.

?? let x = 5.4_f64; is a legit Rust stmt and not a C++ statement .. what I'm suggesting is that crate num_complex should expand this statement to include complex number literals represented by real and imag values

Rust's f64 is a primitive language-defined type, with built-in support for parsing with the _f64 suffix. Complex is an external type that the Rust compiler knows nothing about when it is parsing your code. I mentioned C++ only for comparison, because that feature does allow C++ users to define new suffixes like _c64. Rust does not have a feature like that, so I can't define any custom suffixes for num-complex. We could possibly create shorter constructors, c64(1.0, 2.0), as discussed in #21 (https://github.com/rust-num/num-complex/issues/21), but you would have to use num_complex::c64; first to import that function into scope.

shorter constructors like c64(1.0, 2.0) would help a bunch. The use stmt is no major obstacle
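Since Rust lacks user-defined literal suffixes, the free-function constructor discussed in the thread is about as short as the call site can get. Here is a self-contained sketch of the idea; it uses a stand-in struct so it compiles without the num_complex crate, whereas the real c64 would return num_complex's Complex64:

```rust
// Stand-in for num_complex::Complex64, so this sketch is self-contained.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Complex64 {
    re: f64,
    im: f64,
}

// The shorthand constructor proposed in the thread: c64(1.0, 2.0)
// instead of spelling out the full struct at every call site.
fn c64(re: f64, im: f64) -> Complex64 {
    Complex64 { re, im }
}

fn main() {
    let x = c64(1.0, 2.0);
    println!("{} + {}i", x.re, x.im); // 1 + 2i
}
```

With `use num_complex::c64;` at the top of a file, this would read almost as tersely as the `(1.0, 2.0)_c64` literal originally requested.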
Moving Sound into Core
evands at pidgin.im
Sat Jun 16 18:15:34 EDT 2007

On Jun 15, 2007, at 3:46 PM, Eric Polino wrote:
> I have been brewing over what to do about duplicating code. As
> Ethan pointed out, another layer of abstraction is pointless and/or
> not needed. Some have suggested to move sound into the core so
> that UI's that want sound can hook into it, and those that don't
> can just ignore it. I'm not sure exactly how this would work as I
> am new to a lot of this, but it seems to make sense to me at a
> superficial level. I'm starting up this thread to get a feel for
> what people think about moving sound into the core.
> As it stands the sound that Finch has is pretty much a carbon copy
> of gtksound with a few name changes here and there.
> Evan: Sean mentioned that it would be good to ask
> you...consider this asking you.

So presently, we have:

Libpurple sound.c: Allows the UI to register a function to be called to play a sound at a given path or to play a sound for a given event. For events, looks to some "while_status" preference which appears undocumented in this part of the code. It appears that this preference determines whether sounds should play only when an account is available, whenever an account is online, or not at all.

Pidgin and Finch g?tsound.c:
1. Registers for a variety of signals.
2. Responds to those signals by doing various checks specific to that signal, checking if the conversation has focus in the case of the gtk implementation and checking against a preference as to whether it should still make noise, and then passing the appropriate event to libpurple to play. (back in libpurple)
3. libpurple (sound.c) then calls back on g?sound.c to play the sound if and only if enough time has elapsed since the last invocation and no plugin tells it to stop.

I suppose (1) could possibly be in the core...
except that then any UI is going to either need to accept the sounds provided by libpurple as a whole -or- utilize UI preferences or callbacks (which would need to be added) to disable certain ones. Additionally, that's the minority of the code in question. Most of it is in (2), from my quick look at it. Moving (1) to being a libpurple plugin would just mean even more frustration for FutureUI Implementor; FutureUI would have to worry about whether the plugin is loaded if it's wanting to do different things with sounds, or just eschew the sound API.

I don't see any advantages to moving this to the core or to a plugin. It's a UI problem.
The CPU in Operation
To fully understand x64 and x86, one must first look at the CPU, because x64 and x86 refer to the width of the data the CPU works with. Simply put, x64 refers to a 64 bit CPU while x86 refers to a 32 bit CPU. The 64 bit CPU is newer and has the capability to process larger chunks of data, up to twice the width a 32 bit CPU can handle at once. The CPU contains an arithmetic logic unit (ALU) and a set of registers. Its function is to read an instruction and either do math or logic depending on the instruction. The results are sent back to RAM (memory). Some instructions are reprocessed depending on how they combine with other instructions. But the end result is to perform the instruction and present the results so they can be handled by other software and hardware components. For a more technical discussion see What's Inside a CPU.
32 bit Instruction Sets and 64 bit Instruction Sets
Consider the following scenario. There is a grocery checkout line. The checkout is done with a laser scanner and the results are posted on an electronic board that shows the name, quantity, and cost of the item. But more important is where the items are stored before the checkout. They are kept in a grocery shopping cart. Let's say that all of the carts are the same size. But not all of the carts are filled to capacity. Let's say that the cart can store 64 items. But some carts only have ten items, some have two, and some have sixty. Each cart has to be handled the same way. Each item has to be processed the same way. The containers in the cart are part of the process, but handling the cart itself is part of the process, such as identifying the cart, moving it forward, and also replacing the items back into the cart or storing them in a different location, like a grocery bag. So each cart is handled.
It is more efficient to have each cart filled to capacity or near capacity, instead of having the items fill only part of the cart, so multiple carts would not have to be used to do the function of one. This scenario, this analogy to the CPU, explains how efficiencies can be achieved if the right cart is available, a 32 object (bit) cart or a 64 object (bit) cart.
The x64 Operating System and the x86 Operating System
Well, let's continue our little metaphor. We know that the more efficient operation would be to load the 64 object cart with 64 objects, not less than that, and certainly not less than 32 objects. That would be a waste of space, as well as a waste of the operation of the checkout line. Now let's also add the requirement that if the objects in the cart reach 64, the cart is scanned all at once, so that the objects are collectively treated as one, and processed as one. On the other hand, if the objects in the cart are less than 64, then each object must be scanned separately. This will be time consuming. Now it is the operating system that decides how to fill the cart. Obviously, a 64 bit operating system will work well with a 64 bit cart (CPU). A 32 bit operating system will handle up to but not more than 32 objects in the cart, even if the cart can handle the 64 objects. So this is less efficient, or only efficient up to 32 objects.
How Objects are Stored in x64 and x86 Systems
One last bit of explanation is called for. We know that the cart can store either 64 objects or 32 objects. We also know that the operating system will handle only up to 64 objects or up to 32 objects. One more requirement is necessary to make the cart work efficiently. That requirement is size. This refers to the size of the objects in the cart. Let's say that the objects in the cart have to be a certain size.
For a 64 object cart they can only be a factor of 64, say 32, 16, 8, 4, or 2; the numbers 12 or 18 or 22 would not work because they do not divide evenly into 64. For a 32 object cart the size of the objects can only be 32, 16, 8, 4, or 2; likewise 6, 20, or 28 would not work either. To properly fill the cart, to maximize the cart, the objects and the number of objects that will be processed must be some factor of 64 or 32. Then the operating system will be able to place the objects in the cart and the processing can take place.
Carts = CPU, Objects = Applications, Operating Systems Manage the System
Now we are in a position to explain why x64 and x86 are important. The x64 operating system will work best with applications that will fit efficiently with the 64 bit CPU. Likewise the x86 operating system will work efficiently with a 32 bit CPU. This means that the objects have to be designed to work with one type of CPU or the other. An application will work well with a 32 bit CPU if the manager is an x86 operating system. An application that is written to work with a 64 bit CPU will work with an x64 operating system. The most efficient operation will be a 64 bit CPU with an x64 OS and applications written for x64. Also a 32 bit CPU will work well with an application that is designed to work with a 32 bit x86.
The latest versions of Windows 7 come in a variety of flavors that differ in the features available; a simpler edition like Home Premium will not have as many built-in features as the Ultimate or Enterprise editions. However, the CPU determines whether a system will be x64 or x86. The older systems will have a 32 bit CPU, whereas the newer ones will more likely have a 64 bit CPU. This will affect how processing occurs; typically a 64 bit CPU will be able to process 64 bit instructions or combine instructions to be handled in one processing operation. 32 bit CPUs will normally take 2 processing cycles to execute the same instruction.
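Setting the shopping-cart analogy aside for a moment, the 32 bit vs 64 bit distinction is easy to observe directly: a 64 bit process uses 8-byte pointers and a 32 bit process uses 4-byte ones. A small Python check (illustrative only, not tied to the article's examples):

```python
import platform
import struct

# The size of a C pointer ("P") in the running interpreter reveals whether
# this process is 32 bit (4-byte pointers) or 64 bit (8-byte pointers).
bits = struct.calcsize("P") * 8
print("This Python process is", bits, "bit")
print("Machine architecture:", platform.machine())  # e.g. 'x86_64' or 'i686'
```

Note the two can disagree: a 32 bit operating system or interpreter reports 32 here even when the underlying CPU is 64 bit, which is exactly the "half-filled cart" situation the article describes.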
This is the first of two articles in a series that deal with the 32 bit / 64 bit CPU and Operating System. The other article in the series is: Comparing the Difference Between 32-Bit and 64-Bit Windows 7
Other technical articles on the CPU can be found here: Can We Achieve 128-bit OS Operability and What Will it Achieve?
This post is part of the series: 64 Bit Computing vs 32 Bit Computing
As CPUs get more sophisticated, so do the operating systems that support them. This is happening now with the 64 bit computer and the older 32 bit computer. Understanding how the size of the CPU affects the operating system is the goal of this series of articles.
Fixing attention masks in HF implementation

Hello,

When trying to train the HF version of the model to do image captioning, I realized that the MLE loss was (too) quickly decreasing while the model was not able to generate proper captions, and so achieved poor BLEU scores. I dug into it and found that it comes from the attention masks. Indeed, when expanding the mask, tokens which correspond to a 1 in the input attention mask are replaced with -inf, when it should be those corresponding to 0 (1 means that we are attending to this token for HF tokenizers). I fixed the line and also inverted the computed encoder masks (because it outputs 1 for padding tokens, the opposite of HF tokenizers). Now it is learning properly, hope it helps.

PS: I don't know why/how, but I had a different version of modeling_ofa.py when I cloned the repo, and in this one, there was another mistake. In my version, I had this function:

def _make_causal_mask(input_ids_shape: torch.Size, dtype: torch.dtype, past_key_values_length: int = 0):
    """
    Make causal mask used for bi-directional self-attention.
    """
    bsz, tgt_len = input_ids_shape
    mask = torch.full((tgt_len, tgt_len), 1)
    mask_cond = torch.arange(mask.size(-1))
    mask.masked_fill_(mask_cond < (mask_cond + 1).view(mask.size(-1), 1), 0)
    mask = mask.to(dtype)
    if past_key_values_length > 0:
        mask = torch.cat([torch.zeros(tgt_len, past_key_values_length, dtype=dtype), mask], dim=-1)
    mask = mask[None, None, :, :].expand(bsz, 1, tgt_len, tgt_len + past_key_values_length)
    # return mask[None, None, :, :].expand(bsz, 1, tgt_len, tgt_len + past_key_values_length)
    return mask

Whereas the second line of the body should be mask = torch.full((tgt_len, tgt_len), torch.finfo(dtype).min) (because masked tokens should be replaced with -inf so the softmax in the attention layer outputs 0 for them).
Yet, I cannot find why I had this version, which is different from the one on the repo (even the documentation mentions "bi-directional self-attention" whereas on the repository, it has always been "uni-directional" attention). This was why the MLE loss was decreasing so fast: the causal mask was wrong and so future tokens leaked.

Thanks for your help!

Will have a check about the details in these days :) Seemingly these are our bugs and your changes should help.

Hey. Our logic is actually: 1. masks for tokens that should be attended are 0, and those for tokens that should not be attended are -inf. If you check the implementation of the attention module, we actually add the masks to the attention weights, so that those that should not be attended are assigned a very low score, while the others stay unchanged.

Hello, Yes, values added in the attention module should be 0 for attended tokens and -inf for the others, so the softmax actually masks unattended tokens. The problem is that, when calculating the attention mask composed of 0 and 1 (just indicating which tokens are padding tokens after the tokenization), HF logic is to have 0 for padding/masked tokens and 1 for tokens we are attending to. Then, this mask is translated by putting -inf for 0 values and 0 for 1 values. So we agree on the implementation of the attention module, which needs -inf for masked tokens and 0 for non-masked tokens, but the original "attention_mask" produced by the tokenizer or so should be 0 for masked tokens and 1 for non-masked tokens. This is how it is defined in every HF tokenizer and module. See: https://discuss.huggingface.co/t/why-does-bart-decoders-attention-mask-mark-relevant-indices-with-0-instead-of-1/6477/2

Yes, this is mainly the problem I encountered. I understand that it might seem like not a big deal (since you do not rely on this for your training), but I still think it might be a good thing to follow "HF logic" for the HF implementation.
Especially since the OFATokenizer that is implemented returns this kind of mask. Besides this particular training, every script built for HF follows this logic, and I'm pretty sure it will be harmful in the future and even harder to detect that this is the cause. But yeah, it makes sense that you did not face this issue before.

Alright, got it! Codes are already merged. Actually I have no experience in training OFA with transformers. @faychu could share some experience, and in the near future, we might update that part of the code.

Very cool, thanks! I am actively working on this rn, so if I can be of any help, do not hesitate to reach out!
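To make the convention under discussion concrete, here is a framework-free sketch (plain Python, not the actual transformers/OFA code) of turning an HF-style attention_mask into the additive mask the attention layer expects:

```python
import math

# HF tokenizer convention: 1 = attend to this token, 0 = padding.
# The model converts this into an *additive* mask: 0.0 where we attend,
# -inf where we do not, so that softmax(scores + mask) assigns zero
# weight to padding tokens.
NEG_INF = -math.inf

def expand_attention_mask(attention_mask):
    return [[0.0 if tok == 1 else NEG_INF for tok in row]
            for row in attention_mask]

mask = [[1, 1, 0]]                      # third token is padding
print(expand_attention_mask(mask))      # [[0.0, 0.0, -inf]]
```

The bug described in the issue is exactly the inversion of this mapping: replacing the 1-positions (attended tokens) with -inf instead of the 0-positions.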
As of Tuesday July 31 we've passed our feature definition milestone. Top level information on the set of features captured is viewable at: For more human readable information, watch for a blog post on http://kubernetes.io/blog in the coming weeks with more discussion of pending features. Again, this is feature definition only; there are many weeks of implementation still ahead of code freeze. If your SIG's feature did not make the deadline yesterday, there is an exception process (https://git.k8s.io/features/EXCEPTIONS.md). We did not hit yesterday's target for cutting an alpha release for 1.12, but we had an exciting day today working through updating the documentation for, and exercising, the build and release mechanism. Googlers have lately been working to shift the build and release mechanics to ones a non-Google employee can finally run. To my knowledge, prior to 1.11.1 and 1.12.0-alpha.1 all Kubernetes releases had been cut by a Googler, so this is a major and significant shift toward the community. The 1.11.1 release had a hiccup, and it appears we've proved that fixed with today's release of 1.12.0-alpha.1. We did have a few other hiccups on 1.12.0-alpha.1, but all things considered those look minor and fixable, and artifacts are now live: As you may have seen, the alpha release notification email got slightly wedged in the ether, and then our workaround had an issue too. Better the artifacts ship first and the announcement late than pre-announcing incomplete artifacts! And if a missing Content-Type: text/html is the worst of our issues, we're in great shape considering all that has changed! Big thanks to Doug MacEachern for sticking his neck out as 1.12 release branch manager during this transition, and to the many Googlers (Caleb Miles, Ben Elder, and many more behind the scenes) for doing the intrepid pathfinding and debugging to improve this aspect of the release process!!!
Our next major milestone will be the shift into code freeze as September arrives. While this is weeks away, this time always goes by FAST! We request you continue to give consideration to documentation and test cases for your features as you’re developing, as well as keep an eye on CI results related to your SIG and be responsive to requests for issue and test failure triage by the release team. It is imperative that we continually improve our CI signal, maintain passing test status, and ultimately achieve a quality release! Begin code slush: Aug. 28, 2018 Begin code freeze: Sept. 4, 2018 End code freeze: Sept. 19, 2018 Release date: Sept. 25, 2018 Detailed schedule available at http://bit.ly/k8s112-release-info Tim Pepper, 1.12 release lead
Top 23 Filesystem Open-Source Projects

A simple, fast and user-friendly alternative to 'find'

A cd command that learns - easily navigate directories from the command line
Project mention: Why do so many tutorials use the command line for file navigation? | reddit.com/r/learnprogramming | 2021-05-10
Autojump keeps a database of folders you frequently use so you can jump directly to one (j music) and it'll try to guess where you want to go. https://github.com/wting/autojump

Abstraction for local and remote filesystems
Project mention: PHP library that wraps the FTP extension functions in an OOP way and more compatible with old FTP servers. | reddit.com/r/PHP | 2021-06-05
Hey, thanks for your comment. Actually, I didn't use Flysystem before, but I've looked at their API methods and I see that they provide simple API methods that can be used with many protocols like SFTP and AWS S3, using adapters for each, so they depend on a filesystem manipulation abstraction (FilesystemAdapter). I think they can't extend it to protocol-specific implementations, and as a result they have only a few methods that may fit the user's needs.

n³ The unorthodox terminal file manager
Project mention: Looking for a file manager with a preview pane similar to Windows; helping somebody switch. | reddit.com/r/linux4noobs | 2021-06-14
If you are the adventurous type, try nnn.

Minimal and efficient cross-platform file watching library
Project mention: Have you ever thought how 'nodemon' works internally? Let's build our own 'nodemon' in under 10 minutes! | dev.to | 2021-06-02
For watching files for new changes, we can make use of the Node.js built-in module, fs.
It exposes a function called fs.watchFile, but there have been a lot of issues reported by the community saying it's not reliable: it sometimes fires multiple events for a single file change, which results in high CPU utilization. To overcome this problem we can use the chokidar package.

Node.js: extra methods for the fs object like copy(), remove(), mkdirs()
Project mention: Batch with Node.js | dev.to | 2021-04-27
For this purpose we'll use fs-extra, since copy/paste seems not to be supported by the fs API.

FUSE-based file system backed by Amazon S3
Project mention: Moving my home media library from iTunes to Jellyfin and Infuse | news.ycombinator.com | 2021-06-10
> Are there any approaches to throw your library behind an authed CDN or AWS S3 with a frontend iOS/Android/desktop app to get rid of those fancy subscription models?
You can use an S3-compatible storage provider and either mount it via NFS or s3fs, and point Jellyfin to it.

A `rm -rf` util for nodejs
Project mention: Why I Prefer Makefiles Over package.json Scripts for Node.js Projects | reddit.com/r/programming | 2021-04-30

FUSE filesystem over Google Drive
Project mention: You know what I hate? We have the ability to run linux on our Chromebooks but STILL no official support for Google Drive... | reddit.com/r/chromeos | 2021-04-24
I use ocamlfuse and have had no issues whatsoever. I sync my google drive between my ubuntu rig, a few win10 machines and an Android phone. Not sure if it works on linux on a chromebook though.
Native filesystem access for react-native
Project mention: metro-config error while using UI Kitten & react-native-fs | reddit.com/r/reactnative | 2021-04-09
While installing the react-native-fs package, I was only able to do so by adding --legacy-peer-deps during the installation, as suggested in this issue on their Github page, since it has a dependency on react-native ^0.59.5.

User mode file system library for windows with FUSE Wrapper
Project mention: Fake physical local drive for Battle.Net and others? | reddit.com/r/VFIO | 2021-04-21
Dokany will allow you to make a virtual local drive that you can map to anything. https://github.com/dokan-dev/dokany/wiki/Use-Mirror-example

a high-performance, POSIX-ish Amazon S3 file system written in Go

JuiceFS is a distributed POSIX file system built on top of Redis and S3.
Project mention: "JuiceFS is an open-source POSIX file system built on top of Redis and object storage (e.g. Amazon S3), designed and optimized for cloud native environment." | reddit.com/r/programming | 2021-04-20

Windows File System Proxy - FUSE for Windows
Project mention: WinFsp 2021 – FUSE for Windows | reddit.com/r/CKsTechNews | 2021-06-08

Gluster Filesystem: Build your distributed storage in minutes
Project mention: HPC design choices | reddit.com/r/HPC | 2021-04-20
Do you mean https://www.gluster.org/ ?

Find files with SQL-like queries
Project mention: Awesome Rewrite It In Rust - A curated list of replacements for existing software written in Rust | reddit.com/r/rust | 2021-05-27
I really like fselect, which I use more than fd.

File Attachment toolkit for Ruby applications
Project mention: Image Uploading with Shrine | dev.to | 2021-04-23
Once I knew I wanted to upload images, I started to look around for different ways to do so. I came across a gem named shrine, which can be found at https://shrinerb.com/. This is where the hard part came in. Aside from installing the gem like normal, there was a second step that needed to be done to allow images to be rendered.
I needed to install ImageMagick (https://imagemagick.org/index.php) onto my system. After this I was able to add images.

Copy files and directories with webpack
Project mention: My first public React 17 Boilerplate (with Webpack 5, Tailwind 2) | dev.to | 2021-01-02
copy-webpack-plugin - Copy files to build directory

PHP library that provides a filesystem abstraction layer − will be a feast for your files!

A little fail-safe filesystem designed for microcontrollers
Project mention: Little fs, a file system for embedded applications. | reddit.com/r/AskComputerScience | 2021-05-07

Recursively mkdir, like `mkdir -p`, but in node.js

a featureful union filesystem
Project mention: What should I use to join several drives together into 1 volume for plotting on Linux? | reddit.com/r/chia | 2021-06-08
What should I use to join these into 1 volume for plotting? I am running ubuntu. Limited searching returns... https://github.com/trapexit/mergerfs

Encrypted overlay filesystem written in Go
Project mention: Encryption with KDE Vaults | reddit.com/r/linuxquestions | 2021-05-31
EncFS, CryFS and GocryptFS

What are some of the best open-source Filesystem projects? This list will help you.
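The reliability complaint about fs.watchFile earlier in this list (multiple events fired for a single change) is typically handled by debouncing, which is part of what chokidar layers on top of the raw OS notifications. A minimal, pure-Python sketch of that idea; the event burst below is made up for illustration:

```python
def debounce_events(events, window=0.1):
    """Collapse bursts of (timestamp, path) change events into one event
    per path per `window` seconds. An event is suppressed if another
    event for the same path arrived less than `window` seconds before it."""
    last_seen = {}
    out = []
    for ts, path in sorted(events):
        if ts - last_seen.get(path, -window) >= window:
            out.append((ts, path))
        last_seen[path] = ts  # even suppressed events extend the quiet window
    return out

# A single save that the OS reported as three rapid-fire events,
# followed by a genuinely separate change half a second later:
burst = [(0.00, "app.js"), (0.01, "app.js"), (0.02, "app.js"), (0.50, "app.js")]
collapsed = debounce_events(burst)
# collapsed == [(0.00, "app.js"), (0.50, "app.js")]
```

Real watchers apply the same idea with timers rather than a post-hoc pass, but the de-duplication logic is the same.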
NIFI-3950 Refactor AWS bundle This PR provides a separate NAR for the AWSCredentialsProviderService API, and a new nifi-aws-abstract-processors jar so that custom processors can take advantage of the abstract base classes without importing duplicates of the existing implementations. The JIRA ticket mentions the need for a transition plan; I'm not sure what is required there, but happy to help provide whatever is needed. The failed CI builds seem to be failing for other PRs and do not appear to be caused by this change. Reviewing... I agree the CI failure may not be related to this change. mvn clean install -Pcontrib-check worked OK for me. @christophercurrie - With respect to a transition plan, I'm not sure exactly what we need. I'll have to get back to you on that. In broad terms, users who have built custom processors and custom controller services against the existing API should have a smooth upgrade experience to the new one. I'll try to work out a more concrete definition for 'smooth'. How smooth their experience will be will probably depend upon their internal project practices. Since the nifi-aws-processors package would have been a required dependency for such custom processors, the main change would be to require adding nifi-aws-service-api-nar somewhere in their nar parent chain; this change is, AFAICT, unavoidable given the nature of controller-processor decoupling. Not a problem on the reverts. The original instructions asked for a squashed commit. Should I create a new smaller PR, or add the revert commit to this one? Please add the commits to this one. It usually helps reviewing to see the commits separately, and it's easy enough to squash at the end. Regarding the migration strategy... Let's make sure we document on the migration wiki page [1] that anyone who built custom components using the AWS controller service should rebuild them with the new dependency as part of their upgrade process.
Given our component versioning, it would actually be possible to run the old stuff and the new stuff at the same time, but it would require a complicated setup. I think you'd have to leave the 1.3.0 AWS NAR, 1.3.0 standard services API NAR, and possibly 1.3.0 standard NAR, plus the user's custom NAR that depended on the 1.3.0 AWS NAR. Probably not the recommended approach, but a fallback option. [1] https://cwiki.apache.org/confluence/display/NIFI/Migration+Guidance Thanks, @bbende, I'll take up the migration guidance for the wiki. As part of reviewing this PR, I am building some sample processor and service projects to reproduce the problems and check the fix. I plan to work through the migration steps myself and can document the process. It's very likely that I'll have more questions for you as I get into the details of it. @jvwing I have pushed a commit that removes all unnecessary changes from the PR. Sorry for the delay. Thanks for the update, @christophercurrie. This PR is looking pretty good: Passes the full suite of unit tests with contrib-check. AWS processors and controller service still work OK in my testing. Provides a good migration experience -- just rebuild against NiFi 1.4.0 nars -- better than I feared. More below. One thing we still need is a set of LICENSE/NOTICE files for nifi-aws-service-api-nar, similar to what is now in the nifi-aws-nar. I believe the NOTICE file can be pared down to only reference the aws-sdk. Migration Experience I created a simple AWS bundle targeting NiFi 1.3.0, and went through the exercise of migrating it to 1.4.0 as of this PR. It seems "smooth" enough to me. Advancing the NiFi dependency version to 1.4.0 and rebuilding is enough, maintaining the NAR dependency on nifi-aws-nar. For bundles that only implement controller service interfaces, they may optionally change their NAR dependency to nifi-aws-service-api-nar.
Since nifi-aws-nar already has this NAR dependency, I believe this is a recommended, but not strictly necessary, step. More migration notes: My 1.3.0 AWS processor worked OK without modification when used with this PR. My 1.3.0 AWS controller service did not work with just the NAR file, with the expected incompatibility error: AWS Credentials Provider service' validated against '8902b809-015e-1000-46c3-e321c1d9a1b4' is invalid because SampleAWSCreds - 1.3.0 from sample - sample-aws-services-nar is not compatible with AWSCredentialsProviderService - 1.4.0-SNAPSHOT from org.apache.nifi - nifi-aws-nar However, if nifi-aws-nar-1.3.0.nar was included side-by-side with nifi-aws-nar-1.4.0.nar, the 1.3.0 processor worked with the 1.3.0 controller service, unchanged. @bbende, I'm not sure the last option is what you described above. I was expecting to add more NARs. But it seemed plausible that the monolithic nature of nifi-aws-nar-1.3.0.nar might make it easier to dump in side-by-side? @jvwing sorry I never responded to this... The results you got seem correct. The part I wasn't sure about was that nifi-aws-nar-1.3.0.nar has a dependency on nifi-standard-services-api-nar-1.3.0.nar, so I thought you might have to leave the whole chain of dependencies around. Looking at the code during startup, I think when loading nifi-aws-nar-1.3.0.nar we would detect that there is no 1.3.0 version of standard-services-api-nar present, but there is a 1.4.0 version, so we just use that, which is probably why it worked. @christophercurrie I think we're pretty close on this PR, any interest in continuing? Yes, though I'm not sure what action items are left for me at this point. @christophercurrie The outstanding item for now is a set of LICENSE/NOTICE files for nifi-aws-service-api-nar, similar to what is now in the nifi-aws-nar. I believe the NOTICE file can be pared down to only reference the aws-sdk. Ok, I've added those files.
Let me know if they look OK and if there's anything else that needs doing. @christophercurrie, thanks again for the work on this infrastructure project. I merged these commits to master, except for the last commit "Limit NOTICE to AWS SDK" for nifi-aws-service-api-nar. I lied about not needing all those dependency notices. After checking with Maven, I understand HttpComponents, Joda Time, etc. are transitive dependencies of the AWS SDK, and we need them even for the interfaces. I apologize for the confusion about that, and thank you for adding it as a separate commit. I'll follow up with some text for the NiFi 1.5.0 migration notes.
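The migration step discussed in the thread (pointing a custom controller-service bundle at the new service API NAR) amounts to a one-line NAR dependency change. A hypothetical sketch of what that could look like in a custom bundle's NAR pom; the artifactId and version come from the thread, while the surrounding layout and `<type>nar</type>` usage follow standard NiFi Maven conventions and should be adjusted to your build:

```
<!-- Hypothetical example: depend on the extracted service API NAR
     instead of the monolithic nifi-aws-nar. -->
<dependency>
    <groupId>org.apache.nifi</groupId>
    <artifactId>nifi-aws-service-api-nar</artifactId>
    <version>1.4.0</version>
    <type>nar</type>
</dependency>
```

Bundles that also contain processors would keep depending on nifi-aws-nar, which itself carries this dependency.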
Dot Net Core Interview Questions and Answers 1. What is .NET Core, and what are its advantages? .NET Core is a cross-platform, open-source, and modular framework used to build modern web applications and services. Its advantages include improved performance, higher security, and increased scalability. 2. What is the difference between .NET Core and .NET Framework? .NET Core is a cross-platform framework used to build web applications, whereas .NET Framework is a Windows-only framework that's used to build desktop and web applications. .NET Core is cross-platform, whereas .NET Framework only runs on Windows. .NET Core is also open-source, whereas .NET Framework is proprietary. .NET Core is designed to be modular, so you can use only the parts you need, whereas .NET Framework is a monolithic framework. Finally, .NET Core is designed to be used with containerization and microservices, whereas .NET Framework is not. 3. What are the main components of .NET Core? The main components of .NET Core include the Common Language Runtime (CLR), the Base Class Library (BCL), and the .NET Core SDK. 4. What is the CLR, and what is its role in .NET Core? The CLR is the execution engine of .NET Core. It provides the runtime environment where your code runs, manages memory allocation and garbage collection, and handles exceptions. 5. What is the BCL, and what does it contain? The BCL is a collection of reusable classes, interfaces, and types that are used to build .NET Core applications. It contains types for collections, IO, threading, and more. 6. What is the .NET Core SDK, and what does it include? The .NET Core SDK is a set of tools used to build and deploy .NET Core applications. It includes the CLI, the runtime, and libraries. 7. What is ASP.NET Core, and how does it differ from ASP.NET? ASP.NET Core is a cross-platform framework used to build web applications and services. 
It's a redesigned version of ASP.NET that's optimized for performance and scalability and supports cross-platform development. 8. What is middleware in ASP.NET Core? Middleware is a component that handles requests and responses in an ASP.NET Core application. They can perform tasks such as authentication, routing, and caching. 9. What is dependency injection, and how does it work in .NET Core? Dependency injection is a design pattern used to manage dependencies between objects in an application. In .NET Core, it's implemented using the built-in DI container, which manages the creation and lifetime of objects and their dependencies. 10. What is Entity Framework Core, and how does it differ from Entity Framework? Entity Framework Core is a lightweight, cross-platform version of Entity Framework that's optimized for performance and supports cross-platform development. It provides an object-relational mapping (ORM) framework for accessing data in a database. 11. What are the benefits of using Entity Framework Core? The benefits of using Entity Framework Core include reduced boilerplate code, improved developer productivity, improved performance, and better security. 12. What is a migration in Entity Framework Core? Migration is a way to update the schema of a database to match the changes made to the data model. It's implemented using the EF Core command-line interface (CLI) and generates SQL scripts to apply the changes. 13. What is the difference between LINQ and SQL? LINQ is a language-integrated query language used to query data in .NET applications. SQL is a query language used to query data in relational databases. LINQ provides a more natural and expressive way to query data than SQL. 14. What is a lambda expression, and how is it used in .NET Core? A lambda expression is a concise way to define an anonymous function in .NET Core. It's used extensively in LINQ queries and can also be used as a parameter to a method. 15. 
What is the difference between an abstract class and an interface in .NET Core? An abstract class is a class that cannot be instantiated and can contain both abstract and non-abstract methods. An interface is a contract that defines the methods and properties that a class must implement. A class can inherit from only one abstract class but can implement multiple interfaces. 16. What is the purpose of the StringBuilder class in C#? The StringBuilder class is used to efficiently manipulate strings. Unlike the string class, which is immutable, StringBuilder allows you to modify a string in place without creating a new object each time. 17. What is the difference between synchronous and asynchronous programming in .NET Core? Synchronous programming blocks the current thread until a task is completed, whereas asynchronous programming allows the current thread to continue executing while the task completes in the background. Asynchronous programming can improve performance by allowing the application to make more efficient use of system resources.
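The synchronous-versus-asynchronous point in the last answer is language-agnostic; here is a minimal runnable sketch of the same idea, written in Python rather than C# only so the snippet runs without a .NET toolchain (the names and delays are illustrative):

```python
import asyncio
import time

async def fetch(name, delay):
    # Simulates an I/O-bound call such as a database or HTTP request.
    await asyncio.sleep(delay)
    return name

async def main():
    # Both awaits overlap, so total wall time is roughly one delay
    # (~0.2s), not the sum of both (~0.4s) as in the synchronous case.
    start = time.perf_counter()
    results = await asyncio.gather(fetch("a", 0.2), fetch("b", 0.2))
    elapsed = time.perf_counter() - start
    print(f"{results} in {elapsed:.2f}s")
    return results

results = asyncio.run(main())
```

The C# equivalent uses `async`/`await` with `Task.WhenAll`; the resource-efficiency argument in the answer is the same in both languages.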
- me: Jeremy! How do I run a migration on guerrero? and what is the password for sms.odeo.com and staging-sms.odeo.com? - on mission rake deploy - make sure the database.yaml_staging is pointed at brannan - oh, and su - oadmin - me: i'm getting an application error there, and not sure why. - Jeremy: after scrum? - me: can i get access to the logs on guerrerro? - Jeremy: or now? - me: yeah - Jeremy: which? - me: anytime - Jeremy: ok now - Jeremy: you have sudo on guerrero now - the rails log is in /var/log/user.log - apache logs to /var/log/apache2/access.log (and error log) - me: could you reset my password? it doesn't seem to be working - Jeremy: this will be changing, as apache will be logging everything to syslog, and all logs will end up in the /var/log/remote/$HOST directory on valencia starting tomorrow or friday at the lasted - me: thanks! This was posted 3 years ago. It has 9 notes. - me: biz! i like the new look - Biz: yeah? - me: yeah, i'm changing it now. This was posted 3 years ago. It has 24 notes. - me: your whole tiwitter is green! - did you set the <style>? - Biz: yah, i put a bkg color in my update - me: nice. - i put a span tag in mine. - so it should just be that status entry that's green. - Biz: i used a font tag on my latest deal - but my twitter page is wacked causa the old one - : P - me: yeah, we'll have to minimize that. - maybe you can set another background to change it back? - Biz: is it on the front page for you now? - me: just says happy st. patricks day - Biz: and that's making the whole page green? - its not on firefox - I'll change it - me: not for me. only when i go to your page. - click on biz, that is. - Biz: oh right, yah - its the one that says" Biz: listening to the pixies live from dublin" - that's doing it - me: the background? - Biz: yah - me: want me to change it? - Biz: sure - me: this is why we need that strikethrouh. maybe it cancels out all styling as well. - Biz: that'd be good - me: oh, you put a body tag in there. 
- we should disallow that. - Biz: yah - me: how about that - Biz: good its white again This was posted 3 years ago. It has 16 notes. - Biz: my update button won't work anymore - me: how about now? - Biz: still won't work - is it cause i tried to embed video? - me: weird. could be. - it just does nothing? - Biz: oh man. I broke my twitter! - plus, the video won't play - me: i'll try to delete that message. - Biz: nice! - you fixed it - me: let's play with that. - maybe it was missing an ending tag - Biz: i copied and pasted from google video - me: hrm This was posted 3 years ago. It has 18 notes. - csshsh: oh! by the way.. that statuses_including_friends thing with a time range works: - users(: jack).statuses_including_friends(10.hours.ago, 7.hours.ago) - me: nice! - csshsh: we can remove this: - user can choose to receive status change notifications via Email for followed users - since we will do sms right away? - me: yeah! all the email stuff - csshsh: cool! looks like we are doing good timing wise - me: yeah! on track This was posted 3 years ago. It has 8 notes. - me: we're going to work with jeremy to deploy tomorrow. but we have a good start. people are using it. it's on my machine, but broken currently. our current progress is on the trac wiki. - tonystubblebine: great - me: still waiting on the sms integration. i'll do that next week when florian is out of the loop. - tonystubblebine: are you going to want a companion next week? - me: i don't think i'll need one, but if anyone has free cycles they want to spare, that would be great. we'll have most of the rails stuff in place though. - tonystubblebine: what are the hurdles on the sms side? - me: just getting the random temp short code from simplewire. then being able to get something listening at sms.odeo.com/post to the xhtml that comes over. we have the ruby libraries in place and ready to go for both sending and receiving. - tonystubblebine: what's the hold up on the random temp short code? 
- me: uh, seems to be the application process. noah is handling that. i'll have to check the current status with him. - tonystubblebine: ok, thanks - i have to steal florian for a bit tomorrow - me: cool This was posted 3 years ago. It has 17 notes. - me: we have following working now on my machine - Biz: sweet! - will your deal work for me now? - me: should. just don't hit the help link - worked for ev this morning - Biz: nice! - works great! - me: cool. yeah, add ev's phone number and you should see his stuff - we're working on stats now. - Biz: sweet! - me: and fixing this up: http://jacks.local.:3000/status/ - Biz: ev needs to update his status! - oh wait - i think I added evs number wrong or something - no that's correct - me: did it work? - should say ev. - maybe not us a 1 - Biz: now there is "ev" and another person named "14158459000" - me: huh. will look into that. - Biz: ev is in my random person section thoguh - me: yeah, put his number in without the 1, that should fix it.. - Biz: ah - can't delete friends - me: yeah, just remembered we need to do that. - Biz: that werked - me: i'll add it to the wiki. - the protect thing should work too. keep you out of global - Biz: is protect working different than we orignially thought? - we had two modes - gonna fiddle around - me: no. just the secret word mode is not working yet. the deal was to make it look like the sharing stuff on itunes - you can protect, and you can further protect with a password, if you enter it. - just more concise instead of 2 checkboxes - Biz: so "protect my updates" means you are not global and only folks who know your number can follow you - and the secret word is extra protection - that was the plan right? - me: yeah, that's it. - Biz: cool, the colon after "protect my udpates:" threw me - made it seem different - me: oh yeah, need to clean that up with css and the extra text eventually. 
- Biz: who else is on here - whats florians number - dude i just starred florian - me: 4152999000 - Biz: neat - me: we probably shouldn't be able to star globals yet, have to work on that. - Biz: yah - also, i didn't get florian from that number - got a number like before - should I use the 1 this time? - don't worry about it - its coolz - me: he used 12345 This was posted 3 years ago. It has 35 notes. - Jeremy: hey, which server are we going to have respond to http://sms.odeo.com/post ??? - me: uh, any that are easy to put something on. i'm not sure yet. suggestions? - Jeremy: is something rails? - me: yes. though, it could just be a python or perl script if need be. - but rails ideally. - Jeremy: ok, it points to guerrero now - me: nice! thanks - Jeremy: we'll have to setup some svn and rails / vhost stuff - me: cool. i think we'll be ready to start on that in the next day or two - Jeremy: ok This was posted 3 years ago. It has 6 notes.
Foreign key from events table 1-1 or many? I'm likely overthinking a problem here and may well get downvoted but I'm prepared to take the hit. I'm building my first schema in a data warehouse. 2 tables: events and contacts: events(id(pk), cid, other, fields, here) contacts(id (pk), cid(fk), other, fields, here) Someone visits our website and registers. A line item is generated in the events column "id" and a "cid" for contacts is generated. A new record is added to contacts. I have two questions: Can I make the primary key of contacts cid? Thus the primary key is also a foreign key? I'm using MySQL Workbench to create the schema. When I create the contacts table I am able to set the foreign key of cid and the cardinality as either 1-1 or 1-many. From the point of view of the contacts table, is the relationship 1-1 or 1-many? There will only ever be 1 cid record in contacts, but if that user does multiple things (like receive an email from us etc.) they will appear multiple times in the events table. So, logically 1-many. But when creating this in Workbench, the relation line appears as though it's a 1-many relation with the many part being at contacts, not the other way around as desired. Should it be the other way around? What is the relationship between events.cid and contacts.cid? If a user's registration results in a single contact_ record while each user visit to the web site (each Session started) results in an event_ record belonging to that user's contact_ record, then you have a One-To-Many relationship. `contact_` = parent table (the One) `event_` = child table (the Many) Notice how I boiled down that relationship into a single sentence. That should be your goal when doing analysis work to determine table structure. Relationships are almost always defined as a link from a primary key on the parent table to a foreign key on a child table. How you define the primary key is up to you. First decide whether you want a natural key or a surrogate key.
In my experience a natural key never works out, as the values always eventually change. If using a surrogate key, decide what type. The usual choices are an integer tied to an automatically incrementing sequence generator, or a UUID. If ever federating data with other databases or systems, then UUID is the way to go. If using an integer, decide on size, with 32-bit integers handling a total of 2-4 billion records. A 64-bit integer can track 18 quintillion records. The foreign key in the child table is simply a copy of its assigned parent's primary key value. So the foreign key must have the same data type as that parent primary key. If a particular parent record owns multiple records in the child table, each of those child records will carry a copy of that parent's primary key. So if the user Susan has five events, her primary key value appears once in the contact_ table and that same value appears five times in the event_ table, stored in the foreign key column. If cid uniquely identifies each contact_ record amongst all the other contact_ records, then you have a primary key. No need to define another. Hi @Basil thanks for the thorough response. I'll use contact.cid as the primary key in the contact table, which will then be a foreign key in the events table. When I posted my question I think I had things in reverse. In my mind the event happened first: on the back of the initial event a cid is generated, resulting in a new record in contact, so from that point of view I thought of the events table as the parent. Regarding account creation: Mark the particular "event_" record. Perhaps the "event_" table has a column "type_" in which you store "account creation". Or columns "summary_" and "detail_", where "summary_" contains the "account creation" phrase and "detail_" contains various other info such as date-time, their IP address, or whatever. If very important, add a Boolean column "account_creation_" on "event_", perhaps indexed.
In other words, account creation is not special; it is just one of several types of events that can happen, such as "email sent", "bill paid", "tweet posted". If you have a business rule that says creating a "contact_" row always involves creating at least one "event_" row marked as "account creation", then the crow's feet in the diagram should be changed. The diagram says: Each "contact_" row can have zero or more "event_" children. But with that business rule it should be: Each "contact_" row can have one or more "event_" children (never zero). Hi @Basil OK this is great, thanks. Yes, there will be an event_type field, and having an account creation event sounds really obvious now, so I can just filter on that for analysis. Again, this is my first schema! Regarding "crow's feet": in Workbench this appears to be shown with a bar, so I believe that does imply 1 or more. Crow's foot notation was never standardized AFAIK, so variations exist. I prefer a single bar to mean "one and only one" while others use a double bar. Presumably in a double-bar the first bar replaces the zero in zero+bar that means "zero or one", as seen in my comment's link.
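The one-to-many shape described in this thread can be sketched end to end in a few lines. Here is a minimal SQLite version; the table and column names follow the thread's contact_/event_ naming, and everything else is illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite disables FK checks by default

# Parent table (the One): the surrogate key cid is the only key needed.
conn.execute("""
    CREATE TABLE contact_ (
        cid INTEGER PRIMARY KEY
    )""")

# Child table (the Many): cid here is a copy of the parent's primary key.
conn.execute("""
    CREATE TABLE event_ (
        id    INTEGER PRIMARY KEY,
        cid   INTEGER NOT NULL REFERENCES contact_(cid),
        type_ TEXT
    )""")

conn.execute("INSERT INTO contact_ (cid) VALUES (1)")
# One contact owns several events; each event row carries the parent key,
# and "account creation" is just one event type among others.
conn.executemany(
    "INSERT INTO event_ (cid, type_) VALUES (?, ?)",
    [(1, "account creation"), (1, "email sent"), (1, "bill paid")],
)
count = conn.execute("SELECT COUNT(*) FROM event_ WHERE cid = 1").fetchone()[0]
# count == 3
```

With foreign keys enforced, inserting an event_ row whose cid has no matching contact_ row raises an IntegrityError, which is exactly the parent-before-child ordering the asker initially had reversed.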
Tailor Made - OutWit Development Services Here are a few things to know before initiating developments with OutWit or any third-party developer you may mandate through OutWit: We can develop the scrapers or automated jobs that will allow you to perform specific extractions (we are not equipped to execute extractions ourselves). When you order a scraper, a macro or a job from OutWit, the developed automators are delivered to you in an encrypted and limited form, for you to test them. After you have accepted and paid, we will send you a key to unlock the limitations. At that point, unless otherwise stated in the order agreement, you can have access to the source code of the automator that was developed for you, and copy it, alter it and reproduce it for other extractions. We will give you an estimate to the best of our knowledge when you order. As you are always free to accept or refuse the final development, it is in our interest to be as accurate as possible in this assessment. The price may however vary (in either direction) depending on unforeseen difficulties (or shortcuts) we may find. Ad hoc developments are not our primary activity. We are glad to do them if it helps you make the most of our programs and gives you working examples of how to build automators, so that you can create your own in the future. Do not hesitate, however, to use internal development resources you may have, or to have third-party developers do the work for you if you believe they will be more available, more local… or less expensive (we are located in France and the hourly price of our developers is clearly higher than in some other parts of the world). Data Source Changes The Web changes. Contents, form and technologies are constantly evolving. If you are lucky, a source of data will remain the same for months or years, but it can also be much less. Developing a workflow to extract data from an online source means making sure it works with the current state and behavior of the source.
It is impossible, however, to foresee how long it will keep working, or even whether it will still be technically possible to keep extracting the data if the source or its technology changes. We will of course do our best to help you maintain workflows we have created for you, but all additions and modifications will have to be considered as new projects and will require new estimates. If You Need A Database Data extraction is our skill and we are pretty good at it. We will give you the tools to do it, but you will need to store and manage the extracted data. If it is just a series of Excel files, no problem; but if it is thousands (or millions) of rows of data and image files, you will need local talent to organize it in a database or in a hierarchical file system. It is not possible for us to help remotely with this part. Intellectual Property on Data Sources We can only help you build extractors for Web sites or pages if you are authorized to extract and use their data. We cannot check the situation of our users with respect to the sites' terms of service. Therefore, when you ask OutWit or freelance developers to execute ad hoc developments on your behalf, you agree that your order and acceptance of the developed software mean you have verified that you are allowed to collect and use the data. In many cases this is not an issue (free, public sources of data, providers you are working with, your own company data which is easier to harvest this way, etc.); in other cases, things are not so simple, or automatic collection can be clearly forbidden without authorization. DO CHECK THE SITE'S USAGE AND COPYRIGHT TERMS before you order a scraper from OutWit.
03-08-2010, 07:25 PM (This post was last modified: 03-08-2010, 07:32 PM by Terrorkarotte.) I know it sounds stupid. It is not my server/config. I just looked into a friend's and saw this. It was obvious that servers that produced the message were manually added with the server.cfg. I removed those and the error stopped appearing till now. I posted here because I thought maybe other people added the masters in their config too and should take a look into it. I heard Valve is/was moving servers. So if it is true that they changed something and people are still referring in their config to the old master setup, that could explain the error. 03-09-2010, 02:06 AM (This post was last modified: 03-09-2010, 02:08 AM by Tommiiee.) Since I ran the HLDSUpdateTool (after deleting ClientRegistry.blob, HldsUpdateTool_**.mst and InstallRecord.blob), the problem had disappeared.. Nevermind.. I got the message again in console.. I suppose if the master server IPs have been changed, users defining masters that don't exist anymore are having the error. Looking for a game server? Visit fullfrag.com and pick one up as low as $2.50 / mo! Yep, that's what I wrote... Till now the server I was referring to did not show the error again. Apparently, doing some research, that IP is one of 2 (I believe) IPs for the master servers your server is hosted on, which means A2C_PRINT from 22.214.171.124:27011 : means your server isn't connected to the master server or servers. For me, I remain connected to one of them, while this one, 126.96.36.199:27011, can't connect, so it means no one sees your server. And the reason that the one IP is open is because that's the LAN IP, so you yourself can join but no one outside your network can :<. I'm not defining any master servers (in my server.cfg), and I'm still getting the error.. I'm getting this too: A2C_PRINT from 188.8.131.52:27011 : A2C_PRINT from 184.108.40.206:27011 : I just reinstalled my whole server today - and updated everything...
I take it we have to wait, huh... Gah, the problem reappeared just now. My friends can't join my server again. So just tell me, how do I solve this problem? Does anyone know? Sorry about that post last night, I thought I'd solved it and it reappeared today. There is only one way to fix this: wait till Valve does it... 03-18-2010, 12:40 AM (This post was last modified: 03-18-2010, 10:32 AM by dinosaw419.) u mean... the A2C_PRINT thing is the reason people can't see or join my server? that's... gonna take how long to solve?
ITM Slaves! moved to Pleroma I have a Mastodon account and an ITM account, but when I try to log in, the site says the e-mail address or password is invalid. How can I solve this? I believe the issue is with the ITM site; I just tried to log on there and got the same result. I'll reach out to the site operator and see if they are aware of any issues. The ITM Slaves! instance changed from Mastodon to Pleroma, but they were not able to migrate accounts. You should be able to create a new account with your previous credentials. However, it looks like the API integration has changed. I'm going to use this issue to track fixing it. Thank you for reporting the issue! Thank you for your response, I tried to log in again, but it still says Invalid Username/Password mark1985 <EMAIL_ADDRESS>Do I have to do something else? Howdy, admin of ITM Slaves here. You'll have to create a new account to use the site. When we switched to Pleroma from Mastodon earlier this year I was not able to migrate the existing users. Sorry for the inconvenience! Pleroma has been working really well so you won't have to do this again. Also, even once you do that, the attempt to log on to Jobs, Jobs, Jobs will give you the same error; that's what I need to fix (once I figure out how Pleroma does that). For version 2.2.2, the ITM, Slaves! button is disabled, with a note that the integration is broken. Hopefully I'll be able to figure it out shortly! Let me know if I can be of any assistance! In my initial research, it looks like Pleroma supports the same API as Mastodon (with a few noted exceptions*). I think the main issue comes down to the client ID and secret being unknown to the new server. On Mastodon, I got those by editing my profile, then clicking "Development".
There, I could register an application, declare what scopes it needs, etc., and it generated the client ID and secret there. This is what I can't find on Pleroma. If you can see something like that in the admin area, let me know, and I'll send you the details for the application (but not publicly - LOL). * None of the exceptions apply to this application. Jobs, Jobs, Jobs uses exactly 2 API calls; one verifies that the user has allowed the application to access their profile (which also verifies that they have a valid account), and the other retrieves the user's basic account details so we can display names and handles. It's an intentionally minimal implementation. Found this, relevant? (sorry for the delay) It looks like those are overall settings for how OAuth requests are handled, not for defining specific applications / consumers. @johnwtrain I wouldn't spend a whole lot more time on this.
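For reference, the Mastodon API (which Pleroma largely mirrors) covers the kind of calls described above with an OAuth bearer token; `GET /api/v1/accounts/verify_credentials` both validates the token and returns the account's display name and handle. Here is a minimal sketch of building that request; the instance URL and token are placeholders, and it is an assumption that this is the exact endpoint the app uses.

```python
from urllib.request import Request

API_BASE = "https://example.social"  # placeholder instance URL
VERIFY_PATH = "/api/v1/accounts/verify_credentials"

def build_verify_request(token: str) -> Request:
    """Build the GET request that checks the token is valid for the user
    and returns their basic account details (display name, handle)."""
    req = Request(API_BASE + VERIFY_PATH, method="GET")
    req.add_header("Authorization", f"Bearer {token}")
    return req

req = build_verify_request("PLACEHOLDER_TOKEN")
print(req.full_url)  # https://example.social/api/v1/accounts/verify_credentials
```

The client ID and secret only come into play earlier, during the OAuth authorization flow that issues the token; once a token exists, calls like this one don't need them.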
(story time ahead) When I first conceived of this site, tying login to No Agenda Social accounts seemed to be a great way to make this site "by NA for NA". With registration at NAS closed off, I ended up opening it up to ITM Slaves! and Liberty Woof - but I declined to include Podcast Index Social, as that's much wider than No Agenda. What I was trying to do was walk the line where people could drop some of the anonymity of their NAS handle (if they wanted), but know that only other members would be able to see it. That hasn't been how it's worked out, though. Uptake has been light, and I've never seen a person share either a job listing or an employment profile on NAS (though, admittedly, I don't have time to crawl the Local timeline all the time). I discussed this a bit with Adam, and he mentioned that it might work better if they pitched it during the show more than just in passing, as has happened a couple of times. I think they did it once, but that didn't really move the needle either. It's my understanding that the art generator, the meet-up site, etc. are all sites where anyone can sign up and participate; and, to my knowledge, they don't have issues with other people showing up and causing problems. I've been leaning toward moving that way with this site, and this difficulty may just be the final indicator that I should go that way. I'd want to make sure people were able to preserve their existing profiles once they create an account, but I may just do a manual mapping for the folks there. There are 18 users of ITM Slaves! registered, 3 of whom have employment profiles, and no job listings. At any rate, if you've found the right combination, I'm happy to fix what's there; otherwise, I may defer to a v3 that uses its own authentication. I've used Auth0 on other projects, and it's nice because you can accept other logins through them, as well as do your own username/password if you want. Also, let me know what you think about that plan.
I have another project I'm working to get to release-candidate status, so I won't be jumping on this right away - I still have lots of time to ponder. Version 3 will not be dependent on external Fediverse servers, and I'm beginning work on it. I'm closing this particular issue out, but stay tuned on NAS and ITM, Slaves! for v3 release information. Daniel, Everyone on ITM Slaves has moved over to NAS now that registration is open for a bunch of new people. We'll be shutting down at the end of the month, so you don't have to work on supporting Soapbox at this time. Cheers! John.
For some poses, the rotation vector between the teach pendant and the serial output of the robot is 180° out of phase. Is there any way to work around this? I believe that the two poses are at exactly the same point, but there may be multiple solutions for the rotations that fulfill this position. The rotation axis is almost the same: one is (0, 0.713680855, -0.700471011), the other is (0, -0.713775653, 0.700374412), while the rotation angles differ a bit, i.e. 3.149166725 vs. 3.13403797 rad (>0.5 degrees). @chlai I am seeing the same thing for this point. I am using the RTDE to get back the actual joint positions as well as the TCP actual position. So to confirm that there are two solutions, I went in and switched the polarity of RX, RY and RZ manually, and as soon as I press OK the values change back to what they were before I changed them. So I ran another test and wrote a program that just assigns the current tool pose to a variable, and the result of that was (screenshot omitted) a pose which matches what the RTDE is saying, but if I check the move screen I am still seeing the original inverted values. I will be the first to admit it has been over 20 years since I took kinematics, and I am definitely no roboticist when it comes to actually calculating joint positions and kinematics, but it's unclear how the data is wrong in the one case and then matches in the second case. Could this be a bug in the move screen? @jbm @rwi any ideas about this? It's actually causing us some issues in a program we are trying to run, calculating the distance between the target point and the actual pose. If I punch in the point as listed on the screen of the controller, then I never resolve to get to that point, as the TCP is returning inverted rx, ry and rz, and so the distance I am calculating says I am not at the point. If I then rotate the wrist ever so slightly and get back a new vector, the issue goes away and the screen and RTDE are reporting the same vector.
I had about 6 out of 16 points last night that I had to do this to so that the program would continue running when it hit those points. @jbm is right, this rotation vector does not give a unique solution. The differences most likely come from numerical errors in calculating the forward kinematics (in both the controller and PolyScope separately) or tiny changes in sensor values due to noise. @mbush, have you seen the pose_* script functions? Maybe they can help you do the transformations you require? @rwi it's a little more complex than that, unfortunately. We are running the code that operates the robot completely off the robot. The points as poses are stored in the cloud and fed to the program as needed. The program is then using port 30002 and sending the program to the port, such as movel(pose, a=accel, v=vel, r=rad). All of the parameters for the move (a, v, r) are stored in the cloud as well. We are then determining that we are at the desired point by calculating the distance between our current pose as received from RTDE and the target pose as stored in the cloud. One of the issues that I have when we see this inversion of the rotation vector is that it's not consistent. The RTDE will actually feed us both versions, sometimes from one cycle to the next as we watch what is coming in. I tried using the robot_status_bits output from RTDE to know when a move is complete, and 98% (a guess) of the time that works perfectly, but occasionally I was getting a weird glitch with that too, and either a move would terminate early or it would never terminate. I may have to resort to storing all the poses in the cloud as joint values instead of poses, but that throws in a little complexity of its own as well; I might be able to solve that complexity more easily on the server side. Yes, that is because the axis-angle representation is not unique. The two vectors represent almost the same rotation, but on either side of the singularity.
The difference between two axis-angle rotation vectors can, among other ways, be calculated like this: 1) transform both vectors into rotation matrices; 2) compute R_from_to = transpose(R_from) * R_to; 3) transform the resulting rotation matrix back to the axis-angle representation. Alternatively, you can maybe use a quaternion representation? @mbush Did you find the solution for that, or did you store the poses as joint values? I'm trying to calculate a target pose based on a reference pose and a vector, all from the server side, so using the joint values without access to the forward kinematics is not useful. In PolyScope the rotation vector is scaled so that it looks more stable and doesn't flicker so much. The data on port 30003 is unscaled, but when scaled it corresponds to the GUI values shown in PolyScope. Below is a Python program that exemplifies the scaling. When run it produces the following results: PolyScope SCALED value: [0.0012193680503253582, -3.166495598686568, -0.03951768623096099] PolyScope SCALED value: [2.4759166894662425, -5.364486160510192, 1.6506111263108283] The first vector below, "v_init", is the default startup position when the URControl is simulated. Note, the input and scaled vector are pretty similar, but not the same. The second vector, "v", is based on example data. Notice that to calculate the scaled rotation vector, the position vector (x, y, z) is not needed - therefore it is left out of the calculation completely.
from math import *

v_init = [-0.0012, 3.1162, 0.03889]
v = [-0.06, 0.13, -0.04]

def length(v):
    return sqrt(pow(v[0], 2) + pow(v[1], 2) + pow(v[2], 2))

def norm(v):
    l = length(v)
    return [v[0] / l, v[1] / l, v[2] / l]

def _polyscope(rx, ry, rz):
    if ((abs(rx) >= 0.001 and rx < 0.0)
            or (abs(rx) < 0.001 and abs(ry) >= 0.001 and ry < 0.0)
            or (abs(rx) < 0.001 and abs(ry) < 0.001 and rz < 0.0)):
        scale = 1 - 2 * pi / length([rx, ry, rz])
        ret = [scale * rx, scale * ry, scale * rz]
        print("PolyScope SCALED value: ", ret)
    else:
        ret = [rx, ry, rz]
        print("PolyScope value: ", ret)
    return ret

def polyscope(v):
    return _polyscope(v[0], v[1], v[2])

polyscope(v_init)
polyscope(v)
Libraries & Classes HS NMEA GPS C Source Library 1.0 HS GPS is a software library (with full C source code) which provides access to a NMEA-183 compliant GPS receiver via a serial communications port, decoding NMEA sentences: $GPGGA, $GPGSA, $GPGSV, $GPGLL, $GPRMC and $GPVTG. Decoded parameters include: time, date, position, altitude, speed, course and heading, according to standard NMEA-183 (National Marine Electronics Association, Interface Standard 0183). HS X.25 C Source Library HS X.25 is a software library in C (supplied with full source code) which implements ITU-T recommendation X.25 - Interface between Data Terminal Equipment (DTE) and Data Circuit-terminating Equipment (DCE) for terminals operating in the packet mode and connected to public data networks by dedicated circuit. Includes HsDL (Data Link) and HsSock (Winsock interface) for X.25 over IP applications (XOT). HS XMODEM C Source Library HS XMODEM is a software library in C (supplied with full source code) that provides a programmer with off-the-shelf support for XMODEM protocol data transfer capability. Support for both sender and receiver is provided. Other features include a 1024-byte block size vs 128, CRC vs checksum, and configurable timers and retries. HS GSM SMS C Source Library HS GSM SMS is a C source library that provides a PC-based user application with access to a mobile phone's Short Message Service (SMS) functionality, according to ETSI standards: GSM 07.05 (ETS 300 585) - "Use of Data Terminal Equipment - Data Circuit terminating Equipment (DTE - DCE) interface for Short Message Service (SMS) and Cell Broadcast Service (CBS)" and GSM 03.40 - "Technical Realization of Short Message Service - Point-to-Point". This is a C++ equation parser which implements reverse Polish notation under the hood. It's quick, small, and efficient.
The C++ equation parser supports the following operators and follows operator precedence: '()*^-+/'; it can easily be extended to accommodate more. C++ Fraction class This is a very simple fraction class which allows you to keep those decimals in rational form rather than decimal. E-XD++MFC Library Professional E-XD++ MFC Library Professional Edition is an MFC/C++ framework for developing Microsoft Visio-like interactive 2D graphics and diagramming applications. E-XD++ stores graphical objects in a node (scene) graph and renders those objects onto the screen. The E-XD++ product supports both vector and raster graphics on the drawing surface. With E-XD++ Enterprise you can easily build Visio 2003-like applications within minutes! E-XD++MFC Library Enterprise E-XD++ MFC Library Enterprise Edition is an MFC/C++ framework for developing Microsoft Visio-like interactive 2D graphics and diagramming applications. E-XD++ stores graphical objects in a node (scene) graph and renders those objects onto the screen. The E-XD++ product supports both vector and raster graphics on the drawing surface. With E-XD++ Enterprise you can easily build Visio 2003-like applications within minutes! MySharpEbook - Use C# to Build eBooks Would you like to create your own e-book, e-brochure, e-catalog, e-magazine or e-newsletter using Visual C#? Use this C# source code to create customized e-publications for any occasion or recipient. The registered version of this package contains the full example source code for a complete Visual C# program that shows you how to create your own e-publications, which you can distribute in the form of an executable program. * 25+ fielded & full-text search options * dtSearch's own document filters highlight hits in popular file types * Optional API supports .NET, C++, Java, SQL, etc. dtSearch "covers all data sources"--eWeek; "Lightening fast"--Redmond Mag.
See www.dtsearch.com for hundreds of reviews & case studies; fully-functional evaluations. TextCaptureX is a COM library that allows screen text extraction in Windows applications. It is accessible from any COM-aware programming language. You can use it to extract text from any application that doesn't provide communication APIs, in order to feed another program. You can also use it to extract text from legacy systems, file directories, status bar messages, Windows error messages and more. TextCaptureX is not OCR-based, so it's incredibly fast, which makes it convenient to embed into dictionaries or translation tools. Imagine: your customer captures any text on screen, even when copy/paste is not available, with a hotkey, and with a mouse click your dictionary pops up with the text already translated/explained. We provide C++, Visual Basic and C# samples in the trial version in order to demonstrate the text capture library's features. COMM-DRV/CE is a professional serial communication library for Windows/CE & Pocket PC. It supports ZModem, YModem, and XModem file transfer protocols as well as modem communication. COMM-DRV/CE does not require that you have a Pocket PC to develop serial communication applications. It was developed with eMbedded Visual C++ 3.0, which includes a Pocket PC emulator that behaves virtually identically to any of the Pocket PCs on the market today. Moreover, we have included the DLLs necessary to support both the emulator and the actual Pocket PC devices. Major Features: *Supports eMbedded Visual C++ 3.0. *Complete source code provided. *Targets devices based on Windows CE 2.11 and Windows CE 3.0. *Compatible with: Pocket PC 2002, Pocket PC 2003, Pocket PC 2003 2nd Edition, Smartphone 2002, Smartphone 2003, Smartphone 2003 2nd Edition. *Support for ALL Hayes-compatible modems (AT command set). *Supports ALL single or multiport cards made for Pocket PC & Windows/CE devices. *Multiple ports may be active at the same time.
*Built-in hardware and software handshaking for flow control (DTR/DSR, RTS/CTS, XON/XOFF). *Adjustable communication buffers of any size. *Supports any baud rate in excess of 460K baud that is supported by the underlying hardware. *File transfer libraries allow Xmodem, Ymodem, and Zmodem file transfers on multiple ports at the same time. *Online help. E-mail Validation for .NET PatNorSoft Email Validation is a true native 100% managed class that can be implemented for both WebForms and WinForms. It checks email in two steps: 1- Check whether the email has a valid syntax. 2- If it is valid, proceed with a special DNS look-up to check the validity of the email address. Although email validation can't be 100% foolproof, the WebStorm Email Validation Component can cut off bad email addresses in your database by up to 80%. StimulReport.Net is a powerful reporting tool that helps you design flexible, feature-rich reports. It's a suite of 100% managed .NET components written in C#. Our product is compatible with Visual Studio .NET. StimulReport.Net makes full use of all the abilities of the .NET Framework. The package includes a report designer that is available to an end user. It lets you create and modify report templates and rendered documents. StimulReport.Net is recommended both for simple report rendering and for report rendering of unlimited complexity. Our product is runtime royalty-free. StimulReport.Net includes source code. Creating and using reports with StimulReport.Net is simple, quick and clear. StimulControls.Net is a collection of 100% native .NET Framework managed controls written in C#. All controls are distributed with source code. The collection allows developers to create high-quality applications with a cool design. The package contains DockingManager, Button, CheckBox, RadioButton, ComboBox, GroupBox, ListBox, TextBox, TabControl, ButtonEdit, ColorBox, FontBox, MenuProvider, NumericSpinEdit, OutlookBar, ToolBar, TreeViewBox, TreeView, and GroupBox components.
StimulControls.Net is distributed in two editions: Free Edition and Commercial Edition. Active Audio Record Component Directly records audio to wav, mp3, wma, ogg, vox, au and aiff format on the fly, without temporary files being created. Supports multiple sound cards and mixer lines. Sets the volume level for a mixer line. Silence detection during recording. Gets the audio channel volume level. Examples in VB and VBScript are provided. RTF to HTML DLL .Net The RTF-to-HTML DLL .Net is a robust and independent .NET assembly to convert Text and RTF documents into HTML/XHTML documents with CSS. The component is 100% created in managed C# and absolutely standalone. It does not require MS Office or any other word processor. The component offers full formatting support, allowing you to retain various formatting options from your original RTF files. This includes support for font styles, faces and sizes. Client / Server Comm Library for C/C++ A client/server component C++ library for TCP/IP and UDP/IP sockets communication across a network such as the internet or an intranet (LAN). Allows multiple servers and clients to run simultaneously. Servers can handle multiple connections concurrently. Create client / server file transfer. Create proxy, chat, file transfer, HTTP, SMTP, POP3, FTP and DNS client programs. Create SMTP proxy programs extracting a copy of all recipient addresses. Create POP3 proxy programs that filter incoming email for spam. Secure and private messaging. Supports "one time" passwords for improved security. Data and files can be encrypted and decrypted. Specify the maximum number of connections that the server will accept when listening on any one port. Multiple examples and 43 functions to create 32-bit and 64-bit client-server applications. Works with C++, C#.NET, Visual Studio, MFC and C++ Builder. The license covers all programming languages. Royalty-free.
Works with Windows 95/98/Me/NT/2000/2003/XP/Vista/Win7. Winsock Interface Library for C/C++ The Winsock Interface Library simplifies Winsock network communications programming and provides support for the most common Internet protocols such as Finger, SMTP, POP3, FTP, NNTP, and HTTP. Includes multiple C/C++ examples. Requires an MS, Borland, Watcom, or LCC-Win32 Windows C/C++ compiler. Lazarus Registration Component A simple DLL that lets you add a registration feature to your shareware application or installation script. Includes the DLL, a definitions file to import into your development environment, and examples of how to use it in the NullSoft installer program. Now also includes a graphical registration tool for generating customer keys.
Development Team Manager NHS.UK is looking for an experienced Senior Developer to take their first step as a Development Team Lead, joining a team alongside another Dev Team Lead and a Tech Lead, working with our 40 developers across the programme and within the 150-strong Software Development profession in NHS Digital. At NHS.UK we use agile/lean software practices to deliver www.nhs.uk , the largest government website in the UK with over 50 million visitors per month. We are currently undergoing transformation and migration from the old platform to Wagtail, a Django-based CMS. We also deliver some of the main NHS campaign sites - Change4Life, BeClearOnCancer and Smokefree - for the Department of Health and Public Health England, as well as MyNHS https://www.nhs.uk/Service-Search/performance/search, a data platform for comparing the performance of health suppliers. About the role This role comprises 50% delivery in a product team as a Senior Developer and 50% strategic work, and will be based at our offices in Leeds or London. As a developer, you need to have good problem-solving skills and be willing to take on responsibilities to get the job done. Senior Developers will work closely with other developers, sometimes pair-programming. They will also work closely with architects, business analysts and product owners from the business in order to refine stories for new projects. Usually embedded in a cross-functional team, they need to be able to work well with other disciplines and be willing to contribute to the team's success. As a lead, an interest in service-oriented architectures would be a plus, as well as being able to mentor other developers to assist with cross-skilling, as we are looking to build on the existing Python skills within our team, who have a strong .NET background.
You would also work with other leads across the programme to lay down the foundations of how the platform will serve the product teams, as well as working towards tech standards across these. You will be involved in our hiring process and will also work closely with some outsourced teams. Coaching and career management is something that we expect individuals at this level to be able to provide to other developers within NHS Digital. The leadership aspect of this role is participatory technical leadership, not line management. Within the development leadership team, as well as with senior management, this role will help drive forward the utilisation of the right toolsets and help shape the technology strategy and technology toolkit to support our team. Significant experience in the following areas is essential: - Software design, spanning both the development of systems and their operation when live - Software delivery methodologies such as Agile - Software development life cycle and change management processes, combined with technical approaches to enable them to operate in a continuous delivery context - Knowledge of software industry good practice and experience of its adoption to improve the quality of systems.
Handle ES6 Maps and Sets

Add support for ES6 Map and Set by using Array.from. Normally if Map and Set are available, Array.from should be available too. However, I guess one could have polyfills for some data types but not for the latter. So, I'm checking Array.from as well.

Coverage increased (+0.03%) to 99.187% when pulling 1600122ca057a898a7dbae72598bf55625309101 on zalmoxisus:master into ecba78dca1c567a291e353ba0c6cbe6f41965e14 on blakeembrey:master.

@blakeembrey not sure how better to proceed with CI tests on Node 0.x.

Coverage increased (+0.04%) to 99.194% when pulling 740ee3a3510e691a7484b041e663965c289580cb on zalmoxisus:master into ecba78dca1c567a291e353ba0c6cbe6f41965e14 on blakeembrey:master.

Coverage increased (+0.04%) to 99.197% when pulling 999427dfa4dfd3ff514a91f04fc011e2a71ec93f on zalmoxisus:master into ecba78dca1c567a291e353ba0c6cbe6f41965e14 on blakeembrey:master.

Thanks for the PR! A thought though - wouldn't it be better to use map.entries() and serialize iterators instead?

Well, it's good for parsing back, but not for visual inspection. We need a nice view for Redux DevTools Inspector: I guess seeing an array there is better than a function.

Gotcha, this is good 👍 I might just add a little extra whitespace padding around and merge later today. Thanks again.

Coverage decreased (-0.7%) to 98.425% when pulling 6a316c8ec415f0fbf396b3bd87f852510319452e on zalmoxisus:master into 6f6f7497bc4d5de32787eae5b5850026e376c096 on blakeembrey:master.

Updated this PR to also use toString representation instead of instanceof Set / Map.

Nice, thanks!
Sorry for the delay on this one, just lots of stuff to get around to and the recent TypeScript release has taken up a lot of my open source time 😄 @zalmoxisus I can't find that comment about using new Error now, but I still firmly believe you should look at using a different stringification process too. Can you point me to where you're rendering types for the inspector? Maybe I can do some PRs for you when I get some downtime 😄 For instance, when dealing with inheritance and whatnot, it's much nicer to do things like show object.constructor.name and switch depending on the more specific types. When it's a plain object, you can use the short-hand syntax and stop serializing after a certain line length, etc. @blakeembrey, thanks a lot for the release and for all the stuff implemented here! In Redux Inspector Monitor we're using it just here for representing the result of the diff returned from jsondiffpatch. Awesome, cheers! I'll add it to my list of things to do and hopefully check it out sometime soon for you. Hopefully I can integrate some useful additions 😄
Mastering the _variable: Step-by-Step Guide to Effortlessly Use and Fix Python Naming Conventions: Single and Double Underscores

Python has specific naming conventions that utilize underscores (_) to differentiate between public and non-public names, avoid conflicts, and adhere to Pythonic coding styles. Understanding and applying these conventions can make your code more readable, maintainable, and consistent, especially when collaborating with other developers. In this tutorial, we will explore these naming conventions and provide examples of their usage.

Public and Non-Public Names

Python distinguishes between public and non-public names by using a single leading underscore (_). By convention, a name starting with a single underscore indicates that it is non-public, meaning it should not be accessed or modified directly from outside its containing class or module. Non-public names are considered internal implementation details and should not be relied upon by other portions of the codebase.

Creating Public and Non-Public Names

Let's take a look at an example of creating public and non-public names. In such an example, public_method is a public name that can be accessed and used by other parts of the code. On the other hand, _non_public_method is a non-public name and should only be accessed within the class itself.

Using Public vs Non-Public Names

The usage of public and non-public names helps distinguish implementation details from intended public APIs. By following this convention, you can prevent other developers from relying on non-public names, keeping your code more maintainable and preventing potential issues when making changes to the internal implementation.

Naming Conventions in Modules and Packages

Naming conventions also apply to modules and packages. Similar to the usage in classes, a single leading underscore is used to denote non-public names within the module.
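The code example referenced above appears to have been lost in extraction; a minimal sketch consistent with the method names mentioned in the text (public_method, _non_public_method — the class name Greeter is illustrative) might look like:

```python
class Greeter:
    """Illustrates public vs. non-public naming conventions."""

    def public_method(self):
        # Part of the intended public API; callers may use this freely.
        return self._non_public_method().upper()

    def _non_public_method(self):
        # Leading underscore: internal detail, not meant for outside use.
        return "hello"


g = Greeter()
print(g.public_method())  # HELLO
```

Note that the single underscore is purely a convention: Python does not stop callers from writing g._non_public_method(); it only signals intent.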
Internal Variables and Constants

In modules, you can define internal variables and constants that are not intended to be accessed directly by other parts of the codebase. By prefixing these names with a single underscore, you indicate that they should be considered non-public. Similarly, when defining helper functions within a module, a single leading underscore can be used to indicate that these functions are not intended to be directly used from outside the module. In certain cases, you might have modules that are meant to be used only within a specific package and not directly by external code. These modules can be prefixed with a leading underscore to indicate their non-public nature.

Wildcard Imports and Non-Public Names

When using wildcard imports ( from module import *), non-public names are not imported into the current namespace. This behavior is intentional, as it helps prevent name clashes and avoids polluting the importing module's namespace with non-public names. It is generally recommended to avoid using wildcard imports and instead import specific names explicitly for better code readability and clarity.

Class With Non-Public Members

In addition to non-public names in modules, you can also utilize non-public members within classes. By following naming conventions, you can create class attributes and methods that are not meant to be accessed directly from outside the class. To define non-public attributes, you can prefix the attribute name with a single underscore. These attributes are intended for internal use within the class and should not be accessed directly from outside. Non-public methods within a class can be defined using the same convention of a single leading underscore. These methods are meant for internal use and should not be called directly from outside the class.
Double Leading Underscore: Python's Name Mangling

Python also utilizes double leading underscores to implement name mangling, a mechanism to avoid naming conflicts in subclasses. When a name is prefixed with two underscores, Python performs name mangling to prepend the class name, making the attribute or method effectively private to the class.

Understanding Name Mangling

Name mangling is used to avoid unintentionally overriding attributes or methods of a base class in a subclass. By prefixing attributes or method names with double underscores, they become hard to access directly from outside the class hierarchy.

Using Name Mangling in Inheritance

Name mangling helps prevent accidental name clashes in subclasses by effectively making the attribute or method private to the class hierarchy. It allows each class to have its own private implementation, separate from the base class.

Other Usages of Underscores in Python

Apart from naming conventions related to public and non-public names, underscores have other commonly accepted usages in Python programming:

Trailing Underscores in Python Names: Trailing underscores are typically used to avoid naming conflicts with Python built-in keywords or identifiers. For example, if the desired variable name is already taken by a keyword, appending an underscore can resolve the conflict.

Dunder Names in Python: Dunder names, also known as magic or special names, are reserved for specific meanings or behaviors in Python. These names have a double underscore at the beginning and end, such as __str__. They provide hooks for customization and are conventionally used for specific purposes.

Understanding and following Python's naming conventions, which include single and double underscores, can greatly enhance your code's readability, maintainability, and consistency.
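A short sketch of name mangling across a base class and a subclass (the class names Base and Child and the attribute __token are illustrative, not from the original text):

```python
class Base:
    def __init__(self):
        self.__token = "base"   # stored as _Base__token after mangling

    def base_token(self):
        return self.__token     # resolves to _Base__token


class Child(Base):
    def __init__(self):
        super().__init__()
        self.__token = "child"  # stored as _Child__token: no clash with Base

    def child_token(self):
        return self.__token     # resolves to _Child__token


c = Child()
print(c.base_token())   # base
print(c.child_token())  # child
print(c._Base__token)   # base - the mangled name is still reachable
```

Because the two attributes are mangled to different names, each class keeps its own value; the last line also shows that mangling is obfuscation, not true access control.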
By differentiating public and non-public names, utilizing name mangling, and adhering to Pythonic coding styles, you can produce code that is easier to understand and collaborate on. Remember to always consider the intended audience of your code and apply the appropriate naming conventions for a better development experience. Continue learning about Python’s naming conventions and refine your programming skills. Python offers a vast ecosystem with extensive libraries and frameworks, providing endless possibilities for professional development. Get started with Python and make the most out of its naming conventions and coding styles to become a proficient developer.
I realized recently that I’ve been through three different stages of using Twitter as my number of followers / followed has increased (they’ve always been roughly the same). Stage 1 – friends – only connected to a small number of people that I know well. All tweets are fairly valuable and of interest to me. I know the person behind each handle, at least by reputation if not in real life. I read every tweet, going back to catch up on them if I missed a period of time. In general, I treated Twitter in this phase like a must-read information source like email. Stage 2 – community – connected to two groups: a slightly larger version of the strong core from Stage 1 (as people have joined) and another sizable group of people that are doing stuff of interest to me that I don’t know well. I’ve relaxed the constraints around knowing the network. Maybe this has something to do with breaking some limit related to Dunbar’s number. I actively sought out people in industries I was interested in by following those conversing with people already in my network who had interesting tweet histories. I also followed hashtags around a few key events that were good indicators of overlapping interest to discover others. JavaOne 2008 hit in the meaty part of this stage for me and Twitter served as a strong enhancement to my conference experience. It led me to lunch with people I kind of knew by reputation but didn’t know for real and helped me find sessions that were better than my current one (in real-time). The volume of tweets in this stage was such that I could leave my Twitter client open and “catch up” on all conversations the next time I looked at it if I wanted to, skimming in some cases. Stage 3 – presence – I’ve now reached a third stage where I am following over 500 people. I’ve lost any hope of following “all” or even a fraction of the interesting conversations I could participate in. That is somewhat freeing. 
When I have something to say or I’m waiting for a compile I’ll check the Twitter client and just take a top sip off the firehose. I’ll check replies to me and might hit a link or two in the last 30-40 posts. I make no attempt to look further back than that. I now rely heavily on persistent searches (especially now that Twhirl has such awesome support for it) to find conversations of higher than average interest instead of actually reading all traffic. That means I miss stuff (but at least I’m blissfully unaware of it). The combination of serendipitous meaningful conversations from sipping the stream (due to having built a set of followed people that I find interesting) and being able to find topics of top interest (regardless of whether I follow them) via persistent search is equally as rewarding as the first two stages but in different ways. In the first stage, it was the feeling of being in the same room with all the people I know. In the second stage, it was the feeling of being connected to a community. In the third, I can still tap into the feeling from the first and second stages, but I do so purely as a function of my time while still retaining the ability to skim off (and promote) the things I care about most. I think the one thing I do miss at this point is seeing “all” the tweets of the people I know the best (friends or co-workers). I could build that through searches too except I’m a lazy man. Maybe some day I’ll get to it. Follow me at @puredanger.
import os
import sys
import shutil

from twisted.internet import reactor
from twisted.internet.endpoints import serverFromString
from twisted.logger import globalLogBeginner, FileLogObserver, formatEvent
from twisted.web.resource import Resource
from twisted.web.server import Site
from twisted.web.static import File
from twisted.web.wsgi import WSGIResource
from twisted.python.threadpool import ThreadPool

from server import config
from server.api import app
import autoreload


def run_app_in_twisted():
    globalLogBeginner.beginLoggingTo(
        [FileLogObserver(sys.stdout, lambda _: formatEvent(_) + "\n")])

    threadpool = ThreadPool(maxthreads=30)
    wsgi_app = WSGIResource(reactor, threadpool, app)

    class ServerResource(Resource):
        isLeaf = True

        def __init__(self, wsgi):
            Resource.__init__(self)
            self._wsgi = wsgi

        def render(self, request):
            """Adds headers to disable caching of api calls"""
            request.prepath = []
            request.postpath = ['api'] + request.postpath[:]
            r = self._wsgi.render(request)
            request.responseHeaders.setRawHeaders(
                b'Cache-Control', [b'no-cache', b'no-store', b'must-revalidate'])
            request.responseHeaders.setRawHeaders(b'expires', [b'0'])
            return r

    # web-client files served from here
    base_resource = File('../client/dist')
    # api requests must go through /api
    base_resource.putChild('api', ServerResource(wsgi_app))
    # downloadable files go here
    base_resource.putChild('file', File(config.SAVE_FOLDER))
    site = Site(base_resource)

    # Start the threadpool now, shut it down when we're closing
    threadpool.start()
    reactor.addSystemEventTrigger('before', 'shutdown', threadpool.stop)

    endpoint = serverFromString(reactor, "tcp:port=" + str(config.PORT))
    endpoint.listen(site)
    reactor.run()


if __name__ == "__main__":
    autoreload.main(run_app_in_twisted)