[libcxx-commits] [PATCH] D98983: [libcxx] adds concepts `std::totally_ordered` and `std::totally_ordered_with`
Arthur O'Dwyer via Phabricator via libcxx-commits
libcxx-commits at lists.llvm.org
Tue Mar 23 13:23:33 PDT 2021

Quuxplusone added inline comments.

Comment at: libcxx/test/std/concepts/comparison/concepts.totallyordered/totally_ordered.compile.pass.cpp:45

> Mordante wrote:
> > This feels wrong; when a double has a NaN value there's no total order. I wonder whether this is a bug in the Standard.
> It's a common misconception that floating-point numbers can't model `totally_ordered` because of NaN. From [[ http://eel.is/c++draft/cmp.concept#1.1 | cmp.concept ]]:
> > `t < u`, `t <= u`, `t > u`, `t >= u`, `u < t`, `u <= t`, `u > t`, and `u >= t` have the same domain.
> NaN is outside this domain, which I believe is documented by floating-point specifications. Andrew Sutton sums this up in his [[ https://youtu.be/ZeU6OPaGxwM?t=1980 | CppCon 2019 talk ]]. Specifically:
> > Concepts do not hide preconditions. Syntactic operations are not total operations. Floating point types are totally ordered even though NaN exists.... because there's a precondition on the algorithms that use the operation.... Random-access iterators are totally ordered, even though the relation is really sparse.

+1 @cjdb. A concept like `totally_ordered` is ultimately just a way of organizing source code, like asking "does it make sense to have a `std::set` of these things?" You can make a `std::set<double>` just fine. It does misbehave badly if you try to put NaN into it, but you just have to remember not to do that.
Andrew Sutton wrote:
> > Random-access iterators are totally ordered, even though the relation is really sparse

Now //that's// stretching the philosophy to the breaking point, I think ;) but sure, it can make sense to have a `std::set<std::deque<T>::iterator>`, as long as you're careful to put into it only iterators from a single container and never let them become invalidated.

Or come at it from the other side: Concepts describe the requirements of algorithms, i.e. what cases the algorithm needs to worry about. If the places-where-the-natural-ordering-fails also happen to be places-where-our-algorithms-don't-care-about-the-result (maybe because they're UB), then the concept remains useful in practice.
Logitech G500 right click needs extra force and makes a weird clamping sound

It's harder to press down, and depending on where you apply force it may also make a weird sound, as if it 'clamps' at some point (this is especially irritating since it feels like it's glued). Any information that might help?

Self-answer: I fixed this with some hackery. Under the right click (getting at it requires opening the mouse up) there's a plastic, sort of "clamp" which appears to fit into a hole of some sort, acting as a kind of stabilization mechanism. (Sorry for the English.) The point is: if you just cut it off, that fixes it. I had initially thought of filing (as with a fingernail file) the edges where the key contacts the rest of the mouse around it, but that did nothing. There are no noticeable stabilization problems without it. EDIT: It may still need a "prop" of some sort under the key where it contacts the actual 'clicker' (e.g. small pieces of duct tape). It's not extremely easy but not impossible. It's been stable for at least a month now.

Mice tend to get buildups of gunk inside them because most people eat and drink at their computers. You have two choices: buy a new mouse, or disassemble the mouse and clean it. The latter is worth a shot, since you really don't have much to lose. eHow's basic instructions for disassembling a mouse may be helpful. Unfortunately, the microswitches most commonly used for mouse buttons are both very small and typically "sealed" units (i.e. fully enclosed and not meant to come apart). But you should be able to spot them easily enough. I've had some success just bathing them in alcohol applied with a Q-tip and then waiting for them to dry out. Good luck!

Problem is, this is an extremely new mouse and I have these issues: a) warranty (a minor issue IMO, but still an issue), and b) if you take the mouse apart, the feet's glue gets much looser after reattachment, and I have recent experience of this being a huge headache.

@user30091: I know what you mean. But in my experience, if you're careful not to damage the glue pad when you remove a foot, they go back on and stay on okay. Use a very small flat-blade screwdriver to gently pry up one end or side of the pad; take it slow and the pad should come away with the glue intact. But I take your point about it being new and still under warranty. At least mice are cheap. And as I said, if this problem is making you not want to use the mouse, what have you got to lose?
User Manual/Appendix: Music and sound

Some games (such as Sam & Max Hit the Road) only contain music in the form of MIDI data. At one time, this prevented music for these games from working on platforms that do not support MIDI, or on soundcards that do not provide MIDI drivers (e.g. many soundcards will not play MIDI under Linux). ScummVM can now emulate MIDI using AdLib, FluidSynth MIDI or MT-32 emulation modes: on most operating systems and for most games, ScummVM will by default use AdLib emulation for music playback. However, if you are capable of using native MIDI, we recommend using one of the native MIDI drivers below for best sound and performance. This may require manual configuration on some systems.

|null||Null output: don't play any music|
|adlib||Internal AdLib emulation (default)|
|fluidsynth||FluidSynth MIDI emulation|
|mt32||Internal MT-32 emulation|
|pcjr||Internal PCjr emulation (only usable in SCUMM games)|
|pcspk||Internal PC Speaker emulation|
|towns||Internal FM-TOWNS YM2612 emulation (only usable in SCUMM FM-TOWNS games)|
|CAMD||Amiga MIDI (Commodore Amiga MIDI Driver - Amiga only)|
|windows||Windows MIDI (Windows only). Uses built-in sequencer|
|core||CoreAudio sound (Mac OS X only)|
|coremidi||CoreMIDI sound (Mac OS X only). Use only if you have a hardware MIDI synthesizer|
|alsa||Output using ALSA sequencer device (Unix only)|
|seq||Use /dev/sequencer for MIDI (Unix only)|
|timidity||Connect to TiMidity++ MIDI server|

You can either select a sound driver using the Launcher, or run ScummVM with the '-e' option (see Command line options), for example:

scummvm -eadlib monkey2

By default an AdLib card will be emulated and ScummVM will output the music as sampled waves. This is the default mode for most games, and offers the best compatibility between machines and games. Up to version 0.13.1 only one emulator was available (the MAME OPL emulator); after version 0.13.1 the DOSBox OPL emulator was added.
However, this emulator is still experimental.

FluidSynth MIDI emulation

If ScummVM was built with libfluidsynth support, it will be able to play MIDI music through the FluidSynth driver. You will have to specify a SoundFont to use, however. Since the default output volume from FluidSynth can be fairly low, ScummVM sets the gain by default to get a stronger signal. This can be further adjusted using the --midi-gain command-line option, or the "midi_gain" config file setting. The setting can take any value from 0 through 1000, with the default being 100. (This corresponds to FluidSynth's gain settings of 0.0 through 10.0, which are presumably measured in decibels.)

NOTE: The processor requirements for FluidSynth can be fairly high in some cases. A fast CPU is recommended.

MT-32 emulation

Some games which contain MIDI music data also have improved tracks designed for the MT-32 sound module. ScummVM can now emulate this device; however, you must provide original MT-32 ROMs to make it work:

MT32_PCM.ROM - IC21 (512KB)
MT32_CONTROL.ROM - IC26 (32KB) and IC27 (32KB), interleaved byte-wise

Place these ROMs in the game directory, in your extrapath, or in the directory where your ScummVM executable resides. You don't need to specify --native-mt32 with this driver, as it automatically gets turned on.

NOTE: The processor requirements for the emulator are quite high; a fast CPU is strongly recommended.

Native MIDI drivers

Use the appropriate -e<mode> command line option from the list above to select your preferred MIDI device. For example, if you wish to use the Windows MIDI driver, use the -ewindows option.

Sequencer MIDI (Unix only)

If your soundcard driver supports a sequencer, you may set the environment variable "SCUMMVM_MIDI" to your sequencer device -- for example, /dev/sequencer. If you have problems with not hearing audio in this configuration, you may need to set the "SCUMMVM_MIDIPORT" variable to 1 or 2. This selects the port on the selected sequencer to use.
Then start ScummVM with the -eseq parameter. This should work on several cards, and may offer better performance and quality than AdLib emulation. However, for those systems where sequencer support does not work, you can always fall back on AdLib emulation.

ALSA sequencer (Unix only)

If you have installed the ALSA driver with sequencer support, then set the environment variable SCUMMVM_PORT or the config file parameter alsa_port to your sequencer port. The default is "65:0". Here is a little how-to on using the ALSA sequencer with your soundcard. In all cases, to get a list of all the sequencer ports you have, try the command:

aconnect -o -l

This should give output similar to:

client 64: 'External MIDI 0' [type=kernel]
    0 'MIDI 0-0        '
client 65: 'Emu10k1 WaveTable' [type=kernel]
    0 'Emu10k1 Port 0  '
    1 'Emu10k1 Port 1  '
    2 'Emu10k1 Port 2  '
    3 'Emu10k1 Port 3  '
client 128: 'Client-128' [type=user]
    0 'TiMidity port 0 '
    1 'TiMidity port 1 '

This means the external MIDI output of the sound card is located on port 64:0, there are four WaveTable MIDI outputs at 65:0, 65:1, 65:2 and 65:3, and there are two TiMidity ports, located at 128:0 and 128:1.

If you have an FM chip on your card, like the SB16, then you have to load the FM instrument definitions using the sbiload software. Example:

sbiload -p 65:0 /etc/std.o3 /etc/drums.o3

If you have a WaveTable-capable sound card, you have to load an sbk or sf2 SoundFont using the sfxload software.

If you don't have a MIDI-capable soundcard, there are two options: FluidSynth and TiMidity. We recommend FluidSynth, as on many systems TiMidity will 'lag' behind the music. This is very noticeable in iMUSE-enabled games, which use fast and dynamic music transitions. Running TiMidity as root will allow it to set up real-time priority, which may reduce music lag.

Asking TiMidity to become an ALSA sequencer:

timidity -iAqqq -B2,8 -Os1S -s 44100 &

(If you get distorted output with this setting, you can try dropping the -B2,8 or changing the value.)
Asking FluidSynth to become an ALSA sequencer (using SoundFonts):

fluidsynth -m alsa_seq /path/to/8mbgmsfx.sf2

Once either TiMidity or FluidSynth is running, use the 'aconnect -o -l' command as described earlier in this section.

Using the TiMidity++ MIDI server

If your system lacks any MIDI sequencer, but you still want better MIDI quality than the default AdLib emulation can offer, you can try the TiMidity++ MIDI server. See http://timidity.sourceforge.net/ for download and install instructions. First, you need to start a daemon:

timidity -ir 7777

Now you can start ScummVM and try selecting TiMidity music output. By default, it will connect to localhost:7777, but you can change the host/port by defining the "TIMIDITY_HOST" environment variable.

Using compressed audio files

Output sample rate

The output sample rate tells ScummVM how many sound samples to play per channel per second. There is much that could be said on this subject, but most of it is beyond the scope of this document. The short version is that for most games 22050 Hz is fine, but in some cases 44100 Hz is preferable. On extremely low-end systems you may want to use 11025 Hz, but it's unlikely that you will have to worry about that.

To elaborate, most of the sounds that ScummVM has to play were sampled at either 22050 Hz or 11025 Hz. Using a higher sample rate will not magically improve the quality of these sounds, so 22050 Hz is fine. Some games use CD audio. If you use compressed files for this, they are probably sampled at 44100 Hz, so for these games that may be a better choice of sample rate.

When using the AdLib, FM-TOWNS, PC Speaker or IBM PCjr music drivers, ScummVM is responsible for generating the samples. Usually 22050 Hz will be plenty for these, but there is at least one piece of AdLib music in Beneath a Steel Sky that sounds a lot better at 44100 Hz.

Using frequencies in between is not recommended. For one thing, your sound card may not support it.
In theory, ScummVM should fall back on a sensible frequency in that case, but don’t count on it. More importantly, ScummVM has to resample all sounds to its output frequency. This is much easier to do well if the output frequency is a multiple of the original frequency.
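The options discussed in this appendix live in ScummVM's ini-style configuration file. The fragment below is a sketch of how they might fit together; the key names `music_driver` and `soundfont` are my assumptions about the config-file equivalents of the Launcher/`-e` settings (only `midi_gain` is named explicitly above), and the SoundFont path is a placeholder:

```ini
# Hypothetical scummvm.ini fragment (key names other than midi_gain are assumptions)
[scummvm]
music_driver=fluidsynth                        ; or adlib, mt32, alsa, timidity, ...
soundfont=/path/to/8mbgmsfx.sf2                ; required for FluidSynth
midi_gain=100                                  ; 0-1000, default 100
output_rate=22050                              ; 22050 Hz is fine for most games
```

Check your installed version's documentation for the authoritative key names before relying on this.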
On-delay circuit not working

I'm trying to make an on-delay circuit with passive components. I use 3 identical AAA alkaline batteries (4.5 V) and a PNP 2N2907 BJT with 600 mA collector current to heat (glow) a small resistance wire. I use a 1000 uF capacitor and a 20k pot for the adjustable RC delay, and also a 6.1 V Zener diode. However, when I press the push button in my circuit, the heating element does not even warm up. Following is my schematic. My question is: is the BJT not powerful enough to deliver the current and heat up the element, or is my schematic wrong and should I select another component? If the current is not enough, how can I add a wire-coil inductor to charge on a delay and push more current to the heating element?

updated schematic
updated schematic based on new comments

Can you tell us how much current the heating element draws when cold and when hot, and how much delay you want?

@GodJihyo I think my heating element needs around 1 A, and I want a delay of not more than 10 seconds.

If you need 1 A, a 600 mA transistor isn't going to handle it. And if it takes 1 A when it's heated, it will take a lot more when cold.

@GodJihyo Can I parallel two BJTs?

It's possible, but you still have other problems. You need to know the characteristics of the load and the power source. What is the current draw profile of the load? Will the batteries supply the current you need for the time that you need it? And we haven't even gotten to the matter of whether the time delay will work or not. The Zener diode will never conduct: its breakdown voltage is greater than the voltage from the batteries. Neither transistor will ever conduct. Also, if the batteries are connected as shown, they're backwards.

Why do you want a 10 second delay? If you get the circuit working, it won't do anything more than waiting 10 seconds before pushing the switch would. If you want to start a time delay after pressing the button, you might want a latching circuit, best accomplished with a 555 timer.
But it looks like you want to keep the output off until the button has been held for at least 10 seconds, and then have it stay on as long as the button is held.

Why don't you simply press the button 10 seconds later? Transistors, including BJTs, are not passive components....

You are trying to turn on a PNP transistor with a positive voltage from base to collector; to turn on a PNP you need a negative voltage from base to emitter. Another problem is that you are using a 6.1 V Zener but your batteries only supply 4.5 V, so the Zener will never see enough voltage to conduct. You would either need a higher voltage from the batteries or a different circuit altogether. You could try an NPN transistor and swap the emitter and collector around, but you need one that will handle the cold current of the wire and enough base current to saturate the transistor, which may require using another transistor as a driver. Other options are a MOSFET or a relay, although a relay would add extra drain on the batteries. You also need to take the initial inrush current into account: a heating wire will have a low resistance when cold, and the resistance will increase as it heats up. This will cause a larger current draw on startup than in operation. You can deal with the inrush in several ways: either use a device that can handle the maximum inrush current, or limit the inrush current.

The reason I use a PNP is that I couldn't find any high-current NPN on the market. But I was suspicious that the Zener diode and my capacitor would not break over to drive the transistor. I updated the schematic.

@mehrdad You can't find a high-current NPN, but you can find a high-current PNP? That sounds really strange to me, because NPN transistors are generally better in most respects than PNP equivalents, so they're much more common, especially for power transistors.
@Hearth The price tag is the biggest factor in this project. If I can use parallel PNPs, that's still much cheaper than a high-current NPN, a MOSFET, or even a 555 timer.

That time delay circuit needs a complete redesign, so here is my idea for something that should work. It uses an NPN/PNP pair as an SCR, so it will latch on and remain on as long as power is applied.

The price tag is the biggest factor in this project. Although your circuit works, it needs more components, so it costs more than a 555 timer (the robust solution). Can you check the new circuit update?

Simulate it or build it and find out. Or use an actual SCR.
I cannot use filter with a function as parameter

Hi :) I don't know if it's me or not, but I am unable to use the filter function in twig. Here is my test script:

import * as Twig from 'twig';

(async () => {
    Twig.extendFilter("filter", (value, args) => {
        return value.filter(args);
    });
    const template = Twig.twig({ data: '{{ [1,2,3]|filter(test => test > 1)|join(",") }}' });
    const content = await template.renderAsync();
    console.log(content);
})().catch(console.error);

I would like to extend twigjs to add map/filter filters, but twigjs doesn't seem to implement a function parser :/ Here is my error stack:

Error compiling twig template undefined: TwigException: Unable to parse '=> test > 1)|join(",")' at template position19
TypeError: Cannot read property 'length' of undefined
    at Object.Twig.async.forEach (C:\Users\duto\Desktop\test\node_modules\twig\twig.js:8937:19)
    at Twig.ParseState.parse (C:\Users\duto\Desktop\test\node_modules\twig\twig.js:1576:26)
    at Twig.ParseState.parseAsync (C:\Users\duto\Desktop\test\node_modules\twig\twig.js:8606:17)
    at Twig.Template.<anonymous> (C:\Users\duto\Desktop\test\node_modules\twig\twig.js:1750:20)
    at Object.Twig.async.potentiallyAsync (C:\Users\duto\Desktop\test\node_modules\twig\twig.js:8668:42)
    at Twig.Template.render (C:\Users\duto\Desktop\test\node_modules\twig\twig.js:1748:23)
    at Twig.Template.renderAsync (C:\Users\duto\Desktop\test\node_modules\twig\twig.js:8620:17)
    at C:\Users\duto\Desktop\test\index.ts:12:36
    at step (C:\Users\duto\Desktop\test\index.ts:33:23)
    at Object.next (C:\Users\duto\Desktop\test\index.ts:14:53)

How could we fix this? I found a small workaround: build the function inside the extension, like this:

import * as Twig from 'twig';

(async () => {
    Twig.extendFilter("filter", (value, args) => {
        const f = new Function(...args);
        return value.filter(f);
    });
    const template = Twig.twig({ data: '{{ [1,2,3]|filter("test","return test > 1")|join(",") }}' });
    const content = await template.renderAsync();
    console.log(content);
})().catch(console.error);

It would be nice if the twig parser were able to parse the function directly. Thanks!

Hello there, we are having the same problem in our projects with the twig |filter() filter. https://twig.symfony.com/doc/2.x/filters/filter.html

{% set sizes = [34, 36, 38, 40, 42] %}
{{ sizes|filter(v => v > 38)|join(', ') }}

These lines will cause the error as described above:

NonErrorEmittedError: (Emitted value instead of an instance of Error) TwigException: Unable to parse '=> v > 38)|join(', ')' at template position2

Is this filter supported, as suggested by the implementation notes? Thank you!

Hello @jzuleger, you shared the documentation for Twig, which runs on PHP. This filter is implemented in the PHP version, not in the twigjs package. Here is the list of all the features that are implemented in the twigjs package: https://github.com/twigjs/twig.js/wiki/Implementation-Notes

Hey @dupasj, that's absolutely right. My lines are copied from the documentation for the PHP version of Twig. But the PHP documentation is added as a docs reference in the twig.js implementation notes - e.g. for filters. So my understanding would be that if they are listed as supported, the filters would work as they do in PHP. But the "filter" filter does not.

@jzuleger, twigjs doesn't support the filter filter, if I'm not mistaken. If you are looking for a Twig implementation that supports the filter filter, maybe you can give Twing a try: https://www.npmjs.com/package/twing Disclaimer: I'm the author of Twing and I rarely advertise my work here, but in this case, since it is about an unsupported feature, I allowed myself to do so. :)

Duplicate of #652.
You can create your own VPN protocol using a combination of encryption algorithms, tunneling protocols, and other networking concepts. This can be a challenging project, but it can be a great way to learn about network security and cryptography.

A Virtual Private Network (VPN) protocol is a set of rules that governs the way data is transmitted between a client and a server. VPN protocols provide secure and private communication over the internet by encrypting all data transmitted between the two parties. While there are many VPN protocols available, you may want to create your own custom VPN protocol for various reasons, such as improving security or optimizing network performance. In this article, we will guide you through the process of creating a custom VPN protocol.

Step 1: Define the goals of your custom VPN protocol

Before creating a custom VPN protocol, you need to define the goals you want to achieve with it. Some possible goals could be:

Improved security: A custom VPN protocol can use stronger encryption algorithms or implement additional security features to improve the overall security of the VPN.

Optimized performance: A custom VPN protocol can be designed to reduce latency and improve network throughput, which can be especially important for bandwidth-intensive applications such as video streaming.

Better compatibility: A custom VPN protocol can be designed to work with specific devices or platforms that may not be supported by existing VPN protocols.

Step 2: Choose the underlying transport protocol

The underlying transport protocol is the protocol used to transmit data between the client and server. The most common transport protocols used in VPNs are TCP (Transmission Control Protocol) and UDP (User Datagram Protocol). TCP is reliable but can be slow and introduce latency, while UDP is faster but less reliable. You need to choose the transport protocol that best suits your needs.
Step 3: Design the encryption scheme

Encryption is a critical component of any VPN protocol, as it ensures that data transmitted between the client and server is secure and private. You need to design an encryption scheme that meets your security requirements. Some encryption algorithms historically used in VPNs are AES (Advanced Encryption Standard), Blowfish, and 3DES (Triple DES); of these, only AES is still considered a sound choice today, as Blowfish and 3DES are legacy ciphers.

Step 4: Define the handshake protocol

The handshake protocol is the process by which the client and server establish a secure connection. It involves exchanging messages that authenticate the parties and negotiate the encryption parameters. You need to define a handshake protocol that is secure and efficient.

Step 5: Test and refine your custom VPN protocol

Once you have designed your custom VPN protocol, you need to test it thoroughly to ensure that it works as intended. You should test it under different network conditions and with various devices and platforms to ensure that it is compatible and performs well. You may need to refine your custom VPN protocol based on your test results.

In conclusion, creating a custom VPN protocol requires careful planning and design. By following the steps outlined in this article, you can create a custom VPN protocol that meets your specific goals and provides secure and private communication over the internet. Remember that security is paramount, and you should always test and refine your custom VPN protocol to ensure that it is effective and secure.
Can you check the Event Viewer (Applications) to see if you can find more info on the "internal error"? What I would try in your situation is: *uninstall Reader 9, *download and install Reader 8.1.3 from http://get.adobe.com/reader/otherversions/

Thanks for your reply. I cannot locate any Event Viewer app or file on my computer. I tried to uninstall Reader 9 per your suggestion, but it will not allow me to!!!! I get an error message from Adobe saying "Error 1606. Could not access network location %APPDATA%\", whatever the heck that means. All I know is this is getting to be a nightmare. I cannot even access tech support because the web page insists over and over that my password and ID do not match, even though I have gotten them from Adobe themselves. And they will not take a phone call because my Reader 9 has not been registered, even though it has been. I am THE most frustrated non-techy in Texas now. I have spent about 4 days running into complete dead ends. Any other suggestions? Microsoft won't even talk to me. LOL

See this article on how to use the Event Viewer: http://support.microsoft.com/kb/308427

One problem that you have is that you are discussing the same problem in two different threads in this forum, which are going in different directions; consider stopping one or the other. Can you tell me what the contents of your %APPDATA% environment variable is (Start -> Run -> cmd [OK] -> echo %appdata%) - is it pointing to a folder 'Application Data' on your local disk? Since you cannot use the Windows Installer Cleanup Utility (according to your other thread), can you try to run a repair install of your Adobe Reader (Add/Remove Programs | select Adobe Reader | click [Change] | click [Next] | select 'Repair' and click [Next])?

Yes, I understand that I am talking now about different threads. That's because I seem to run across a new problem every time I attempt a fix. Thus the "Catch 22" dead-end comment. That is my situation exactly.
Now, I cannot even uninstall v9. I have no idea what my environment variable is, because I find no Event Viewer at all in the list of apps or programs (and I don't know what an environment variable is to begin with, lol). Keep in mind that I cannot even OPEN v9 now to try some of the fixes that have been suggested. Nor can I change or uninstall it. I was going to uninstall it and try to find an older version, but now that is impossible. And here's another new tilt: I have been getting messages today that Flash Player wants to reinstall. This has happened 3-4 times unsolicited. Am I being haunted by an Adobe ghost??? lol

I'm having the exact same problem with Reader 9.0. I'm not even sure when it started, but I've been unable to open the program and email attachments for days. I have Windows Vista and I've tried everything I've read to make Reader 9.0 work, with no success. I keep getting the error 1606 message when trying to open or uninstall the program. Any more suggestions?

I just spent two hours on the phone with the Geek Squad online support tech group. They used remote access, many many checks and several tools. They could not fix the problem either. So if they can't, we probably can't either. I wish I had read your reply before I accepted their "verdict" of a possible RAM problem. Your exact same situation would seem to me to disprove that diagnosis and support a theory of some incompatibility with Vista and/or MS updates. Keep in touch and let's compare notes.

Hello, when I try to open a PDF document, it is opened automatically in Word. How can I change Acrobat Reader 9 to open the document automatically as a PDF document? Thanks for your advice.

Having the same problem: cannot open email PDF messages. Using Adobe Reader 9.0 and Windows XP. I have saved messages to My Documents and then tried to open them with 9.0. I get a blank page. It doesn't matter what source the emails are sent from. At first I would get a message saying "not enough data to open image".
Thanks in advance, Tom

Is this your own machine or a work machine in an IT-managed environment? If the latter, I'd call IT support before making changes to your system. I suspect that in that scenario you're running into this issue: If it's your own machine, check this MS kb doc:

Thanks Simon; it's my own. I have read and printed the instructions on how to fix the error 1606 message. Perhaps I will work up the courage to actually try and follow those instructions. Whew! For someone non-computer-literate like myself they appear overwhelming. But it's better than taking the machine into Geek Squad. I'll let you know if I blew the dang thing up!

This does not appear to be a Vista-only problem. See note #8, which states I am having the problem using XP. Another attempt I made to open PDF emails was to copy them to a floppy and then try to open them on another computer running Windows 98. Same result. Hey Adobe!!! If you are listening, we need help!

> Hey Adobe!!! If you are listening we need help!

Please open your own thread with your own problem instead of hijacking a thread with a completely different issue.

Wiilhoite and I seem to be having the same problem, that is, we cannot open Adobe Reader 9. This does not seem to be "hijacking", but if it does to you, you have my apologies. Well, perhaps as a novice to "threading" I may be guilty of "hijacking" according to some folks' terms. I just wanted to include all of the situations I have run into in attempting to fix the problem, in hopes someone might help me get started in the right direction. However, the root problem is that I cannot open Adobe Reader v9.

I am having the exact same problem as Tom. I just purchased a new computer for home and it came with v9 installed, but it doesn't work. I can't uninstall it and can't install another version of Reader, since it states that I already have a newer version.
I read in another thread that sometimes 8.1.2 doesn't completely uninstall and therefore v9 may not be completely installed. Is anyone aware of this? This is terribly frustrating. Not only can't I open any PDF files, but when I try to do this online, it closes the link to the site I am on. Any help would be greatly appreciated.

Tom - I am still having the 1606 error problem on my computer using the administrator account. However, my husband created a user account and Reader v9 worked. So I created a user account and now I also have access. The administrator account is still not running v9, but at least I have access using the user account. Hope this works for you.

Are you a Domain User or an Admin on your PC? The reason I ask is because you mentioned that you cannot remove Adobe Reader 9, possibly because you don't have permissions. Then again it is Vista, so maybe you have to log in as the Administrator of the machine. I know it sounds dumb, and forgive me for having to ask, but it's a common problem that I've been running into.

I am the Admin on my PC. I cannot uninstall Adobe Reader 9 when logged on as the Admin; additionally, the computer will not open any PDF files in this account. However, I created a user account and I can open PDF files when I log on as a user. I would like to fix the Admin account, as I am concerned that this will eventually affect the user accounts, but at least I have functionality as a user at this time. I believe there is a problem with Reader 9 for users with profiles stored on a server.

I am having the same exact problem as you are having. I am also using Windows Vista and am having problems with Adobe Reader 9.1. The download doesn't seem to fully complete. I am also an administrator and logged in as such on this computer. I had another user account that I created for others that might use this computer.
I logged into that user account yesterday and I received notifications that updates were available for Adobe, Internet Explorer, and Firefox. I installed all the updates and got the problem with Adobe Reader. Could the problem have been created by me installing the update for Adobe Reader through the user account? Adobe Reader worked on the user account, but then I logged off and logged into my main account (administrator account) and there was a problem with Adobe. I tried to uninstall Adobe Reader (through the Control Panel) and was unsuccessful (I figured I could uninstall it and then reinstall). I then just deleted the user account and arrived at where I am right now.

I click on Adobe Reader's shortcut on my desktop and get a popup stating: "An internal error occurred." Then I close the popup and get the famous green bar from hell that is associated with Vista stating: "Adobe Reader 9.1 has stopped working. A problem caused the program to stop working correctly. Windows will close the program and notify you if a solution is available." blah blah blah

I then click to close that window... Windows returns a solution... to go to Adobe Incorporated... etc. and sends me to download Adobe Reader 9.1... I proceed to download.

First I get a popup window during the installation process stating: "Welcome to Setup for Adobe Reader 9.1. Adobe Reader 9.1 Setup is preparing to guide you through the program setup process. Please wait. Computing space requirements."

Then the process gets interrupted with this popup: "Setup Completed. Setup was interrupted before Adobe Reader 9.1 could be completely installed. Your system has not been modified. To complete installation at another time, please run setup again. Click Finish to exit setup."

Along with the previous window I also get an error message stating the following: "Adobe Reader 9.1 Installer Information. Error 1606. Could not access network location %APPDATA%\."
There is an Adobe Reader 9 installer file on my desktop which includes the setup files. I open the setup file to complete the installation process and I get the same error messages. I believe this problem has to do with the registry. This is very frustrating. CAN ANYONE WITH INFORMATION HELP PLEAAAAAASSSSSSSEEEEEEEEEEE!!!!!

Well, I have returned to post my results after searching for a solution for hours and finally arriving at one that was specific to my problem... and I decided to share with others experiencing a similar problem. It took a couple of hours of searching, but I followed the instructions located here http://support.microsoft.com/kb/886549/ on a Microsoft support page, and so far the problem has been fixed. Hope this helps!!!

Thanks for your thread; I had the same problem and the replies here helped me.

Not sure about the solution, but it seems this issue occurs when there is some conflict between Office files and PDF Maker. Reinstalling Microsoft Office will resolve this issue. Try it. Thanks.

Wow, thanks SimonATS... the Microsoft link worked! One of the values was wrong. I typed what Microsoft said should be there and then I was able to download Adobe with no problem. I can open Adobe documents. Thank you sooo much for posting that link. I love the internet!
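The KB 886549 fix that solved this boils down to checking the per-user shell folder paths in the registry. As a sketch of what a healthy Vista default looks like (illustrative, not an importable .reg file — the real AppData value is of type REG_EXPAND_SZ, and you should verify the exact steps against the KB article before touching the registry):

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders]
"AppData"="%USERPROFILE%\AppData\Roaming"

If AppData points at a location that no longer exists (an unplugged drive, an old network share), the installer's %APPDATA% expansion fails and Windows Installer reports error 1606.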
uv_spawn memory leak when command not found (libuv-1.8.0)

I am seeing a memory leak when I do uv_spawn for a command whose options.file doesn't exist. I know that uv_strerror will leak for unknown error codes, but this is a known error code (and even with uv_strerror commented out I still see the same leak).

More context: I am writing a lightweight remote execution program (daemon). Since I am getting a small memory leak, my code is core dumping if I hit it hard with concurrent requests (if I comment out the test case which causes uv_spawn to fail, then I am all good). I am using libuv-1.8.0.

Code

#include <stdio.h>
#include <inttypes.h>
#include <uv.h>

uv_loop_t *loop;
uv_process_t child_req;
uv_process_options_t options;

void on_exit(uv_process_t *req, int64_t exit_status, int term_signal) {
    fprintf(stderr, "Process exited with status %" PRId64 ", signal %d\n",
            exit_status, term_signal);
    uv_close((uv_handle_t*) req, NULL);
}

int main() {
    loop = uv_default_loop();

    char* args[3];
    args[0] = "/bin/no_mkdir"; // no such command
    args[1] = "test-dir";
    args[2] = NULL;

    options.exit_cb = on_exit;
    options.file = "/bin/no_mkdir"; // no such command
    options.args = args;

    int r;
    if ((r = uv_spawn(loop, &child_req, &options))) {
        fprintf(stderr, "%s\n", uv_strerror(r));
        return 1;
    } else {
        fprintf(stderr, "Launched process with ID %d\n", child_req.pid);
    }

    return uv_run(loop, UV_RUN_DEFAULT);
}

Compile

tmp ~$ gcc -g -o spawn spawn.c -luv

Valgrind ...snip...
==12179== HEAP SUMMARY:
==12179==     in use at exit: 152 bytes in 2 blocks
==12179==   total heap usage: 2 allocs, 0 frees, 152 bytes allocated
==12179==
==12179== 24 bytes in 1 blocks are definitely lost in loss record 1 of 2
==12179==    at 0x4A06A2E: malloc (vg_replace_malloc.c:270)
==12179==    by 0x4C286EB: uv_spawn (process.c:418)
==12179==    by 0x400858: main (spawn.c:28)
==12179==
==12179== 128 bytes in 1 blocks are still reachable in loss record 2 of 2
==12179==    at 0x4A06A2E: malloc (vg_replace_malloc.c:270)
==12179==    by 0x4A06BA2: realloc (vg_replace_malloc.c:662)
==12179==    by 0x4C2233A: uv__io_start (core.c:779)
==12179==    by 0x4C29260: uv_signal_init (signal.c:225)
==12179==    by 0x4C2786E: uv_loop_init (loop.c:66)
==12179==    by 0x4C2020D: uv_default_loop (uv-common.c:567)
==12179==    by 0x4007FF: main (spawn.c:16)
==12179==
==12179== LEAK SUMMARY:
==12179==    definitely lost: 24 bytes in 1 blocks
==12179==    indirectly lost: 0 bytes in 0 blocks
==12179==      possibly lost: 0 bytes in 0 blocks
==12179==    still reachable: 128 bytes in 1 blocks
==12179==         suppressed: 0 bytes in 0 blocks
==12179==
==12179== For counts of detected and suppressed errors, rerun with: -v
==12179== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 6 from 6)
no such file or directory
==12178==
..snip..

I am not worried about the uv_run leak because I am not closing it correctly (or is it a problem?).

Original Google Group post: https://groups.google.com/forum/#!topic/libuv/Mv46eXpp-YE

related #551

As I said on the mailing list, I think this is a false positive. I don't see how that function can return without freeing the pipes array. Maybe valgrind gets confused? @bnoordhuis WDYT?

Valgrind is probably complaining about the forked process. It doesn't free pipes because why bother, it's going to call execve() anyway.

I thought of that and added a free right before the execve, but it kept complaining... what gives?

Try passing --child-silent-after-fork=yes to valgrind.

@bnoordhuis yep, that did it.
Also this small patch, but I'm not sure if we want to do that since we execvp right after:

diff --git a/src/unix/process.c b/src/unix/process.c
index 571f8cd..7dc3d1d 100644
--- a/src/unix/process.c
+++ b/src/unix/process.c
@@ -376,6 +376,8 @@ static void uv__process_child_init(const uv_process_options_t* options,
     environ = options->env;
   }

+  uv__free(pipes);
+
   execvp(options->file, options->args);
   uv__write_int(error_fd, -errno);
   _exit(127);

Yeah, not much point and probably not 100% effective, either.

Ah, true that. Closing as wontfix then :-)
API 2.0 Switchover

Progress on switching over to API 2.0:

[x] add-key
[x] authorize
[x] create
[x] destroy
[x] destroy-image
[x] droplets
[x] halt
[x] help
[x] images
[x] info
[x] info-image
[x] keys
[x] password-reset
[x] rebuild
[x] regions
[x] resize
[x] restart
[x] sizes
[x] snapshot
[x] ssh
[x] start
[x] verify
[x] version
[x] wait

Everything seems to work from my testing; now it's just a matter of getting all the tests ported over.

I'm no Ruby pro, but I tried this to try out the progress so far and I get an error. Thoughts?

$ gem build tugboat.gemspec
$ sudo gem install tugboat-2.0.0.pre1.gem
$ tugboat

Error:
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:135:in `require': cannot load such file -- tugboat (LoadError)
	from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:135:in `rescue in require'
	from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:144:in `require'
	from /Library/Ruby/Gems/2.0.0/gems/tugboat-1.0.0/bin/tugboat:8:in `<top (required)>'
	from /usr/bin/tugboat:23:in `load'
	from /usr/bin/tugboat:23:in `<main>'

"This documentation is for version 1 of the DigitalOcean API. It has been deprecated and will be shutdown on Monday, November 9, 2015."

Yikes, deadline coming up fast! :scream:

yeap, 1 month left...

This is great progress. Is there an easy way to try this out for a non-Rubyist? I run this all in a Docker container and currently run "gem install tugboat", but is there a replacement command which would install this branch instead?

Created a bounty for your efforts so far: https://www.bountysource.com/issues/8836949-api-2-0-switchover

@FredrikWendt I too am ruby-less... https://github.com/pearkes/tugboat/blob/master/CONTRIBUTING.md explains a lot about what you are asking.
You're going to wind up doing something similar to the following on a Linux command line:

$ sudo apt-get install bundler git
$ cd ~/
$ git clone https://github.com/pearkes/tugboat.git
$ cd tugboat
$ git checkout api2_changes
$ bundle
$ bundle exec rake install  # this failed for me but led me to
$ sudo gem install ~/tugboat/pkg/tugboat-2.0.0.pre3.gem

I was able to tugboat authorize and create; tugboat ssh made an ugly fail when I tried to ssh without an id_rsa, but I feel set that I can make the switch on Nov 9 without too much pain.

First of all, thank you for your service to the community :smile:. Do we need to swarm this and get it working with V2? Is it just tests that we're waiting on for a release?

Hi @freedomben, yeah, it's just a matter of getting all the tests moved over to the new job fixtures, cleaning up the commits, then doing an official 2.0.0 release. I should have it done before the end of October. I've been doing it on and off over the last year, but the deadline has lit a fire and I'm cranking out the last bits of test code. If I don't get the tests finished, I'll release it with pending tests and add a caveat to the post-install just in case. We've had a bunch of people testing the alpha release and most of the commands seem OK 😊

PR here: https://github.com/pearkes/tugboat/pull/178

Only 44 failing specs to go! :smile:

Clean PR here: https://github.com/pearkes/tugboat/pull/180

Just need to figure out why the specs fail periodically on 2.0.0, then merge soon! :+1:

Fixed by #183
Hi Daniel, Ayden,

On 25.04.23 19:03, Daniel Rozsnyó wrote:
> I have an Epyc 2nd gen (Rome) system and I am getting this error:
> Enabling flash write... FCH device found but SMBus revision 0x61 does
> not match known values.

there's a good chance that the flashrom-stable fork (list in CC) works for your systems. The source code is available via Git

$ git clone -b v1.1 https://review.coreboot.org/flashrom-stable.git

or as a tarball:

Daniel, I've never looked into ebuilds; I suppose it could be easily adapted, though. If you want to build it manually, check for prerequisites.

I suggest that you make a backup first and try to confirm its consistency before any write attempt. With some luck, there are some markers visible in a hexdump. At *20000 you may see "aa 55 aa 55". And probably in the very last line some pattern with 90* e9. The latter is legacy, though, and may not be there. This is from one of my AM4 boards:

$ hexdump -C /tmp/dump | grep -E '0000 aa 55 aa 55|fff0 (90 )+.*e9'
00020000  aa 55 aa 55 00 00 00 00  00 00 00 00 00 10 02 ff
00fffff0  90 90 e9 03 e9 00 00 00  fc 00 00 00 00 00 d0 ff

After a little more (boring) testing, the v1.1 release is ready and tagged. 281 commits went into this release. About 200 of them were cherry-picks from the original master branch. After those, regular development continued, mostly adding more programmer drivers.

# What changed

* The build-system integration was almost rewritten. The Makefile looks a lot cleaner now, and Meson is ready to be used on more than just Linux.
* Two new programmer drivers were added:
  * The `dirtyjtag_spi` driver is DJTAG2 ready.
  * The `internal` programmer has a new driver for AMD's SPI100 controller now. This was tested with different chipset configurations on Pinnacle Ridge, Raven Ridge, Matisse, Vermeer, and Genoa(!) :)
* And the usual smaller refactorings and other changes.

The official tarball + signature live on flashrom.org now:

And of course, there's the Git tag `v1.1`.
# What was tested recently: * external SPI: ch347_spi, ft2232_spi, dirtyjtag_spi, buspirate_spi, serprog (w/ stm32-vserprog, SPI flash), pickit2_spi, jlink_spi, usbblaster_spi, ch341a_spi, linux_spi and * internal: SPI on Intel ICH7, ICH9 (hw&sw sequencing), PCH7, and APL, Parallel through VIA VT82C686 southbridge
Logwatch is not respecting MailFrom

I've gone through today to set up Logwatch on my server and have installed this all successfully. I've followed this guide on Digital Ocean and set the MailFrom parameter to:

MailFrom =<EMAIL_ADDRESS>

I'm using ssmtp to send emails using my Postmark App account, and it is coming through on my Postmark activity feed, but it is showing the From field being set as root.

SMTP API Error for<EMAIL_ADDRESS>Invalid 'From' address: 'root'.

Looking at the raw source of the email trying to be sent, it shows this line:

From: root

This is the command I am using to generate the send:

sudo logwatch --detail Low --mailto<EMAIL_ADDRESS>--service http --range today

Where am I going wrong, or what can I do to get it sending as<EMAIL_ADDRESS>? Postmark requires the From address to be correctly set, otherwise it won't allow the email through and returns an error.

Further details

Logwatch version: Logwatch 7.4.0 (released 03/01/11)
System: Debian 8 (Jessie)
Using sSMTP on my server to send emails from Postmark

Debug log:

Config After Command Line Parsing:
supress_ignores -> 0
pathtozcat -> zcat
html_header -> /usr/share/logwatch/default.conf/html/header.html
logdir -> /var/log
hostlimit ->
encode -> none
subject ->
mailfrom -> root
format -> html
numeric -> 0
tmpdir -> /tmp
html_wrap -> 80
pathtobzcat -> bzcat
detail -> 0
range -> yesterday
hostformat -> none
debug -> 10
output -> mail
mailer -> /usr/sbin/sendmail -t
hostname -> game
html_footer -> /usr/share/logwatch/default.conf/html/footer.html
archives -> 1
pathtocat -> cat
mailto -><EMAIL_ADDRESS>
filename ->

Can you explain what version you are using, what distro, and such details? What about a debug log with --debug=10?

@Jakuje I have added some more details; however, the debug log is too long for my PuTTY client to be able to get the whole trace. Do I really need to run it at a debug level of 10?
How can I get the whole output for you?

The start about parsing configuration should be enough (you can redirect it into a file and then copy/browse through it later). I was interested in whether the value is even correctly read from the configuration file.

@Jakuje how do I output it into a file?

logwatch --debug=10 > /tmp/log

and then

less /tmp/log

@Jakuje I have added these in now. It doesn't look like it is respecting the From field at all.

@Jakuje interestingly, if I use<EMAIL_ADDRESS>it will try to send with<EMAIL_ADDRESS>set in the From field, and if I change this to Logwatch it will try to send with the From field set as root. So it seems that it is picking up that I have changed the values but isn't respecting them.

I've changed our provider over from Postmark to Mailgun to trial if it works (Mailgun aren't strict on having the From field set exactly) and it is letting the emails through. The only issue is that Logwatch doesn't send RFC 5322 compliant emails.

After a tonne of investigation, I've tracked down the cause. Logwatch processes /usr/share/logwatch/dist.conf/logwatch.conf after processing /usr/share/logwatch/default.conf/logwatch.conf. Inside /usr/share/logwatch/dist.conf/logwatch.conf were three config lines:

mailer
TmpDir
MailFrom

It was here that MailFrom was set to root, which was causing the issues. After updating it to<EMAIL_ADDRESS>it all worked fine!

Great to hear you sorted it out. I see my logwatch is also sent from a different address than I would like. Can you file a bug report for Debian, so they can fix that?

@Jakuje The dist.conf directory is not a bug -- in fact, it's the supported way for distributions to push their own Logwatch settings (although this is poorly documented). The tutorial linked in the original question is incorrect, since the files in /usr/share/logwatch/* should not be edited by end-users.
This answer is mostly correct, but it suggests editing /usr/share/logwatch/dist.conf/logwatch.conf, which is not a "safe" way to configure logwatch. Instead, use /etc/logwatch/conf/logwatch.conf. Unfortunately the Digital Ocean article is misleading on an important point. The logwatch configuration file should be copied (see e.g. https://help.ubuntu.com/community/Logwatch) to become /etc/logwatch/conf/logwatch.conf before being edited. Provided there is a config file at the /etc location, logwatch will prioritise the /etc file over the defaults (or even ignore the defaults, I'm not sure which). This is mentioned in the comments below the article, but like you, I didn't read the comments before going ahead with implementation. That's how I finished up here! The first mistake I've seen from them, and it bit me hard. You should set your configuration inside /etc/logwatch/conf/logwatch.conf. This overrides both /usr/share/logwatch/dist.conf/logwatch.conf and /usr/share/logwatch/default.conf/logwatch.conf. From http://ftp.logwatch.org/tabs/docs/HOWTO-Customize-LogWatch.html However, Logwatch, starting with version 7.0, implements a mechanism to allow modifying the local system easier. These modifications may be needed either because the configuration of the service that writes to the system log has been altered from its default, or because the Logwatch user prefers what is reported or how it is reported by Logwatch to be different. You can customize the output of logwatch by modifying variables in the /etc/logwatch/conf directory. Default values are specified in the /usr/share/logwatch/default.conf directory. Your distribution may have set additional defaults in the /usr/share/logwatch/dist.conf directory. All the variables available are declared in the files under these directories. You can change the default values to modify how or what is displayed with logwatch.
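The takeaway in concrete form: rather than editing anything under /usr/share/logwatch, create a local override file (paths per the HOWTO quoted above; the address values here are illustrative placeholders):

# /etc/logwatch/conf/logwatch.conf
# Local overrides -- processed after default.conf and dist.conf,
# so values set here win.
MailFrom = logwatch@example.com
MailTo = admin@example.com
Output = mail
Format = html

These keys are the same ones visible in the debug dump above; because Logwatch merges this file last, a MailFrom set here takes precedence over the distribution's root default without touching any packaged files.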
how can i read the cue-list in a Wave File?

i think there was a post before on this subject: bret said something like: "interesting, yeah i was going to add some functionality for the sync callback stuff.. i was going to add ways to add and delete your own callback points, so i guess being able to list them would be useful as well."

i think i could just read the RIFF format of the Wave file:

RIFF 'WAVE' (wave file)
<fmt > (format description)
  11025 frames per sec
  11025 bytes per sec
  1 bytes per frame
  8 bits per sample
<data> (waveform data - 52862 bytes)
<cue > (cue points)
  #1: sample 23168 (playlist position 23168)
  #2: sample 30976 (playlist position 30976)
  #3: sample 35520 (playlist position 35520)
  #4: sample 38816 (playlist position 38816)
  #5: sample 42400 (playlist position 42400)
  #6: sample 44976 (playlist position 44976)
  #7: sample 48960 (playlist position 48960)

like in the RiffViewer app from:

but after reading infos on the net, and as a slow-brain, i don't get it right. maybe someone is also interested in this topic and there are some solutions.

o.k., i know the FSOUND_Stream_SetSynchCallback, and i'm using it like in the "fmod" example code. great example, by the way. 😀

my idea was: when i edit the samples in CoolEdit i can save markers within the wave file. i thought it would be nice if, when i read the wave in fmod, i could read the <plst> RIFF chunk where the <playlist> information of the markers is stored. then i could reach every marker directly, because i have the cue information when i load the file. i could jump within the sample from marker to marker and do some weird sound stuff. 😮 that's more an idea for a music application than for synchronising some OpenGL engine to the sound, or something like that. i thought it would be nice to have access to the RIFF information of the wave file. more like a wave file analysis.

well, i worked on the getting-the-cue-list problem the last few days. i have a very ugly solution on my machine, but it works !!!
😉 i use the <mmsystem.h> functions to read the IFF chunks. (you don't have to use fmod for this 😉 )

roughly, that means:

mmioOpen // open the file

the four-character code for the chunk you want to search for:
1. 'W','A','V','E' // the header
2. 'f','m','t',' ' // some other header 😉
3. 'd','a','t','a' // the music
4. 'c','u','e',' ' // the cues

mmioDescend // scan the file for the FOURCC
mmioRead // read the chunk into your struct
mmioAscend // leave the file for the next search
// you have to Descend/Ascend for each FOURCC !!!!!!!!!!!!!!
mmioClose // close the file when finished with everything

(maybe i should post some example code here ?) 😆

for everybody who is interested in the WAVE format for Windows, i recommend checking this out, because there is everything you'll need to solve this problem. and you get some funny facts 'bout the WAVE format itself. so, just "rtfm", like always.
We are the makers of casual mobile hit Love & Pies, which globally launched in September 2021 and has now found its way into the hands of millions of players around the world. We’re backed by Supercell, one of the biggest and best in the industry. We make games for everyone that are snackable yet nourishing — built to fit into your daily life, but with the emotional depth of your favourite TV show. Our team always comes first at Trailmix, and our culture is built on both high performance and high care — we strive to be the best versions of ourselves, but always within an environment based on inclusivity, respect and safety, no matter who you are or where you come from.

Senior Product Analyst

We’re looking for a Senior Product Analyst to join us on our journey of extraordinary growth. At Trailmix we’re always looking for new insights that result in tangible impacts: impact on our players, our marketing efforts and on our game & portfolio level strategy. As our Senior Product Analyst you'll report directly into the Head of Data and join our established (but growing) Data team. You’ll help the Game, Product Management and Marketing teams to make informed decisions with actionable insights. Through a strong understanding of game KPIs, monetization and marketing you’ll ensure that data-driven insights help inform every decision we make and help us achieve even more success!

The Responsibilities Include:
- Champion our analytics efforts on our award-winning game Love & Pies and future titles.
- Work closely with the game team(s) to shape the analysis scope, including identifying the key questions that data can help to solve and prioritising projects.
- Proactively deliver actionable insights to shape the game team roadmap and help the team better understand key factors that influence our KPIs, product trends and user behaviour.
- Support product launches: define the measurement plan, design experiments and deliver quantitative deep-dives to help understand feature performance and to influence product decisions through your insights.
- Help improve data quality by identifying data issues and collaborating closely with the Data Engineer to work on tooling and data infrastructure.
- Collaborate closely with the Data Scientist and Data Engineer on a daily basis and support less experienced team members through mentoring and coaching.

What we're looking for:
- A passion and a curious mind for games, always striving to improve our players' experience.
- A combination of strong product sense and analytics skills: be able to identify key questions that can be answered by data and solve problems using different analytical and statistical approaches.
- Experience in curating data analysis and science workflows, applying statistics, and understanding biases.
- Fluency in manipulating data with complex SQL queries and experience cleaning up untidy, unstructured data.
- Experience with data dashboarding tools such as Data Studio/Looker/Tableau etc.
- Solid communication skills - able to compile and translate data, numbers and analyses into actionable suggestions, present findings to non-technical audiences and influence decision making.
- Experience in collaborating with multiple functions in product development and supporting product launches end to end.
- Prior experience in Data Analyst roles within Mobile Gaming and Free to Play.
- Experience defining and scoping the data to collect from a game (across multiple games is a bonus).
- Experience with R/Python.
- Flexible working - Contributory Pension - Tax advantaged Stock options - Fully comprehensive Private Medical Care (premiums paid by Trailmix) - 28 days Holiday (including 3 company holiday days around Christmas) - Free drinks and snacks in the office - Company gifts and swag - Socials and events Trailmix was founded to make a positive impact on our players, our community, and our colleagues. We are committed to creating an inclusive culture, and fostering an environment where people can flourish is our priority. We think it’s vital that players see themselves represented in games, and in Love & Pies we showcase the beautiful diversity of life. Love & Pies represents the people we are and the people we love, and we’re always looking to add more people who want to design games for everyone.
About Eric Amptmeyer

I am a freelance internet software developer with over 16 years of experience in web development and database applications. I am originally from the Chicago area. I spent 10 years in West Lafayette, part of the time attending Purdue University. When the internet became popular in the late 1990s, I decided to switch careers from electronics to programming. I promptly moved out west to San Diego to become part of the dot-com bubble. I worked several corporate jobs; unfortunately some of them failed, for a variety of reasons (poor financial management topping the list). When the dot-com bubble finally burst in 2000, I found myself competing for jobs with many other programmers. That is when I started working for myself. I have been a full-time freelance programmer ever since.

Unlike some programmers who disappear suddenly, I have more than 10 years of experience as a freelance developer, working directly with clients. Both my email address and website are tied directly to my name. I have nothing to hide. The reason I am still in business after all these years is because my clients can trust that I will always be around to help them. I have other hobbies, but this is my full-time job. Contact me today, and "GET WHAT YOU NEED THIS TIME AROUND!"

How To Avoid Getting Scammed!

There are no guarantees in life, but you can greatly reduce the chance of getting scammed online if you do the following:

- AVOID working with people who don't have a portfolio on a domain they own. For example, something on Yahoo can be set up easily in a short period of time, and possibly link to websites they didn't even work on. Always remember it's easy to fake a portfolio.
- AVOID working with people who only have a generic email address. For example, @gmail.com, @hotmail.com, @yahoo.com, @aol.com, etc. There is no way to trace these emails without a court order, so it's easy to get ripped off.
You should be concerned if the person doesn't have a domain of their own to do business. After all, why would anyone build websites for other people if they don't even have one for themselves?

- ALWAYS insist on PayPal for the first transaction, at least. That way, you can see if they are verified. And you can file a dispute with PayPal if you think you got ripped off, and possibly get your money returned.
- ALWAYS insist on talking to the programmer over the phone before sending any payments up front, if you have any suspicions about their credibility. If someone doesn't want to give you their phone number, that should be a red flag.
- ALWAYS verify references and make sure they belong to sites they claim to have worked on. It's easy to fake a resume, so beware of cheaters who lie about their work history.

Again, this won't guarantee anything, but these are excellent tips to follow, whether you use my services or anyone else's. If you make good common-sense choices, you can easily find a reputable developer to work with remotely.

Learn more about my education. Learn more about my experience.
Where to now for the data robot? Limited options one way but wide open another

Comment Data Robotics has the great mass of business data centre computing closed off, but the small business market is wide open and waiting for Drobo-isation. We talked to Tom Buiocchi, Data Robotics CEO, and also to Paul Thackeray, the EMEA VP, to get a picture of the Drobo company just after it had announced its refreshed 8-bay business Drobo line and its new top-end 12-bay product with thin provisioning and automated tiering. The Drobo (Data ROBOt) is a unique product in terms of its feature set, which includes the so-called Beyond RAID protection, which enables users to populate the Drobo's drive bays with their own choice of 3.5-inch drives, choosing their own manufacturer and capacity levels. The user interface is a simple one based on red, blue and green indicator lights. These indicate drive health as well as capacity uptake, thus signalling when a drive update is needed to add more capacity. Lastly, the Drobo has a neat curvy-cornered black enclosure. These three features don't sound much, but we understand that the firmware behind Beyond RAID, the key Drobo attribute, took two years to produce. We also understand that Data Robotics' founder Geoff Barrall had a very particular idea of what the device would look like, its features, and how it should present them to users. He was, we believe, quite persistent and insistent that the Drobo device should match his conception. Today there is still no competing product that matches or exceeds the Drobo's functionality and feature set. After Barrall left and went on to Overland Storage, the Data Robotics board appointed Tom Buiocchi, an executive in residence at Mohr Davidow Ventures, with strong sales and marketing skills among others, as its CEO. He has helped the company to become more business-oriented.
It has a file-sharing (NAS) Drobo and an iSCSI SAN (block access) product, which have both been refreshed in the recent announcement. Now we have the 12-bay product and a new Drobo Dashboard interface, which runs on a connected host server and shows the status, health and capacity take-up of a set of Drobos. The 12-bay is a 3U rackmount unit and there is talk of a dual-controller unit coming, also of a 2.5-inch drive version which would have more drive spindles, and we understand that 4TB drives are supported with testing under way or about to start. The SSDs for the 12-slot box will be third-party ones qualified by Data Robotics, not supplied by Data Robotics. Expect mainstream SAS/SATA 3.5-inch SSDs to be on the list or, maybe, 2.5-inch ones in 3.5-inch carriers. How well has Data Robotics done? There are some 150,000 Drobo customers with fewer than 200,000 units sold.

Er, yes you can...

Yes, you can grow RAID arrays in a PC. In Linux you can, anyway, using mdadm. I've done so, so I know it works. :-) My old 500GB/disk RAID 5 array was grown from 4 to 7 drives one at a time until I decided to replace it with larger drives (hence ending up with more 500GB drives than I have bays to put them in). Now, when I upgraded to larger drives I could have swapped in the larger drives one at a time and re-shaped accordingly on the fly. (I didn't, as I had enough external drive space to back up the array's contents and restore it to the new array, which was quicker and simpler.) It is a very manual, hands-on (and thus failure-prone) process, especially if the drive bays aren't hot-swappable, but it *is* possible. I wanted the Drobo, or something like it (there seems to be little like it), because I was *bored* of doing it the manual way, frankly. But I'm not so bored of it as to spend that much.

I have been told that the 2008 numbers I guessed at were about 1/2 of what Data Robotics achieved in that calendar year.
That's because Jillian Mansolf did an amazing job in the first full year of shipments. Also, Data Robotics shipped for two and a bit quarters in 2007, from June onwards, so there was about half a year's revenue in that year too. Adjusting my numbers (assuming a $500 ASP) along these lines and guesstimating, we get:

- 2007 - $5m
- 2008 - $10m
- 2009 - $25m
- 2010 - $47.5m
- TOTAL - $87.5m

Halve the numbers for a $250 ASP. That's probably a more realistic picture.

The Drobo S is a crazy price! Almost double the price of the original Drobo - for what? eSATA, one more bay, and the option to enable double-failure protection (which then uses up that extra bay...). It makes the US$349 pricing on the original look very reasonable.

As for rolling your own using a PC chassis with lots of drive bays... that's fine except when you run out of capacity. With standard RAID software you'll either have to add a new RAID array (yes, even if you use ZFS you can't just add to an existing RAID set), or you'll have to replace all the existing drives at once, which is also going to require a second RAID array at least temporarily while you copy the data. It's the ability to expand the storage literally by pulling out one drive and pushing a new one in that makes the Drobo attractive. I've currently got 3 x 1.5TB drives in mine and it's about 70% full. By the time I need more space I'm hoping that 3TB drives will be the cheapest per byte (the 2TB drives are at the moment).
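The guesstimate above is easy to sanity-check: at a $500 ASP, the revenue figures imply a cumulative unit count that lines up with the "fewer than 200,000 units sold" mentioned earlier.

```python
# Back-of-envelope check of the revenue guesstimate above.
revenue_m = {2007: 5, 2008: 10, 2009: 25, 2010: 47.5}  # $m, assuming $500 ASP
asp = 500

total_m = sum(revenue_m.values())            # cumulative revenue in $m
implied_units = total_m * 1_000_000 / asp    # cumulative units shipped

print(f"total ${total_m}m -> {implied_units:,.0f} units at ${asp} ASP")
# Halving the ASP to $250 halves the revenue for the same unit count,
# which is the "halve the numbers" adjustment suggested above.
```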
Status: Reopened

Folders plugin is a great way to organize per-project jobs. But in the case of feature branch builds its health metric is useless: currently it shows the worst child metric, which for feature builds will very often be bad, and doesn't bring any information to the user. I propose to add another metric: copy the health from a single job inside the folder. This way I could make the folder health metric be based on the master branch build, and ignore the feature branches.

JENKINS-56903 Health metric that reports only the primary branch

I'm a Jenkins newbie, but this seemed like it might be an easy issue to pick up and contribute to. Checked out the source code from https://github.com/jenkinsci/cloudbees-folder-plugin.

mvn -version
Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T10:41:47-06:00)
Maven home: /usr/local/Cellar/maven/3.3.9/libexec
Java version: 1.8.0_60, vendor: Oracle Corporation
Java home: /Library/Java/JavaVirtualMachines/jdk1.8.0_60.jdk/Contents/Home/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "mac os x", version: "10.10.5", arch: "x86_64", family: "mac"

I attempted to build it with mvn verify and saw a few Javadoc errors with malformed tags. I fixed these, ran mvn verify again, and it succeeded. I'm now trying to run it by following some of the steps on the Jenkins Plugin Tutorial wiki page. I run mvn hpi:run and I see a SEVERE in the output logs:

SEVERE: found cycle in plugin dependencies: (root=Plugin:cloudbees-folder, deactivating all involved) Plugin:cloudbees-folder -> Plugin:matrix-auth -> Plugin:cloudbees-folder

I also see "The following plugins are deactivated because of cyclic dependencies, most likely you can resolve the issue by updating these to a newer version." with both plugins listed at http://localhost:8080/jenkins/manage. I commented out the matrix-auth dependency for now because it seems like that is being copied into the run container.
<!--<dependency>-->
<!--<groupId>org.jenkins-ci.plugins</groupId>-->
<!--<artifactId>matrix-auth</artifactId>-->
<!--<version>1.3</version>-->
<!--<scope>test</scope>-->
<!--<type>jar</type>-->
<!--</dependency>-->

This seems sort of wonky, but I'm having a hard time figuring out how to contribute here.

This is an old issue but still relevant in my opinion. I just created a pull request that implements the requested behavior: https://github.com/jenkinsci/cloudbees-folder-plugin/pull/196

This is possible with the branch api today - so this should likely be closed. The metric you need to add (and remove the others) is "Health of the primary branch of a repository". (screenshots: view of the folder / view inside the folder)

https://github.com/jenkinsci/branch-api-plugin/pull/146 / https://github.com/jenkinsci/branch-api-plugin/pull/169 And per https://github.com/jenkinsci/branch-api-plugin/pull/147 there is a Child Health metrics property for organization folders.

Now filed as https://github.com/jenkinsci/cloudbees-folder-plugin/pull/243.

Is there still a valid use case here not covered by the branch-api change? If so, this should be reopened.

As for https://github.com/jenkinsci/branch-api-plugin/pull/147, it only covers OrganizationFolders from what I understand. https://github.com/jenkinsci/branch-api-plugin/pull/146 does not cover simpler use cases, like a folder with multiple build jobs in it such as build / deploy / test. Another issue is that not all SCMs (and their respective Jenkins SCM providers) have a concept of a "primary branch". My PR in https://github.com/jenkinsci/cloudbees-folder-plugin/pull/243 would cover that. It would also be great if this could be represented as a properties element, too.

Taking a quick look, it doesn't seem like this would be too difficult to do from a contributor standpoint, but I have no knowledge around the semantics of writing an extension for Jenkins (serialization, handling input, concurrency issues).
Looking through the source code is semi-helpful, but I am still not 100% sure how I would actually write the code for this. Any tips or links on the right way to contribute this?
State of the Art Reports - Session Details: Tuesday, May 10, 2016 – 09:00 – 10:30 - Session Chairs: Renato Pajarola – University of Zurich - Directional Field Synthesis, Design, and Processing - Amir Vaxman, Marcel Campen, Olga Diamanti, Daniele Panozzo, David Bommes, Klaus Hildebrandt, Mirela Ben-Chen - Direction fields and vector fields play an increasingly important role in computer graphics and geometry processing. The synthesis of directional fields on surfaces, or other spatial domains, is a fundamental step in numerous applications, such as mesh generation, deformation, texture mapping, and many more. The wide range of applications incentivized the definition of many types of directional fields, from vector and tensor fields, over line and cross fields, to frame and vector-set fields. Depending on the application at hand, researchers have used various notions of objectives and constraints to synthesize such fields. These notions are defined in terms of fairness, feature alignment, symmetry, or field topology, to mention just a few. To facilitate these objectives, various representations, discretizations, and optimization strategies have been developed, with varying strengths and weaknesses. This report provides a systematic overview of directional field synthesis for graphics applications, the challenges it poses, and the methods developed in recent years to address these challenges. - Session Details: Tuesday, May 10, 2016 – 16:00 – 17:30 - Session Chairs: Paolo Cignoni – CNR-ISTI, Pisa - 3D Skeletons: A State-of-the-Art Report - Andrea Tagliasacchi, Thomas Delame, Michela Spagnuolo, Nina Amenta, Alexandru Telea - Given a shape, a skeleton is a thin centered structure which jointly describes the topology and the geometry of the shape. Skeletons provide an alternative to classical boundary or volumetric representations, which is especially effective for applications where one needs to reason about, and manipulate, the structure of a shape. 
These skeleton properties make them powerful tools for many types of shape analysis and processing tasks. For a given shape, several skeleton types can be defined, each having its own properties, advantages, and drawbacks. Similarly, a large number of methods exist to compute a given skeleton type, each having its own requirements, advantages, and limitations. While using skeletons for two-dimensional (2D) shapes is a relatively well-covered area, developments in the skeletonization of three-dimensional (3D) shapes make these tasks challenging for both researchers and practitioners. This survey presents an overview of 3D shape skeletonization. We start by presenting the definition and properties of various types of 3D skeletons. We propose a taxonomy of 3D skeletons which allows us to further analyze and compare them with respect to their properties. We next overview methods and techniques used to compute all described 3D skeleton types, and discuss their assumptions, advantages, and limitations. Finally, we describe several applications of 3D skeletons, which illustrate their added value for different shape analysis and processing tasks. - Session Details: Wednesday, May 11, 2016 – 09:00 – 10:30 - Session Chairs: Daniele Panozzo – NYU - Laplacian Spectral Kernels and Distances for Geometry Processing and Shape Analysis - Giuseppe Patané - In geometry processing and shape analysis, several applications have been addressed through the properties of the spectral kernels and distances, such as commute-time, biharmonic, diffusion, and wave distances. Our survey is intended to provide a background on the properties, discretization, computation, and main applications of the Laplace-Beltrami operator, the associated differential equations (e.g., harmonic equation, Laplacian eigenproblem, diffusion and wave equations), Laplacian spectral kernels and distances (e.g., commute-time, biharmonic, wave, diffusion distances).
While previous work has been focused mainly on specific applications of the aforementioned topics on surface meshes, we propose a general approach that allows us to review Laplacian kernels and distances on surfaces and volumes, and for any choice of the Laplacian weights. All the reviewed numerical schemes for the computation of the Laplacian spectral kernels and distances are discussed in terms of robustness, approximation accuracy, and computational cost, thus supporting the reader in the selection of the most appropriate method with respect to shape representation, computational resources, and target application. - Session Details: Wednesday, May 11, 2016 – 11:00 – 12:30 - Session Chairs: Luís Paulo Santos – Universidade do Minho, Braga - BRDF Representation and Acquisition - Dar’ya Guarnera, Giuseppe Claudio Guarnera, Abhijeet Ghosh, Cornelia Denk, Mashuda Glencross - Photorealistic rendering of real world environments is important in a range of different areas, including Visual Special Effects, Interior/Exterior Modelling, Architectural Modelling, Cultural Heritage, Computer Games and Automotive Design. Currently, rendering systems are able to produce photorealistic simulations of the appearance of many real-world materials. In the real world, viewer perception of objects depends on the lighting, on object/material/surface characteristics, on the way a surface interacts with the light and how the light is reflected, scattered, or absorbed by the surface, and on the impact these characteristics have on material appearance. In order to reproduce this, it is necessary to understand how materials interact with light.
The representation and acquisition of material models has thus become an active research area. This survey of the state of the art in BRDF Representation and Acquisition presents an overview of BRDF (Bidirectional Reflectance Distribution Function) models used to represent surface/material reflection characteristics, and describes current acquisition methods for the capture and rendering of photorealistic materials. - Session Details: Wednesday, May 11, 2016 – 14:00 – 15:30 - Session Chairs: Thomas Delame – Inria, Grenoble - Semi-Regular Triangle Remeshing: A Comprehensive Study - F. Payan, C. Roudet, B. Sauvage - Semi-regular triangle remeshing algorithms convert irregular surface meshes into semi-regular ones. Especially in the field of computer graphics, semi-regularity is an interesting property because it makes meshes highly suitable for multi-resolution analysis. In this paper, we survey the numerous remeshing algorithms that have been developed over the past two decades. We propose different classifications to give new and comprehensible insights into both existing methods and issues. We describe how considerable obstacles have already been overcome, and discuss promising perspectives. - Session Details: Wednesday, May 11, 2016 – 16:00 – 17:30 - Session Chairs: Rafael Bidarra – TU Delft - A Survey of Real-Time Crowd Rendering - A. Beacco, N. Pelechano, C. Andújar - In this survey we review, classify and compare existing approaches for real-time crowd rendering. We first overview character animation techniques, as they are highly tied to crowd rendering performance, and then we analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection.
Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss specific acceleration techniques for crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware. We also address other factors affecting the performance and realism of crowds, such as lighting, shadowing, clothing and variability. Finally, we provide an exhaustive comparison of the most relevant approaches in the field. - Session Details: Friday, May 13, 2016 – 09:30 – 11:00 - Session Chairs: Pere Brunet – UPC, Barcelona - Data-Driven Shape Analysis and Processing - Kai Xu, Vladimir G. Kim, Qixing Huang, Evangelos Kalogerakis - Data-driven methods serve an increasingly important role in discovering geometric, structural and semantic relationships between shapes. In contrast to traditional approaches that process shapes in isolation of each other, data-driven methods aggregate information from 3D model collections to improve the analysis, modelling and editing of shapes. Data-driven methods are also able to learn computational models that reason about properties and relationships of shapes without relying on hardcoded rules or explicitly programmed instructions. Through reviewing the literature, we provide an overview of the main concepts and components of these methods, and discuss their application to classification, segmentation, matching, reconstruction, modelling and exploration, as well as to scene analysis and synthesis. We conclude our report with ideas that can inspire future research in data-driven shape analysis and processing.
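As a concrete instance of the Laplacian spectral distances surveyed above, here is a minimal NumPy sketch of the diffusion distance computed from the eigendecomposition of a graph Laplacian. The 4-vertex path graph and the time parameter t are illustrative choices, not from any of the reports.

```python
import numpy as np

def diffusion_distances(L, t=0.5):
    """All-pairs diffusion distances from a symmetric graph Laplacian L."""
    lam, phi = np.linalg.eigh(L)     # eigenpairs; phi's columns are eigenvectors
    w = np.exp(-2.0 * t * lam)       # spectral weights e^(-2 t lambda_k)
    # d_t(i, j)^2 = sum_k w_k * (phi_k(i) - phi_k(j))^2
    diff = phi[:, None, :] - phi[None, :, :]
    return np.sqrt((w * diff ** 2).sum(axis=-1))

# 4-vertex path graph: Laplacian = degree matrix - adjacency matrix.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
D = diffusion_distances(L)
```

On this path graph the distance grows with hop count (D[0, 3] exceeds D[0, 1]), which is the behavior that makes these distances useful as shape descriptors.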
NullReferenceException when initializing MySqlDB.MySql.MySql() buried in DLL

This is not a standard NullReferenceException question, because there are no objects instantiated that this initialization relies on, only namespaces. I've been racking my brain for a couple of days on this one. The code works on another developer's machine (who is no longer here), so I have installed MySQL and imported the configuration, but I keep coming back to this exception. MySqlDB.MySQL is a namespace and MySQL() is just a method stub from the metadata. The DLL is MySqlDB.dll, which lives with the project. I have confirmed the executable by itself doesn't work on my machine either, so I know it is an environmental issue. Here's the list of other things I've tried (the application also has a dependency on SQL Server):

- install SQL Server Express
- uninstall SQL Server Express
- install SQL Server
- install Excel 2016 version for MySQL plugin for Excel that was missing
- moved executable from developer's machine to attempt running outside of Visual Studio - confirmed environmental issue
- checked .NET version
- remove and add MySQL DLL
- reinstall MySQL
- copied MySQL DLLs from GAC on developer's machine
- export and import MySQL DB configuration

var mySql = new MySqlDB.MySql.MySql();

at MySqlDB.MySql.MySql.c909228df886bcca197ed06f91ce6af71()
at MySqlDB.MySql.MySql..ctor()
at BCS_UI.App.Register() in c:\GitCode\Windows_UI_Beta\UI\UI\App.xaml.cs:line 47
at BCS_UI.App.Application_Startup(Object sender, StartupEventArgs e) in c:\GitCode\Windows_UI_Beta\UI\UI\App.xaml.cs:line 23
at System.Windows.Application.OnStartup(StartupEventArgs e)
at System.Windows.Application.<.ctor>b__1_0(Object unused)
at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Int32 numArgs)
at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Int32 numArgs, Delegate catchHandler)

"I know it is an environmental issue" - Did you check whether both machines are running 32-bit or 64-bit, and that the matching software is installed?

Are you asking if both machines are running the same version of the software, or do you mean the 32/64-bit Windows version?

Software versions and Windows versions.

Good question. Machines are all x64 and the target CPU was set to "Any" until I changed it to x86 the other day in an attempt to fix this issue. Don't ask me why. Will try to reproduce by setting them to x64.
November 10, 1987

TO: All people interested in the FORTH computer language.
FROM: Jack W. Brown, BCIT Mathematics Department, 3700 Willingdon Avenue, Burnaby, B.C. V5G 3H2. Phone 434-5734 local 5401. BBS Phone 434-5886.

If you would like to know more about the FORTH computer language, please read on; otherwise perhaps you might be kind enough to pass this information on to a friend or colleague. I am offering, through the Mathematics Department (BCIT Continuing Education), two courses covering various aspects of the FORTH computer language. These courses will cover the full range of skills required to use FORTH to solve substantial real-world problems. FORTH is the rising star in a group of high-tech programming languages that include LISP and C. The courses are currently titled:

MATH 495 Introduction to the FORTH Programming Language. Starting date: Monday September 14, 1987.
MATH 496 Inside FORTH 83. Starting date: Wednesday September 17, 1987.

If you would like to learn FORTH on your own, the file LEARN4TH.ARC in file area #2 has a modified version of the Laxen and Perry F83 with a super editor. Also included are the actual notes and examples used in MATH 495, Introduction to the FORTH Programming Language. Get a copy of Starting FORTH by Brodie and enjoy. The best way to learn FORTH is from a live teacher!! I have 6 years of experience with the FORTH language on 6502, 8086, and 68000 CPUs. I would love to have you attend my FORTH classes at B.C.I.T.

Below is a directory of the LEARN4TH.ARC file:
- On-line HELP system, see SAMPLE1.
- Examples and notes for lecture #1: Words, In and Out, Simple programs, VEDITor, HELP system.
- Examples for lecture #2: Stack manipulation, Area & volume calculations, Tables.
- Examples for lecture #3: Number display, Logicals and conditionals, Numeric input.
- Examples for lecture #4: Interval logic, return stack, variables, constants, arrays.
- Examples for lecture #5:
Fixed and floating point, fractions, scaling, rounding.
- There is no #6. This was a test night.
- Examples for lecture #7: Strings, IF ELSE THEN, BEGIN WHILE REPEAT, CASE statement.
- Examples for lecture #8: Dictionary structure, vocabularies, recursion.
- Examples for lecture #9: Compiler extension… CREATE DOES> line editor.
- Examples for lecture #10: Making use of virtual memory.
- Examples for lecture #11: Multi-tasking.
- There is no #12.
- MID TERM EXAM: Sample midterm tests given on the 6th night. No sample final exams are provided.

Here is some interesting information on the FORTH language.

WHERE IS FORTH USED? FORTH is used in video games, operating systems, real-time process control, word processing, spreadsheet programs, business packages, database management systems, robotics control, high-speed data acquisition, artificial intelligence programs, and for engineering and scientific calculations.

WHY IS FORTH USED? The reason is best stated by Charles H. Moore, the inventor of FORTH: "FORTH provides a natural means of communication between man and the smart machines he is surrounding himself with . . . . I cannot imagine a better language for writing programs, expressing algorithms, or understanding computers." From the foreword of Starting FORTH by Brodie.

WHAT IS FORTH?
- FORTH is conversational like APL, LISP, or BASIC.
- FORTH is compilable, with many high-level structured programming constructs such as IF … ELSE … THEN, BEGIN … WHILE … REPEAT, BEGIN … UNTIL, DO … LOOP, CASE … ENDCASE.
- FORTH exhibits performance very close to that of machine-coded program equivalents, yet it is a high-level language.
- FORTH is completely written in itself, and you are given the complete source code for the language.
- FORTH places no barriers among combinations of system, compiler, or application code.
- FORTH includes an integrated, user-controlled virtual memory system with dynamic allocation of resources for both program source text and data files.
- FORTH includes an integrated full-feature machine code assembler with built-in high-level structured programming constructs, like those mentioned above, for use in the assembler!
- FORTH can be extended to include new commands written in terms of any previously existing commands, or in machine code using the integrated assembler.
- FORTH permits easy user extension of existing data types and data structures.
- FORTH is easily debugged: application program modules can be incrementally compiled, tested, and debugged interpretively.
- FORTH spans the power of most other programming languages, including assembly language, FORTRAN, Pascal, C, and LISP.
- FORTH is transportable: applications can be run easily on many different micro-computers, even those with different CPUs.
- FORTH produces completely relocatable object modules with code more compact than native machine language assembly code.

STILL NOT CONVINCED? Take any of the above FORTH courses and receive a copy of Laxen & Perry's public domain FORTH83 system (free!). You have nothing to lose!
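To give a flavour of the stack model the course teaches, here is a toy Forth-style evaluator written in Python. It is a sketch, not real Forth: the word set is a tiny illustrative subset (numbers push themselves, "." pops and prints the top of the stack).

```python
# Minimal Forth-style stack evaluator: numbers push themselves,
# words operate on the top of the stack, "." pops and records output.

def forth_eval(source, stack=None):
    stack = [] if stack is None else stack
    printed = []
    for word in source.split():
        if word == "+":
            stack.append(stack.pop() + stack.pop())
        elif word == "*":
            stack.append(stack.pop() * stack.pop())
        elif word == "dup":
            stack.append(stack[-1])          # duplicate top of stack
        elif word == "drop":
            stack.pop()                      # discard top of stack
        elif word == "swap":
            top, under = stack.pop(), stack.pop()
            stack.extend([top, under])       # exchange top two items
        elif word == ".":
            printed.append(stack.pop())      # pop and "print"
        else:
            stack.append(int(word))          # anything else is a number
    return stack, printed

# "2 3 + ." computes 2+3 and prints the result, Forth-style.
stack, printed = forth_eval("2 3 + .")
```

Real Forth adds a compiling dictionary, control structures, and much more, but this postfix stack discipline is the core idea the lectures start from.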
[SPARK-9805] [MLLIB] [PYTHON] [STREAMING] Added _ssc_wait_checked for ml streaming pyspark tests Recently, PySpark ML streaming tests have been flaky, most likely because of the batches not being processed in time. Proposal: Replace the use of _ssc_wait (which waits for a fixed amount of time) with a method which waits for a fixed amount of time but can terminate early based on a termination condition method. With this, we can extend the waiting period (to make tests less flaky) but also stop early when possible (making tests faster on average, which I verified locally). CC: @mengxr @tdas @freeman-lab If this looks reasonable, I'll update the rest of the uses of "ssc_wait" Merged build triggered. Merged build started. Merged build finished. Test FAILed. Merged build triggered. Merged build started. Test build #40354 has started for PR 8087 at commit 3fb7c0c. Test build #40354 has finished for PR 8087 at commit 3fb7c0c. This patch fails Python style tests. This patch merges cleanly. This patch adds no public classes. Merged build finished. Test FAILed. Merged build triggered. Merged build started. Test build #40357 has started for PR 8087 at commit ef49b2b. Test build #40357 has finished for PR 8087 at commit ef49b2b. This patch fails PySpark unit tests. This patch merges cleanly. This patch adds no public classes. Merged build finished. Test FAILed. Merged build triggered. Merged build started. Merged build finished. Test FAILed. Jenkins test this please Merged build triggered. Merged build started. Test build #40495 has started for PR 8087 at commit ff1ee1b. Test build #40495 has finished for PR 8087 at commit ff1ee1b. This patch fails PySpark unit tests. This patch merges cleanly. This patch adds no public classes. Merged build finished. Test FAILed. Merged build triggered. Merged build started. Test build #40502 has started for PR 8087 at commit afbe8b1. Test build #40502 has finished for PR 8087 at commit afbe8b1. This patch passes all tests. 
This patch merges cleanly. This patch adds no public classes. Merged build finished. Test PASSed.

Yay, it passed! If this looks reasonable, I'll make similar changes for the other streaming ML PySpark tests.

Nice! I think this is a solid strategy. Maybe in the next round of changes make that 20.0, which will presumably be used throughout, a var shared by all the tests?

I think you can make a generic equivalent of ScalaTest's eventually in Python. That takes care of failing with a timeout and providing a meaningful last error message.

def eventually(timeout, condition, errorMessage)
# condition: function that must return boolean
# errorMessage: can be a string, or a function that returns a string; it is invoked if there is a timeout.

That solves the problem I alluded to earlier about a possible race condition.

@tdas Sure, I can do that. I don't think the race condition matters for ML tests (or if it does, then the test was written incorrectly), but that does clarify semantics. I guess I'll have to duplicate the check code no matter what to get nice error messages. Actually, I'm going to switch the design to instead:
- accept a single check method which will use assertions
- catch AssertionErrors when deciding whether we can terminate
- throw the last caught AssertionError upon timeout

That will allow us to (a) avoid copying the set of checks and (b) take advantage of the many assertion variants, including approximate equality. AFAIK, the overhead of catching errors should be negligible compared to the time for the tests. (Correct me if I'm wrong here.)

Merged build triggered. Merged build started. Test build #40578 has started for PR 8087 at commit 48f43c8. Test build #40578 has finished for PR 8087 at commit 48f43c8. This patch fails Python style tests. This patch merges cleanly. This patch adds no public classes. Merged build finished. Test FAILed. Merged build triggered. Merged build started. Merged build finished. Test FAILed.
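The assertion-catching design discussed in the thread can be sketched as follows. This is illustrative, not the exact code that was merged: the function name, defaults, and the two modes (plain boolean condition vs. assertion-based check) mirror the description above.

```python
import time

def eventually(condition, timeout=20.0, catch_assertions=False):
    """Poll `condition` until it succeeds or `timeout` seconds elapse.

    Plain mode: success means condition() returns a truthy value.
    With catch_assertions=True, `condition` may fail with assert
    statements; the last AssertionError is re-raised on timeout, so
    the failure message stays meaningful.
    """
    deadline = time.time() + timeout
    last_error = None
    while time.time() < deadline:
        if catch_assertions:
            try:
                condition()          # raises AssertionError until ready
                return
            except AssertionError as e:
                last_error = e
        elif condition():
            return
        time.sleep(0.01)             # back off briefly between polls
    if last_error is not None:
        raise last_error             # most informative failure we saw
    raise AssertionError("Test timed out after %g seconds" % timeout)
```

Because the wait stops as soon as the check passes, tests can use a generous timeout without paying for it on every run, which is exactly the flakiness-vs-speed trade-off the PR is about.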
Jenkins test this please Merged build triggered. Merged build started. Merged build finished. Test FAILed. What if condition requires at least one batch to work correctly? This is not the case for streaming ML algorithms, but I'm not sure for other streaming unit tests. Test build #1474 has started for PR 8087 at commit 3717fc4. Yeah, I should document that. I made sure to make condition() work for those cases (e.g., checking result array length instead of the values in the result array which might not yet exist). Merged build triggered. Merged build started. Test build #40598 has started for PR 8087 at commit 5e49327. Test build #40598 has finished for PR 8087 at commit 5e49327. This patch fails PySpark unit tests. This patch merges cleanly. This patch adds no public classes. Merged build finished. Test FAILed. Test build #1474 has finished for PR 8087 at commit 3717fc4. This patch fails PySpark unit tests. This patch merges cleanly. This patch adds no public classes. Working on improvements... OK everyone, I think that should fix things...but we'll wait and see. I changed the logic of eventually to support the 2 types of tests: ones which have a simple condition to check and cannot stop early, and ones which can stop early if all batches have been processed. Merged build triggered. Merged build started. Test build #40678 has started for PR 8087 at commit 002e838. Test build #40678 has finished for PR 8087 at commit 002e838. This patch fails Python style tests. This patch merges cleanly. This patch adds no public classes. Merged build finished. Test FAILed. Merged build triggered. Merged build started. Test build #40688 has started for PR 8087 at commit 2897833. Test build #40688 has finished for PR 8087 at commit 2897833. This patch passes all tests. This patch merges cleanly. This patch adds no public classes. Merged build finished. Test PASSed. LGTM. @tdas Do you want to make a final pass? But yeah @tdas I'll wait for your final OK Merged build triggered. 
Merged build started. Test build #40816 has started for PR 8087 at commit a4c3f1e. Test build #40816 has finished for PR 8087 at commit a4c3f1e. This patch passes all tests. This patch merges cleanly. This patch adds no public classes. Merged build finished. Test PASSed. LGTM! OK, I'll merge this with master and branch-1.5 then. Thanks for reviewing, everyone!
Which is the better framework for enterprise-level JS programming - jQuery or Prototype - and why?

I'm trying to choose a JS framework that can stand the test of time (still usable and scalable in 5+ years), with a good, solid code foundation for other programmers to code their own extensions or projects (from complex animation to multi-threading Ajax). These are the things I'm comparing:

- Extensibility
- Scalability
- Consistent and logical syntax
- Performance
- Ajax support
- Animation support
- Nearly bug-free library update history
- Enterprise adoption examples

Maybe there are other points I should consider? Others pointed out that there are some arguments here, but most don't apply from the enterprise standpoint because they are short-term benefits, such as:

- Tons of plugins
- Bigger momentum

You might want to consider changing your question to something that can actually be answered rather than opined on, or set it to community wiki mode. I'm afraid that your question will be closed as subjective and argumentative. Plus, check out the lengthy discussions on the topic already done: http://stackoverflow.com/search?q=prototype+jquery

Thanks. I changed my question to be more open-ended. And I searched but couldn't find a reference on the enterprise perspective.

Hmm. From an enterprise point of view, I'd have a completely different set: expected lifetime of the library; learnability; availability of programmers; integration possibilities; quality of plug-in code. I'd also take a good look at where and how JavaScript is expected to be used in the company, and how that is going to change. And in an enterprise setting, you should present this as part of a Technology Roadmap. In that, you take a look at the long-term customer needs, and how they should reflect in technology developments and choices.

Thanks. Your answer is very insightful. Your points are very good.
Assuming this framework is to withstand the test of time and should be scalable to do anything (from complex animation to multi-threading Ajax), what's your take on jQuery and Prototype, or maybe another library?

Well, I'm not looking at it from an enterprise point of view. I am switching from Scriptaculous to jQuery+RaphaelJS. But I'm agile, and can switch to something else when needed (Clamato?)

I like this answer better than mine... very good criteria (which I think jQuery meets just fine)... Envy!

We are definitely not agile. Why did you switch? What are your thoughts, after the fact?

I'm using technology that makes it possible for me to make a difference in a small group of developers. That means Pharo Smalltalk and Seaside, on a Gemstone object database. The lead developers of Seaside moved to jQuery for new development. My existing code is well-refactored, so switching is no big deal.

Strictly from a long-term support and momentum perspective: major vendors such as Microsoft, Google, Dell, Wordpress, Nokia, etc. have adopted jQuery. Take from this what you may when considering if it's Enterprise-ready. Beyond this, consider that it is likely the fastest-growing framework and has a huge following. These two facts should drive it forward with continual improvements and support. Even long-term human resource support to develop in it should be there, given the number of jQuery developers out there...

Thanks! I do think both libraries can do all of the above. But which is better in comparison? The bigger following or plugin support are not as important in this case.

Feature-specific comparison: http://en.wikipedia.org/wiki/Comparison_of_JavaScript_frameworks

@JONYC Can you elaborate on your enterprise-level project? It's pretty tough for anyone to tell you exactly which one is 'better' as they both suit certain uses... especially if following or plugin support is not important (which I personally think would be for large-scale systems)...

It's not for a particular project.
We want to choose a framework that is solid (code-wise), extensible, and relevant for years to come. That's why plugin support isn't much of a concern; plugins come and go. But I need a good code base whose architecture will allow programmers to handle whatever comes up in the future.

In that case I'd definitely recommend jQuery. Major corporations would not adopt a framework that is going to die next year. The big boom in JavaScript frameworks is jQuery, without a doubt. It's easy, has good performance (maybe not the best, but still very good) and will be around for a long time.

$().launch_photon_torpedos()

$('#trollingAnswer').each(function() { $.TrollTools.ResistUrgeToComment($(this), false) });

I make no apologies for making fun of vague laundry-list requirements that unthinking MBA dinosaurs put together to give the illusion of having work to do, and sum up with that most self-important term, “enterprise”. As if proper, big, important companies have proper, big, important needs that could only possibly be satisfied by one particular mature JavaScript framework, as opposed to another particular mature JavaScript framework. If you think “Enterprise level JS programming” actually means anything, maybe you should leave the programming to actual programmers. That said, good answers.

+1, funny. That should make a nice zero for your rep. We should leave it at that. Aw, thank you! I should really now write a photon torpedoes plugin for jQuery.
Last modified: June 6th, 2021

Note: This documentation is for the Dynatrace India ACE Services team. Please disregard it if you are not a Dynatrace person.

This document shows how to set up a lab in Azure DevTest Labs. In this document you can find how to set up labs for training; labs are classified into 2 types:

- Admin Training
- Power User Training

Please check the contents for navigating through the document. It contains several sections and a step-by-step guide to creating lab resources for training purposes. For creating labs, using Azure or GCP is recommended.

Steps to be followed:

- Access to the Azure tenant of Dynatrace (at least one subscription is required)
- Admin access to the Transform Lab (https://lab.dt-transform.com)
- Dynatrace email ID
- Participants' email addresses, for sending all the instructions later on
- Collect the participants' details.
- Create a GitHub issue to announce the activity and to track each change.
- Create at least 2 VMs for each participant.
- Create Dynatrace environments for the users (depending on the type of user).
- Email the resource details to each participant from the list you have collected from the customer.

Before making any changes, it is best practice to create a GitHub issue with the details of the change. Please follow this to keep track of your work and to help us improve. In the future, automation will be added for these issues. In the same repository, you need to create 2 issues: one regarding access requests for the Dynatrace environment, and another for creating VMs in Azure.

- Log in to GitHub.
- Visit our lab repository here. Go to Issues and click on New issue as shown below. Use the predefined templates to create an issue.

Note: Use the Access Request template for environment access and the Resource Request template for creating virtual machines.
Follow the instructions in the template and create the issues.

- Navigate to Azure DevTest Labs and log in using your Dynatrace account (if you don't have access to an Azure account, please contact firstname.lastname@example.org for access).
- Click on Add in the top-left panel and the Create DevTest Lab blade will open. Fill in the Basic details with the details as shown in the screenshot below. (You can fill in your own details; for example, instead of BT-Training you can use any other customer name.)
- Go to the next step, Auto-shutdown, by clicking on Next: Auto-shutdown on the Basic details page. (Please don't click on Review + create, as this will proceed directly with preconfigured settings.) On this page, leave the default settings and leave Auto-shutdown off as shown in the screenshot.
- Click Next: Networking and this will take you to the Networking step. Please select indiaservices_transform_lab-vnet from the Virtual network drop-down. Keep Subnet at the default. Under Network isolation you can either isolate lab instances or go with non-isolation. For more details about network isolation in DevTest Labs, please see this documentation.
- Go to Tags and enter the tags as shown in the screenshot below. In the github-issue tag, please specify the issue number created earlier, before creating the lab.
- Once the tags are added, click on Review + create, which will take you to the summary page. Click Create. Your lab will be created in 5-10 minutes. You can track the progress on the deployment overview page. Once it is created, you will get a success message and a button to navigate to the resource.

- Make sure you know the configuration of the VMs that you need, along with their OS, as you specified in the GitHub issue earlier.
- Go to Azure DevTest Labs and find the lab you created in the previous section. Click on the lab to open its Overview page.
- Click Add; the Base selector page will open, and here, as per the requirement, search for Windows 10 Pro.
- Select the latest build release of Windows 10 Pro.
In this case, we need to select Windows 10 Pro, Version 21H1. Once you select the option, it will take you to the VM creation page, where you need to enter the name of the VM and an admin user name; all other VMs can be accessed with those credentials. Please use the D4s_v3 size, and after selecting the size please use Standard SSD instead of Standard HDD. You can go to Advanced settings and leave the Virtual network and Subnet selectors as they are. If you want to make a virtual machine publicly accessible, you need to select Public IP address. Set the Expiration date to 15 days from now and enter the number of instances to create; in our case it is 20, so I am using 20 as the number.

- Go back to Basic Settings and click Create. This will create 20 VMs for us to use for training. It will take around 15 to 20 minutes to create all the machines, so sit back and relax for some time.

Now, if you look at the screenshot above, all VMs are created with the Auto-shutdown rule enabled. If we leave this enabled, it will turn off each VM every day at 19:00 hours. To disable it, go to Configuration & policies > Auto shutdown policy, select User has no control over the schedule set by lab administrator, and save it. This step ensures that you can set the auto-shutdown policy for all the VMs at once. Then, go to Auto-shutdown and turn it off on all VMs in the lab, as shown in the screenshot below.

Finally, all VMs are created and ready for us to start working on our task. But all machines have the same credentials, which we provided at the start. So, if you want to create specific user credentials for each machine, we need to go into each VM and create a user with the participant's name and a password. As shown in the screenshot above, navigate to Virtual machines, select each virtual machine, and select the Reset password option that is available in the options blade of the VM.
- Once passwords are reset or users are created for each VM, download the RDP files for each VM and store them in a folder. Mail the RDP files and user credentials to each user's individual email.

Note: Assign 2 VMs the same credentials, because we are providing 2 VMs for every user.

- Create new environments
- Create user groups
- Create user accounts

- Once you have collected the names of the users/participants, proceed to log in to the Transform Lab. Make sure you have access to the cluster management console (CMC).
- Once you log in, navigate to Add another environment and name the environment in this format: BT. Create an environment for each user.
- To create a user group in the CMC, go to User authentication from the navigation menu, click on Add new group, and add a group with full access to its respective environment. Create one group per user and assign that group admin access to the environment you created above. Create a group for each user and assign their dedicated environment to them.
- Go to the CMC and click Add new user. Fill in the account details, such as First Name, Last Name, Email, and Username. Save the details, and on the next page make sure you assign the user group to the user. If not, the user will not be able to access the environment.

After creating the VMs and Dynatrace environments for the users, please email those RDP files and the passwords you created, along with the usernames. Meanwhile, if you need any help, feel free to contact email@example.com at any time.
Last Friday, 18/09/2015, Symfony Live London 2015 took place at the QEII Conference Centre, close to the Houses of Parliament and Westminster Abbey. The attendees had the chance to learn through different tracks, network, meet the event sponsors, and learn more about the future plans of the Symfony framework. Here are the highlights that deserve your attention.

The event kicked off with an inspiring keynote by Seb Lee-Delisle. It was really inspiring to see what Seb did with particles, Lunar Trails, Cluppy bird and the laser projects.

Building a Pyramid: Symfony Testing Strategies

The last few years have seen a huge adoption of testing practices, and an explosion of different testing tools, in the PHP space. The difficulties come when we have to choose which tools to use, in what combinations, and how to apply them to existing codebases. The talk looked at what tools are available, what their strengths are, how to decide which set of tools to use for new or legacy projects, and when to prioritise decoupling and testability over the convenience we get from our frameworks.

Puli, a new PHP toolkit, is a step towards making this possible. With Puli, Composer packages become “intelligent”. Enable any package in any project (plug 'n' play) simply by running “composer install”, independent of your framework. Are you ready for the future of PHP?

Using Doctrine 2 can be a very rewarding experience, extremely frustrating, or anything in between. Being a happy Doctrine 2 user requires attention and a willingness to compromise. The talk showed how to use Doctrine defensively, common pitfalls that hurt maintainability, and when to avoid Doctrine altogether.

Commands, events, queries: three types of messages that travel through your application. Some originate from the web, some from the command line. Your application sends some of them to a database, or a message queue. What is the ideal infrastructure for an application to support this ongoing stream of messages?
What kind of architectural design fits best?

Real-time is becoming the lifeblood of applications. Facebook, Twitter, Uber, Google Docs and many more apps have raised user expectations to demand real-time features. Features such as notifications, activity streams, real-time data visualisations, chat, or collaborative experiences keep users instantly up to date and enable them to work much more effectively. So, how do you build these sorts of features with Symfony?

Enter Blackfire. Blackfire is a PHP profiler that simplifies the profiling of an app as much as possible.

The Path to Symfony 3.0

In this session Fabien shared his vision for Symfony 3.0. From conception to launch and implementation, you'll get the ins and outs straight from Symfony's founder himself.
Please write detailed job duties for the job title below.

Job Duties: Job Title – ".Net Developer"

- Develop applications using C#.NET, ASP.NET MVC, ORM frameworks like Entity Framework, LINQ, and SQL Server.
- Develop stored procedures and triggers in SQL to implement business rules and the scenarios involved in determining whether the service is healthy or not.
- Test the functionality by accessing these entities from the DAL (Data Access Layer) of the solution.
- Use the ASP.NET MVC framework to support dependency injection, injecting objects into a class instead of relying on the class to create the objects itself.
- Develop pages and user interfaces for the Heatmap reports.
- Give an interim demo of the software and tools to the client at the end of each sprint. Collect and incorporate feedback from the client.
- Responsible for timely resolution of critical production system issues using technical and problem-solving expertise.
- Give root-cause analyses, suggest fixes, implement hot fixes, and do patch releases.
- Conduct peer code reviews using CodeFlow and Microsoft FxCop; suggest areas of improvement for reusability and performance optimization.
- Suggest feasible and scalable design solutions where applicable.
- Review developers' unit test cases and provide feedback to ensure complete code coverage and coverage of business scenarios.
- Follow the end-to-end change-management process for all releases: creating unit test cases for all releases (new enhancements/bug fixes for the existing system) and coordinating with business teams during UAT testing.
- Analyze and triage daily the defects found during unit and UAT testing, apply fixes, and coordinate retesting to close out all open defects.
- Configure automated builds (continuous integration and continuous delivery) using CoreXT, VSO, and Git.
- Coordinate and interact with client representatives on the design, development, testing, and deployment of applications.
- Assess new technologies, tools, and programming languages for each new requirement/module.
- Adopt new technologies to improve the performance of the existing tools.
- Create relevant artifacts such as architecture, data model, and database design, plus proof-of-concept development and demos for the client.
- Use SQL Server to design the database and to perform CRUD operations on the data.
- Consume WCF services to connect to the remote server and to retrieve data from other platforms.
- Enhance the existing applications by analyzing business objectives and making modifications for improvement.
- Prepare technical documents for the purpose of documenting activities, providing a written reference, and/or conveying information.
- Use TFS (Team Foundation Server) to manage the projects and assigned tasks, following the Agile Scrum methodology.
CONNECT REIMAGINE CONFERENCE GUIDE

CONNECT REIMAGINE will be utilizing the Hopin platform. Below you will find frequently asked questions, as well as information on how to get the best experience by familiarizing yourself with all that CONNECT REIMAGINE has to offer! Women Who Code is dedicated to providing an empowering experience for everyone who participates in or supports our community. Our events are intended to inspire women to excel in technology careers, and anyone who is there for this purpose is welcome. Because we value the safety and security of our members and strive to have an inclusive community, we do not tolerate harassment of members or event participants in any form.

General Hopin Information

“How can I prepare before the conference?” Create your profile by adding a profile picture as your avatar, your title and company in the headline, a short bio, and contact information including social media handles. Your profile is located at the top right of your screen. Having your profile filled out will make it easier to connect and share information with sponsors and other participants during the conference, especially in the Networking and Expo sections. If you prefer to opt out of sharing your information with other attendees, feel free to leave out a photo, turn off your video, and use your initials instead of your name.

“What browser should I use to join CONNECT REIMAGINE?” Hopin recommends using either Google Chrome or Mozilla Firefox for the best experience with their platform.

“What talks and sessions will be available at CONNECT REIMAGINE?” Look through the speakers list, and begin to see what sessions and talks you might want to attend throughout both days. We have over 50 speakers to choose from, with sessions on career advancement, skills building, and new technology, and more will be added leading up to the conference.
“Will there be updates on social media?” Yes, follow us on Twitter and Instagram for updates. Feel free to share about the conference before, during, and after using #CONNECTREIMAGINE. You can also watch past conference sessions on our YouTube channel; subscribe!

Day of Conference

“Is there an easy way to move from area to area in Hopin?” Yes! There's a simple icon menu on the left-hand side of your screen where you can move to the different areas of the event: Reception, Stage, Sessions, Networking, Expo. Check the schedule of all sessions in Reception, and find what is happening now! The Stage is where conference-wide talks will take place, most notably the opening and closing keynotes. In the Sessions area, you will find all lightning talks, technical talks, and workshops. You can filter and search for sessions by topics you're interested in attending. Sessions will have multiple speakers or moderators; double-clicking on a specific screen will allow you to enlarge it and bring it into focus. While watching the presentation, you can type comments and questions into the Session chat for everyone to see, type questions for the presenter in the Q&A, or hide the Chat/Polls/People section to focus on the presentation.

“How can I find the sessions happening now?” Visit the Reception area to find the schedule. When choosing your sessions, you can filter them by the topics and stacks you're interested in, and you can also enter the talks you're looking for into the search bar.

“I missed the talk I really wanted to see! Will CONNECT Digital be recorded?” Yes! All talks will be on the Women Who Code YouTube channel two weeks after the event.

“Is there a limit on the number of attendees in a particular session?” Yes. The Stage can accommodate all attendees, and each Session can accommodate 500 attendees.
If you are unable to view a Session because it has hit the limit of attendees, please wait and try again in a few minutes.

“Will there be an opportunity to learn about sponsors?” Yes! Visit the Expo area, where each of our partners will have a virtual booth. Click on the company booth you're interested in to engage with employees from the company and learn about the culture and potential career opportunities. You can also schedule a meeting or leave your information to be contacted directly; be sure your profile is updated with contact information.

“I'm looking for a job. Can I upload my resume to share with companies?” Yes! Please upload your resume here. It is also recommended that you put your LinkedIn profile link and a link to your resume (if you have it) in the website area of your profile for quick and easy access for recruiters. Our job board is always available and updated as a resource for job seekers.

“Will there be opportunities to Network?” Yes! There will be the chance to make CONNECTions with fellow attendees by clicking on the ‘Networking’ button between sessions. You will be randomly paired with another participant and chat for 3 minutes. You have the option to exchange info with the click of a button so you can continue to Network post-CONNECT Digital! To exchange info during your networking session, hit the “CONNECT” button at the bottom of the screen. You'll find the connections who mutually opt in on your dashboard. You can also send an invitation for a video call, schedule a meeting, or direct message someone you connect with during talks and sessions.

“Do you have a list of suggested ice-breaker questions we can use?” Yes! Here's a list of some ice-breaker questions. Feel free to use your own!

Chat allows you to send messages throughout the event. The general chat is visible to all participants in the conference; you can @ someone to speak directly to them in the chat. The Expo booths and Sessions will have their own individual chats.
You will occasionally see important messages from organizers popping up in the chat. The Polls tab is where we will be asking you for your feedback during the event using live polls. The People tab lists all participants that have joined the event. You can send a direct message to any participant by clicking on their name.

“What if I have a question about CONNECT REIMAGINE during the event?” If at any point during the day you have a question or a concern about CONNECT REIMAGINE, please email firstname.lastname@example.org. We have a help desk in the Expo that will run for the entirety of the event, so feel free to drop your question into the chat and a member of our team will be happy to help you!

“How do I get my saved connections after the conference?” You can log into Hopin after the conference to see your connections and follow up.

“How can I send feedback about CONNECT REIMAGINE?” An email will be sent to attendees after the conference to request feedback and help us plan for the next conference. You can also email email@example.com.

“How soon will conference talks and sessions be available on YouTube?” You can expect to see videos uploaded to YouTube within two weeks after the conference.

“If I'm not a member, how can I join WWCode?” Join us for free! https://membership.womenwhocode.com/email

“How do I enter the raffle?” To enter the raffle, fill out and submit the raffle form here. We will contact you directly if you win.
How to describe the relationship with the writer of a recommendation letter? I am applying for tenure-track/postdoc positions, and I asked two professors to write me research letters. Some job applications ask me to describe my relationship with them, and I am not sure what the correct words are. The first is a professor at the university where I obtained my PhD; they are also an expert in the field. The second is a world-class expert in my field. I know them personally, as we have met at conferences over the years and I have visited them occasionally. I have not collaborated with either of them. Any suggestions?

What's wrong with the descriptions you just wrote? The input area is so small; I suppose it fits only one or two words. @JeffE Bear in mind that the online job application site is often designed to handle everything from secretaries through janitors through professors. Don't overthink this kind of thing too much. If you only have one or two words, and you don't think "senior colleague" is sufficiently accurate/precise, I suggest "see letter".

Neither seems like an unusual "relationship" at all, so I don't think there should be any issue describing them just as you did here. If there is a drop-down list, you can select whatever is most appropriate (presumably there are options amounting to "colleague at my current department" and "collaborator"; I think it counts as "collaborating" if you have visited them occasionally, even if you happened to not have published a paper yet). If there is a freetext field, you can just write what you wrote here. +1, but in mathematics visiting someone does not count as "collaborating". @DanRomik Hmm, I see. What would you call somebody you have visited multiple times, then, without actually collaborating? "Scientific acquaintance" :)? That said, I don't really think it matters what OP puts into that text field.
Presumably the letter writer will state in their letter how they know OP, and nobody will look at the content of the text field. I would call them somebody I visited multiple times but did not collaborate with. There isn't a special name for such a person. I don't understand this concept of "visiting somebody multiple times without collaborating with them." What happened on these visits, if not research? Why does this world-class expert have so much time that they can host a visitor for something other than research? Who's paying for these visits, and why? I suspect that the asker actually means that they've met the person several times. @DavidRicherby Isn't that very common? For example, most schools hold weekly seminars, and people can be invited as speakers every year. They can be organizers of different conferences, and people can be invited to speak too. It doesn't matter who's paying. Sometimes the host pays; sometimes people use their own travel grants. @ArcticChar I wouldn't describe any of those things as "visiting somebody". If I give a seminar at your department and have a chat with you too, or go to the same conference as you, that's "meeting you", not "visiting you". And who pays doesn't matter per se, but travel funding on research grants is for the furtherance of that research. It's hard to see how making multiple visits to somebody furthers your or their research if you're not collaborating on research during those visits. Your letter writer will describe the relationship in the first paragraph of their letter. If you are not sure what they will say, then ask the letter writer. You should be saying the same thing they say.
What is Cryptocurrency Market Data? Cryptocurrency Market Data refers to a collection of information and statistics related to the overall cryptocurrency market. It encompasses various data points, including market capitalization, trading volume, price movements, market trends, and other relevant metrics for different cryptocurrencies. Cryptocurrency Market Data provides insights into the overall state of the cryptocurrency market, helping investors, traders, researchers, and analysts understand and assess market dynamics. What sources are commonly used to collect Cryptocurrency Market Data? Common sources used to collect Cryptocurrency Market Data include cryptocurrency exchanges, financial data providers, blockchain networks, and cryptocurrency market data platforms. Cryptocurrency exchanges, where users buy, sell, and trade cryptocurrencies, generate data on trading volumes, prices, and market activity. Financial data providers, such as Bloomberg, CoinMarketCap, CoinGecko, or CoinCap, aggregate data from various exchanges and provide comprehensive market data feeds. Blockchain networks, such as Bitcoin or Ethereum, offer on-chain data that includes transaction volumes, block sizes, and other network statistics. Cryptocurrency market data platforms specialize in collecting and analyzing market data, offering real-time and historical data for different cryptocurrencies and market indicators.
What are the key challenges in maintaining the quality and accuracy of Cryptocurrency Market Data? Maintaining the quality and accuracy of Cryptocurrency Market Data can be challenging due to several factors. One challenge is the inconsistency and fragmentation of data across different exchanges. Cryptocurrency exchanges can have varying data reporting methodologies, leading to discrepancies in trading volumes, price calculations, or market metrics. It is crucial to reconcile and normalize data from multiple sources to ensure accuracy and consistency. Another challenge is the presence of fake or manipulated trading volumes, known as wash trading or spoofing, which can distort market data. Detecting and filtering out such artificial volumes requires advanced data analysis techniques and cross-validation with reliable sources. Additionally, the rapid pace of market movements and the volatility of cryptocurrencies make real-time data collection and updates essential to reflect the most current market conditions accurately. What privacy and compliance considerations should be taken into account when handling Cryptocurrency Market Data? When handling Cryptocurrency Market Data, privacy and compliance considerations should be taken into account, particularly regarding user privacy and regulatory compliance. Privacy protection measures, such as anonymizing user data or aggregating data at a high level, can help protect individual traders' identities while still providing meaningful market insights. Compliance with data protection regulations, such as the General Data Protection Regulation (GDPR), is crucial to ensure the proper handling and storage of any personal data associated with market data. Furthermore, compliance with anti-money laundering (AML) and know-your-customer (KYC) regulations is important when dealing with market data that may reveal potential illicit activities or suspicious patterns. 
What technologies or tools are available for analyzing and extracting insights from Cryptocurrency Market Data? Various technologies and tools are available for analyzing and extracting insights from Cryptocurrency Market Data. Data analysis platforms, such as Excel, Python libraries like pandas, or specialized market data platforms, allow users to process and analyze market data, perform statistical calculations, and derive meaningful insights. Visualization tools, such as Tableau, Power BI, or custom-built charting libraries, enable the creation of visual representations of market data, aiding in trend analysis and pattern identification. Machine learning algorithms and data mining techniques can be applied to uncover correlations, detect anomalies, or develop predictive models based on market data. Additionally, natural language processing (NLP) techniques can be used to analyze sentiment data from social media or news sources, providing insights into market sentiment and its impact on cryptocurrency prices. What are the use cases for Cryptocurrency Market Data? Cryptocurrency Market Data has several use cases within the cryptocurrency ecosystem and beyond. Investors and traders rely on market data to track the performance of different cryptocurrencies, analyze market trends, and make informed investment decisions. Cryptocurrency exchanges and trading platforms utilize market data to provide real-time price feeds, trading charts, and market order information to their users. Financial institutions and analysts leverage market data for market research, portfolio management, risk assessment, and the development of trading strategies. Researchers and academics use market data to study market efficiency, price discovery mechanisms, market manipulation, or the impact of regulatory events on the cryptocurrency market. 
Cryptocurrency Market Data is also valuable for regulatory bodies, allowing them to monitor market activity, detect potential market abuses, and ensure compliance with relevant financial regulations. What other datasets are similar to Cryptocurrency Market Data? Datasets similar to Cryptocurrency Market Data include Cryptocurrency Price Data, Trading Volume Data, Order Book Data, and Blockchain Network Data. Cryptocurrency Price Data focuses specifically on the historical and real-time prices of cryptocurrencies. Trading Volume Data provides insights into the trading activity and volumes of different cryptocurrencies. Order Book Data reveals the current buy and sell orders available on cryptocurrency exchanges, offering insights into market liquidity and depth. Blockchain Network Data encompasses data on transaction volumes, block sizes, network activity, and other network-related statistics. These datasets complement Cryptocurrency Market Data, enabling a comprehensive understanding of the cryptocurrency ecosystem, its market dynamics, and the underlying blockchain networks.
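As a small illustration of the statistical calculations the analysis tools above perform, here is a hedged sketch of a simple moving average, a common trend-analysis metric; the prices are mock values for illustration, not real market data.

```javascript
// Simple moving average (SMA): the mean of each sliding window of
// closing prices. Smooths short-term volatility to expose the trend.
function movingAverage(prices, window) {
  const averages = [];
  for (let i = 0; i + window <= prices.length; i++) {
    const sum = prices.slice(i, i + window).reduce((a, b) => a + b, 0);
    averages.push(sum / window);
  }
  return averages;
}

// Mock daily closing prices (illustrative values only)
const closes = [100, 102, 101, 105, 110, 108, 111];
const sma3 = movingAverage(closes, 3); // 3-day moving average
```

The same sliding-window pattern underlies many of the market indicators mentioned above, such as rolling volume or volatility measures.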
A Scam Email with the Subject "CitiBusiness customer service: security warning! -Thu, 28 Feb 2008 03:07:42 -0600" was received in one of Scamdex's honeypot email accounts on Thu, 28 Feb 2008 01:07:59 -0800 and has been classified as a Generic Scam. The sender was "CitiBusiness" <mail_system.id81518-5533CBF@citi.com>, although it may have been spoofed. Dear CitiBusiness customer, CitiBusiness new Scheduled Maintenance Program protects your data from unauthorized access. CitiBusiness Online Form is important addition to our scheduled maintenance program. Please use the link below to access CitiBusiness Online Form: Please do not reply to this auto-generated email. Follow instructions above.
/*
Preparing and cleaning up the test environment

When we are creating test cases, we may need to prepare the environment before running each test. We can create required data, connections, configuration, etc. We can group all of this code inside a special "before" function, which receives as a parameter the function that must run before the set of test cases starts.

We can also run code before each individual test case; for that we have the "beforeEach" function.

Let's see an example of how to use both.
*/
const chai = require('chai');
const assert = chai.assert;

describe('Test cases', () => {
  before(() => {
    console.log('When starting the test cases')
  })

  beforeEach(() => {
    console.log('Before each test case')
  })

  it('Case 1', () => {
    assert(true, 'True is true');
  })

  it('Case 2', () => {
    assert(true, 'True is true');
  })

  it('Case 3', () => {
    assert(true, 'True is true');
  })
})

/*
And the corresponding console output from running the tests:

Test cases
When starting the test cases
Before each test case
Case 1
Before each test case
Case 2
Before each test case
Case 3

3 passing (10ms)
-------------------------
As we can see:
● The code in the before() function is called a single time for this set of test cases
● The code in the beforeEach() function is called before each test case starts

Just as we have functions that run before the test cases, we can also run functions after each individual test case, or after the whole set of test cases.
*/
/*
Let's look at the previous example, extended for these cases.
*/
const chai = require('chai');
const assert = chai.assert;

describe('Test cases', () => {
  before(() => {
    console.log('When starting the test cases')
  })

  beforeEach(() => {
    console.log('Before each test case')
  })

  after(() => {
    console.log('Run after all the test cases')
  })

  afterEach(() => {
    console.log('Run after each test case')
  })

  it('Case 1', () => {
    assert(true, 'True is true');
  })

  it('Case 2', () => {
    assert(true, 'True is true');
  })

  it('Case 3', () => {
    assert(true, 'True is true');
  })
})

/*
And the corresponding output from running this new set of tests:

Test cases
When starting the test cases
Before each test case
Case 1
Run after each test case
Before each test case
Case 2
Run after each test case
Before each test case
Case 3
Run after each test case
Run after all the test cases

3 passing (10ms)

The before() and beforeEach() functions are useful for initializing data needed by all the tests in the set, while the after() and afterEach() functions are useful for releasing the data or resources used in our tests. Some examples where we can use these functions:
● Connecting to/disconnecting from a database
● Initializing/deleting data in a database
● Allocating/releasing resources
*/
Intermittent crash on startup, and on game close This crash usually happens when I exit the game (preventing it from exiting cleanly), but I also encountered it twice while starting the game just now. Appears that it's trying to set the title bar before the title bar actually exists yet, and isn't nil-checking it. 2024-06-27 12:13:40.246 - Thread: 1 -> Plugin Init: avaness.PluginLoader.Main 2024-06-27 12:13:40.247 - Thread: 1 -> [PluginLoader] [Info] Initializing 16 plugins 2024-06-27 12:13:40.265 - Thread: 1 -> Info: Patched methods 2024-06-27 12:13:40.358 - Thread: 33 -> Exception occurred: System.NullReferenceException: Object reference not set to an instance of an object. at FPSCounter.GUI.TitlebarStats.UpdateTitleBar() at FPSCounter.GUI.TitlebarStats.Update() at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ThreadHelper.ThreadStart() 2024-06-27 12:13:40.358 - Thread: 33 -> Showing message 2024-06-27 12:13:40.358 - Thread: 33 -> MyInitializer.OnCrash 2024-06-27 12:13:40.358 - Thread: 33 -> var exception = System.NullReferenceException: Object reference not set to an instance of an object. 
at FPSCounter.GUI.TitlebarStats.UpdateTitleBar() at FPSCounter.GUI.TitlebarStats.Update() at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ThreadHelper.ThreadStart() 2024-06-27 12:13:42.786 - Thread: 33 -> ================================== CRASH INFO ================================== AppVersion: 01_204_018 GameName: Space Engineers IsOutOfMemory: False IsGPU: False IsNative: False IsTask: False IsExperimental: False ProcessRunTime: 14 PCUCount: 0 IsHang: False GCMemory: 221 GCMemoryAllocated: 221 HWAvailableMemory: 31776 ProcessPrivateMemory: 1605 AnalyticId: SE ================================== OFNI HSARC ================================== Thanks for reporting the issue as well as attaching the error; will be fixing this in a hotfix.
Given a triangle in $\mathbb R^3$ I know the barycentre, one vertex, the normal and the length of all three sides. How to compute the other vertices I have an arbitrary triangle in $\Bbb R^3$, i.e. it's scalene. I know the lengths of all sides ($l_0$, $l_1$ and $l_2$) and the coordinates of the barycentre ($O$ in the diagram) and one vertex ($v_1$). I know that this does not uniquely define a single triangle but I also know the normal vector for the triangle, ie for the plane in which it lies which I believe should define a unique triangle. I've spent some time trying to derive a closed form solution for the coordinates of vertices $v_0$ and $v_2$ without success. For example, it seems to me that the intersection of two spheres, one centred at $v_1$ with radius $l_0$ the other at $X$ with radius $\frac{l_1}{2}$, and the plane in which the triangle lies ought to give me $v_2$ but I have been unable to express this in a way which lets me plug in the unknowns to get the result. I'm aware that I could try rotating the whole frame into the XY plane and trying to solve there but it feels like there ought to be a simpler solution. Am I correct in thinking the information I have defines a unique triangle? Is there an efficient solution for locating the two unknown vertices? Image showing triangle with vertices and barycentre labelled The problem is essentially two-dimensional, as you know on which plane it lies. Assume w.l.o.g. that this is a horizontal plane, and we will use just two coordinates. Also assume w.l.o.g. that the barycenter has coordinates $(0,0)$ and $v_i=(x_i,y_i)$, $i=1,2,3$, and you are given $(x_1,y_1)$. Then you immediately get $x_2+x_3=-x_1$; $y_2+y_3=-y_1$; you also are given $l_1=\sqrt{(x_3-x_2)^2+(y_3-y_2)^2}$, $l_2=\sqrt{(x_1-x_3)^2+(y_1-y_3)^2}$, $l_3=\sqrt{(x_1-x_2)^2+(y_1-y_2)^2}$. 
Thus you have 5 equations and only 4 unknowns, so unless the data are inconsistent you can solve these equations and get a finite number of possible solutions (possibly, more than one, see the diagram for two possibilities with the same side lengths).
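The two-sphere construction in the question does reduce to a closed form once you work in the plane: the midpoint of the unknown side is $M = (3O - v_1)/2$, and the half-side vector from $M$ has a fixed component along $M - v_1$, leaving only a sign ambiguity — the two mirror triangles noted above. A numerical sketch, assuming $n$ is a unit normal and, as in the question's spheres, $l_0 = |v_1 - v_2|$ and $l_1 = |v_0 - v_2|$ (the function name and argument order are my own):

```python
import numpy as np

def recover_vertices(O, v1, n, l0, l1):
    """Return the two candidate (v0, v2) pairs for a triangle with
    barycentre O, known vertex v1, unit plane normal n,
    l0 = |v1 - v2| and l1 = |v0 - v2|."""
    O, v1, n = (np.asarray(x, dtype=float) for x in (O, v1, n))
    M = (3.0 * O - v1) / 2.0          # midpoint of the unknown side v0-v2
    a = M - v1
    la = np.linalg.norm(a)
    # Write v2 = M + d with |d| = l1/2; the constraint |v2 - v1| = l0
    # fixes the component of d along a:
    alpha = (l0**2 - la**2 - (l1 / 2.0) ** 2) / (2.0 * la)
    beta_sq = (l1 / 2.0) ** 2 - alpha**2
    if beta_sq < 0:
        raise ValueError("inconsistent data: no such triangle")
    beta = np.sqrt(beta_sq)
    a_hat = a / la
    p_hat = np.cross(n, a_hat)        # in-plane direction perpendicular to a
    return [(M - d, M + d)            # the two mirror solutions
            for d in (alpha * a_hat + s * beta * p_hat for s in (1.0, -1.0))]
```

The remaining side length $l_2$ is implied by this data, so it serves only as a consistency check on the input.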
[00:05] <pleia2> tsimonq2: I always just look at past issues to refresh my memory, I don't remember either ;) [00:05] <pleia2> Issue 472 has some [00:16] <tsimonq2> pleia2: Ok, thanks [00:50] <tsimonq2> pleia2: Could you please do a review on that section? [00:50] <tsimonq2> pleia2: I want to make sure I got it right [00:52] <pleia2> just added some commas, but otherwise that's fine :) [00:52] <tsimonq2> pleia2: Even the one from the DMB? [00:53] <pleia2> added commas there too ;) [00:53] <tsimonq2> pleia2: Thanks ;) [00:54] <tsimonq2> The only preasom why I picked up that reason is because I'm subscribed to devel-permissions... [00:54] <tsimonq2> *reason [00:54] <tsimonq2> preasom? I don't even know where that came from... lol [00:56] <pleia2> I thought the DMB used to send these things to the news team mailing list [00:57] <tsimonq2> I did too [00:58] <pleia2> most boards need a gentle reminder from time to time :) [00:58] <tsimonq2> :) [00:59] <tsimonq2> pleia2: I was going to ping Łukasz but he's not on IRC atm :P [01:14] <Unit193> pleia2: Did you find some automated linkcheck thing? I didn't get any pings last two weeks, though I still try to grab the link when I see it. [01:18] <pleia2> Unit193: I quit 2 weeks ago [01:18] <Unit193> Oh. [01:18] <Unit193> Wow. [01:18] <pleia2> I mean, it's probably hard to tell since I still keep doing things, but tsimonq2 and jose have been handling things :) [01:18] <tsimonq2> Unit193: I pinged you last week, but didn't for 502 [01:18] <tsimonq2> But I certainly did for 501, and got no response [01:29] <Unit193> Didn't see it. [01:36] <tsimonq2> k [03:16] <guiverc> just occurred to me that I didn't add header to OerHeks article I added .. will add tomorrow when I add planet... [10:33] <OerHeks> sad news, ubuntu is/was vulnerable with a kernel heap out-of-bounds access bug >> http://blog.trendmicro.com/results-pwn2own-2017-day-one/ [10:34] <OerHeks> maybe more distro's too, not mentioned in this article
convert hex string to byte not byte array i needed to convert hex which i get from color code int color = singleColor.getColor(); String rgbString = "R: " + Color.red(color) + " B: " + Color.blue(color) + " G: " + Color.green(color); String hexRed = "0x" + Integer.toHexString(Color.red(color)); String hexGreen = "0x" + Integer.toHexString(Color.green(color)); String hexBlue= "0x" + Integer.toHexString(Color.blue(color)); Log.e("hex string", hexRed + hexGreen + hexBlue); log generated is........ 0xb30x120xff which is completed i wanted but i want to convert this in byte and then join them as byte array like this which is working completely ok byte[] hex_txt = {(byte)0xff, (byte)0x00, (byte)0x00}; sendData(hex_txt); my question is how to convert this string to like this so i can send data.... byte byteRed = Byte.valueOf(hexRed); this is not working number format exception also tried other solutions which are not working advance thx for help What is hexRed when the above gives you the NumberFormatException? possible duplicate of java convert to int valueOf does not recognize 0x. See this answer for some solutions. number format exception occurs when i direct convert with hexred so thats problem when this occurs... so you are right about it but is there any difference between byte byteRed = Byte.valueof("ff") and i sendd thiss byteRed or convert thiss (byte)0xff ? #1: If you want to use valueOf on a string of hex digits, you need to say Byte.valueOf(yourString, 16). #2: Byte is a signed type that goes from -128 to 127, so it will still fail if you give it "ff" because that is 255 and is out of range. If you want to do it that way, use Integer.valueOf and then cast to a Byte. (Actually you'll need to cast to an (int) to unbox the type first, then cast that to (byte).) can you give an example of that as i am getting confused ...is there difference between adding 0xff and just ff to byte ? I don't understand your question. 
Some methods that convert strings to integers understand 0x and some methods don't. Are you having difficulty understanding that? valueOf doesn't understand 0x, but you can use it like byte byteRed = (byte)(int)Integer.valueOf(substring(hexRed,2),16);. sorry my question was hexRed = "oxff" byte byteRed = (byte)(int)Integer.valueOf(hexRed.substring(2),16); and byte byteRed = (byte)oxff; both are same ? The first one will assign byteRed to 0xff, which I think is what you want. You can only convert Strings to a byte array and vice versa. String example = "This is an example"; byte[] bytes = example.getBytes(); System.out.println("Text : " + example); System.out.println("Text [Byte Format] : " + bytes); Google "convert String to byte" and you will find plenty on converting string to byte[] http://www.mkyong.com/java/how-do-convert-string-to-byte-in-java/ The question has to do with converting strings that contain hexadecimal numbers in them. getBytes does not do that. can you please explain in which i have asked as i needed in specific format...you means 0xb30x120xff string to getbytes will work ?
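Putting the accepted comments together, a minimal sketch (the class and method names here are my own, not from the thread):

```java
// Strip an optional "0x" prefix, parse the digits base-16 as an int,
// then narrow to a byte.
public class HexToByte {
    static byte parseHexByte(String hex) {
        if (hex.startsWith("0x")) {
            hex = hex.substring(2);
        }
        // Byte.valueOf("ff", 16) would throw: 255 is outside byte's
        // signed range (-128..127). Parse as int first, then cast.
        return (byte) Integer.parseInt(hex, 16);
    }

    public static void main(String[] args) {
        byte[] rgb = { parseHexByte("0xb3"), parseHexByte("0x12"), parseHexByte("0xff") };
        // Bytes print as signed values, but the bit patterns are intact.
        System.out.println(rgb[0] + " " + rgb[1] + " " + rgb[2]);
    }
}
```

Note that a Java byte is signed, so 0xff comes back as -1; the bit pattern is still exactly what sendData needs.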
hibidy wrote:One of the things I've noticed is that there is a difference in the "makers" of DAW's vs the "users" There will always be odds because some people want general comforts, some don't care, and there are always a ton of people who back up the idea that we don't NEED onwards and upwards. In all these years (which was about the time I joined KVR) I've seen and been a part of dramatic change, but in fact, nothing has changed that much. For example, if tracktion had continued development under Jules way way back, I don't think I'd have gone through the MARATHON of hosts I've tried. When mackie got involved it a) died b) didn't work all that well for many people. So the idea that a host doesn't "need" x or y is an endless debate but I think some of the things missing are critical for a modern DAW. Yeah, good points. For me it boils down to the balance between developer vision and user needs/vision and how well the communication works between those. To create a full DAW you need a lot of motivation and a rather large amount of headstrong rigour to keep on track and don't lose focus every time a user or even a large group of users decides to give you pressure in a certain direction. Since you need to keep the whole application in focus and how everything works together, things are way more complicated and have much more implications down the road as the typical user is aware of or cares about (I explicitly include myself here ). At the same time, the developer needs to be open minded about things, since of course he has his own preferences and some of them may NOT be motivated by how to best do things, but simply grounded in his own interests and bias. You can see this clearly in some of the smaller DAWs, that struggle endlessly uphill since they somehow miss some basic things that would make them "whole" for a larger audience but somehow the developer doesn't see them as important enough or that critical. 
That can be GUI and workflow things or must-have features that are Yes/No things for too many. The user on the other hand needs to get the feeling that he is heard and cared for, that he is welcome, that he's on a positive ride towards a better application, even if something is missing ATM. If that can be transported by clear and open communications, it's half of the mission done. The other half of course is to create that better application on several levels, from GUI to workflow to features to docs to overall "feel" of software and company. Until now, Bitwig had "carte blanche" to do and change things as they wanted (more or less of course). As soon as the application is released, this is over, since from that moment on, you need to keep everything in line and working. Your freedom as a developer is drastically reduced. So before release, you need to make sure that the basis and groundworks are strong and flexible enough to survive many years of continued development, even if some features may be missing - the foundation is actually much more important for a 1.0, even if the future userbase may heavily disagree... And still you will always have areas where after a while you realize, you painted yourself into a corner in some regards. That's simply unavoidable. In addition, some decisions simply create certain dependencies. If clean PDC is one of the goals, certain other things suddenly become very complicated. You can't just send data around tracks as otherwise a modular host could do easily, since you simply wouldn't be able to keep things in time. Or the decision to create a DAW for live use: You can't trim it as heavily towards low CPU use as a pure studio DAW, since on stage, you need to enable and disable tracks and effects etc. fluidly, so they basically need to run all the time, even if disabled/muted... Other things are limited by the GUI: If you offer unlimited Sends, how do you deal with them GUI-wise? 
I found it rather healthy to be involved in some such projects to realize, how some "very tiny things" can cost hours and day of work. Adding one tiny feature can explode in all directions, from keeping presets and scenes consistent and backwards compatible to showing a new parameter in the GUI in a consistent way to adding it to the docs, not even mentioning the whole enchilada that may arise internally. So buying into BWS 1.0 isn't about getting the perfect host on day one IMHO. But if all parties involved do their best, it should become a hell of a ride.
Can the collapse of a gas cloud lead to an elliptical structure? Is it likely, unlikely, or impossible for an elliptical structure to form when a gas cloud collapses? Due to the conservation of angular momentum, one would expect that disk structures are much more likely to form than elliptical structures. Elliptical galaxies then formed by merging of disk structures (or of earlier elliptical structures, which themselves must have originated from disk structures). The presence of mostly very massive elliptical galaxies in the centers of galaxy clusters (with high mass densities) would support such a merging history. Were the first galaxies all disk-like and are old large ellipticals the result of continuous merging events? On the other hand, the majority of stars in ellipticals seem to have formed in a relatively short rapid burst of star formation. If elliptical structures were the result of mergers, wouldn't it be unlikely for the stars in these merging galaxies to have all formed at the same time? Shouldn't we, therefore, find various stellar populations in ellipticals that are old but do not have the same age? Or perhaps the currently observed old populations in ellipticals were formed during the merging process, and the remaining gas of the merging partners was used up then. Still, then we should be able to detect some older stars from before the merging. An interesting question then is what proportion do these "pre-merger stars" represent in ellipticals? I feel like there is more than one question here; one about the dynamics of gas clouds and one about identifying stellar populations in large ellipticals? Yes, that is probably right, feel free to ask either of them. If I have time this week I will make two separate posts. The shape of a collapsing cloud is influenced by various factors, including the initial shape, angular momentum, ability to dissipate heat, and sub-clumping processes.
When a gas cloud collapses, it can only contract by a factor of 2 in each direction before the rising heat content requires energy dissipation for it to continue its collapse. Generally, gas can dissipate and so the collapse proceeds but quite slowly. If the cloud forms sub-clumps at only a few collapse factors, that is, it rapidly forms stars, globular clusters, or molecular clouds, it maintains its shape at this point. Those gas clouds that were not rotating rapidly before the collapse would be primarily supported through an anisotropic velocity dispersion (pressure from random velocities), not rotational support, and will likely end up in an elliptical shape. From the distribution of specific angular momentum of primordial gas clumps in cosmology simulations, one expects around 20% of first collapse galaxies would be ellipticals. This is about the fraction of galaxies that are elliptical and indicates that ellipticals formed by mergers are a minority of all ellipticals. For a gas clump that possesses significant angular momentum and sub-clumps slowly enough, it will collapse along the rotation axis into a flat rotating disk. These are the Population I stars in spirals. The stars that formed earlier, before the clump formed a flattened disk, compose the elliptical shaped stellar halo of Population II stars. The halo plays an essential role in this process. With a dissipating gas cloud embedded in a non-dissipating halo, the dynamics become more complex. Because the universe began as a homogeneous expanding gas, there is very limited angular momentum. All angular momentum is picked up by tidal interactions between neighboring density enhancements. The dark matter halo only collapses by a factor of two, enabling the baryonic matter (ordinary matter) to collapse by a few extra factors, which leads to the higher rotational velocities that we observe in galaxies. Without the influence of this dark matter, more galaxies would be ellipticals.
The prevalence of ellipticals in clusters does not support the picture that all ellipticals are formed by mergers. Mergers between galaxies require velocities to be closely matched. This is not the case in clusters because they have high velocity dispersions. Thus, the merger rate is low in clusters once the cluster forms. This suggests that the elliptical galaxies in clusters are primordial rather than formed by the late merger process. There are a couple of potential explanations for the preferential formation of ellipticals in clusters. Firstly, galaxies forming in clusters may have collapsed into stars at a faster rate due to the higher densities present. The dense environment could lead to more efficient star formation processes, resulting, as described above, in the formation of elliptical galaxies. Secondly, the process of tidal torque spin-up, which contributes to the formation of spirals, might be less effective in cluster environments. Tidal torque spin-up typically requires pure expansion to avoid tidal locking. Two caveats for the "merger rate is low in clusters" argument: 1) the buildup of cluster velocity dispersions may be gradual, allowing some merging to continue for a while; 2) ellipticals at the centers of clusters are clearly the result of multiple mergers. Yes. The cD galaxies at the center of clusters grow by mergers as galaxies repeatedly fall through the center. I forgot to mention that. The build-up of cluster velocity dispersion may be slow as the density rises. But, that means that one can't point to the high density in clusters as the reason for the predominance of ellipticals. Also, the cores of rich clusters formed quite early. You said "Due to the conservation of angular momentum disk structures are more likely to form". Well, there may not be any angular momentum in the first place, so disc structures cannot form at all.
Angular momentum can, for instance, be transferred from the central region of a rotating gas cloud to the outer regions via magnetic fields created by the 'dynamo effect' in partially ionized gases (see http://th.nao.ac.jp/MEMBER/tomisaka/Lecture_Notes/StarFormation/5/node94.html for more). So there will always be a tendency for the central region of gaseous structures to have small angular momentum (just look at the angular momentum distribution in the present solar system). A galaxy cluster will therefore tend to have smaller systematic velocities (i.e. be closer to hydrostatic equilibrium) in the central region as compared to the outer region. It's probably impossible for there to be zero angular momentum.
#include "math/AABB.h"
#include "math/MathUtils.h"

namespace bge
{

AABB::AABB(Vec3f min, Vec3f max)
    : m_MinExtent(min)
    , m_MaxExtent(max)
{
}

// Transform all 8 corners of the AABB and choose the new min/max
AABB AABB::Transform(const Mat4f& transform) const
{
  // Build the 8 corner points from the min and max extents
  // (m_MinExtent = bottom left behind, m_MaxExtent = top right front)
  const Vec4f corners[8] = {
      Vec4f(m_MinExtent[0], m_MinExtent[1], m_MinExtent[2], 1.0f),  // bottom left behind
      Vec4f(m_MaxExtent[0], m_MaxExtent[1], m_MaxExtent[2], 1.0f),  // top right front
      Vec4f(m_MaxExtent[0], m_MinExtent[1], m_MinExtent[2], 1.0f),  // bottom right behind
      Vec4f(m_MaxExtent[0], m_MinExtent[1], m_MaxExtent[2], 1.0f),  // bottom right front
      Vec4f(m_MaxExtent[0], m_MaxExtent[1], m_MinExtent[2], 1.0f),  // top right behind
      Vec4f(m_MinExtent[0], m_MaxExtent[1], m_MinExtent[2], 1.0f),  // top left behind
      Vec4f(m_MinExtent[0], m_MaxExtent[1], m_MaxExtent[2], 1.0f),  // top left front
      Vec4f(m_MinExtent[0], m_MinExtent[1], m_MaxExtent[2], 1.0f),  // bottom left front
  };

  // Transform every corner and accumulate the new min/max values
  Vec4f min = transform * corners[0];
  Vec4f max = min;
  for (int i = 1; i < 8; ++i)
  {
    const Vec4f point = transform * corners[i];
    min = GetMinValues(min, point);
    max = GetMaxValues(max, point);
  }

  // Construct the new AABB from the recalculated extents
  return
      AABB(Vec3f(min[0], min[1], min[2]), Vec3f(max[0], max[1], max[2]));
}

AABB AABB::Expand(const Vec3f& amt) const
{
  return AABB(m_MinExtent - amt, m_MaxExtent + amt);
}

AABB AABB::MoveTo(const Vec3f& destination) const
{
  return Translate(destination - GetCenter());
}

bool AABB::Intersects(const AABB& other) const
{
  return (m_MaxExtent >= other.m_MinExtent) && (m_MinExtent <= other.m_MaxExtent);
}

bool AABB::Contains(const Vec3f& point) const
{
  return (point >= m_MinExtent) && (point <= m_MaxExtent);
}

bool AABB::Contains(const AABB& other) const
{
  return (other.m_MinExtent >= m_MinExtent) && (other.m_MaxExtent <= m_MaxExtent);
}

AABB AABB::Translate(const Vec3f& amt) const
{
  return AABB(m_MinExtent + amt, m_MaxExtent + amt);
}

AABB AABB::ScaleFromCenter(const Vec3f& amt) const
{
  const Vec3f center = GetCenter();
  const Vec3f scaledExtents = GetExtents() * amt;
  return AABB(center - scaledExtents, center + scaledExtents);
}

AABB AABB::ScaleFromOrigin(const Vec3f& amt) const
{
  return AABB(m_MinExtent * amt, m_MaxExtent * amt);
}

AABB AABB::AddPoint(const Vec3f& other) const
{
  return AABB(GetMinValues(m_MinExtent, other), GetMaxValues(m_MaxExtent, other));
}

AABB AABB::AddAABB(const AABB& other) const
{
  return AABB(GetMinValues(m_MinExtent, other.m_MinExtent),
              GetMaxValues(m_MaxExtent, other.m_MaxExtent));
}

Vec3f AABB::GetCenter() const
{
  return (m_MinExtent + m_MaxExtent) * 0.5f;
}

Vec3f AABB::GetExtents() const
{
  return (m_MaxExtent - m_MinExtent) * 0.5f;
}

bool AABB::operator==(const AABB& other) const
{
  return m_MinExtent == other.m_MinExtent && m_MaxExtent == other.m_MaxExtent;
}

bool AABB::operator!=(const AABB& other) const
{
  // Two AABBs differ if either extent differs, so this must be ||, not &&
  return m_MinExtent != other.m_MinExtent || m_MaxExtent != other.m_MaxExtent;
}

} // namespace bge
Kubernetes CrowdSec Integration - Part 2: Remediation

In this article, we will see how to install CrowdSec in a Kubernetes (K8s) cluster, configure it to monitor the applications of our choice, and detect attacks on those applications.

Hello again to the readers who have read the first part of the article about how to integrate CrowdSec with Kubernetes and detect attacks. For the others, welcome to part 2, which will cover the remediation part on Kubernetes and, more precisely, on the Nginx Ingress Controller. First, you need a ready Kubernetes cluster using the Nginx Ingress Controller, an app using this controller, and the CrowdSec helm chart installed (again, follow the 1st part to get it). After detecting attacks in the previous article, we can now delete all the alerts to start from a clean CrowdSec database.

Install the CrowdSec Lua bouncer plugin

To install a bouncer, we need to generate a bouncer API key so the bouncer can communicate with the CrowdSec API and know whether it needs to block an IP. Still in the same crowdsec-lapi container shell, generate the bouncer API key using this command:

You will get an API key; keep it and save it for the ingress-nginx bouncer. Now we can patch our ingress-nginx helm chart to add and enable the CrowdSec Lua plugin using the following configuration (the API_KEY and API_URL let the bouncer communicate with the CrowdSec LAPI). You can put this configuration in a file `crowdsec-ingress-bouncer.yaml`. Once we have this patch, we can upgrade the ingress-nginx chart.

Now we have our ingress controller patched with the CrowdSec Lua bouncer plugin. We'll start an attack again using Nikto on `http://helloworld.local`. Getting a shell in the crowdsec-agent pod and listing the alerts, you'll see your IP attacking the helloworld app. Now, if we try to access the helloworld app using CURL... Tadaaa!
We can see that the Nginx ingress controller blocked our IP (by sending us a 403 HTTP code), and we cannot access the helloworld application. To make the app accessible again, from the crowdsec-agent pod, we just need to delete the decision on our IP. And CURL the helloworld app again. And we can see that we have access again. Over both Part 1 and Part 2 of this article, we've shown how to integrate CrowdSec in a Kubernetes environment on both the detection and the protection parts. So again, if you have an idea or a need for K8s bouncer integration, feedback, or suggestions, feel free to contact us using our community channels (Gitter and Discourse). Don't forget to join our Discord, too! About the author Coming from a sys admin then pentester/secops background, Hamza is now DevSecOps at CrowdSec. He is also a member of the core team.
While the Problem Runs
Whether you click "Run" or proceed through the "Domain Review", once the problem begins running, the icon on the problem tab will change from the Edit icon () to the Run icon (). The screen will look something like this:
On the left is the "Status Panel", which presents an active report of the state of the problem execution. It contains a text based report, a progress bar for the current operation, several history plots summarizing the activity, and a "Thumbnail" window of the current computational grid. The history plots summarize the number of nodes/cells in the mesh, the convergence of the current solver, the error estimates for the solution, and the current time step (in the case of time dependent problems). Clicking on any plot will display a legend indicating the meaning of the plot traces.
The format of the printed data will depend upon the kind of problem, but the common features will be:
• The elapsed computer time charged to this problem.
• The current regrid number.
• The number of computation Nodes (Mesh Vertices).
• The number of Finite Element Cells.
• The total Degrees of Freedom per variable (number of interpolation coefficients).
• The number of Unknowns (DOF times variables).
• The amount of memory allocated for working storage (in KiloBytes).
• The current estimate of RMS (root-mean-square) spatial error.
• The current estimate of Maximum spatial error in any cell.
Other items which may appear are:
• The current problem time and timestep
• The stage number
• The RMS and Maximum temporal error for the most recent iteration
• The iteration count
• A report of the current activity
On the right side of the screen are separate "Thumbnail" windows for each of the PLOTS or MONITORS requested by the descriptor. In steady-state problems, only MONITORS will be displayed during the run. They will be replaced by PLOTS when the solution is complete.
In time-dependent problems, all MONITORS and PLOTS will be displayed simultaneously and updated as the sequencing specifications of the descriptor dictate. PLOTS will be sent to the ".pg7" graphic record on disk for later recovery; MONITORS will not. In eigenvalue problems, there will be one set of MONITORS or PLOTS for each requested mode. In other respects, eigenvalue problems behave as steady-state problems.

A right-click in any "thumbnail" plot brings up a menu from which several options can be selected:

- Maximize: causes the selected plot to be expanded to fill the display panel. You can also maximize a thumbnail by double-clicking in the selected plot.
- Restore: causes a maximized plot to be returned to thumbnail size.
- Print: sends the window to the printer using a standard Print dialog.
- Export: invokes a dialog which allows the selection of a format for exporting the plot to other processes. Currently, the options are BMP, EMF, EPS, PNG, PPG and XPG. For bitmap formats (BMP, PNG, PPG and XPG) the dialog allows the selection of the drawing linewidth and the resolution of the bitmap, independent of the resolution of the screen. For vector formats (EMF, EPS) no resolution is necessary (FlexPDE uses a fixed resolution of 7200x5400). EPS produces an 8.5x11 inch landscape-mode PostScript file suitable for printing.

3D plots can be rotated in polar and azimuthal angle. The zoom level of a plot can be dynamically changed using "Zoom In", "Zoom Out", and "Cancel Zoom"; with the right-click menu, the zoom will be centered around the click position.

This may also be done with the keyboard. Left-click once inside the plot first to ensure the plot has focus (clicking and holding will report the plot coordinates of the mouse position). Then Z will zoom in, M will zoom out, and 0 will cancel the zoom and restore the zoom level to 100%. L, R, U, and P or the arrow keys will pan left, right, up, and down. The zoom change is centered around the most recent mouse position.
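For context, MONITORS and PLOTS are requested in the problem descriptor itself. A minimal, illustrative fragment (not a complete descriptor; the equation and plot choices here are only a sketch):

```
TITLE 'demo'
VARIABLES u
EQUATIONS  u: div(grad(u)) = 0
{ ... BOUNDARIES section omitted ... }
MONITORS
  contour(u)            { displayed while the run is in progress }
PLOTS
  contour(u)            { written to the .pg7 record for later recovery }
  elevation(u) from (0,0) to (1,1)
END
```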
A typical CONTOUR plot might appear as follows:

At the top of the display the "Title" field from the problem descriptor appears, with the time and date of problem execution at the right corner, along with the version of FlexPDE which performed the computation. At the bottom of the page is a summary of the problem statistics, similar to that shown in the Status Window:

- The problem name
- The number of gridding cycles performed so far
- The polynomial order of the Finite-Element basis (p2 = quadratic, p3 = cubic)
- The number of computation nodes (vertices)
- The number of computation cells
- The estimated RMS value of the relative error in the variables

In staged problems, the stage number will be reported. In eigenvalue problems, the mode number will be reported. In time-dependent problems, the current problem time and timestep will be reported. By default, FlexPDE computes the integral under the displayed curve, and this value is reported as "Integral". Any requested REPORTS will appear in the bottom line.

A typical ELEVATION plot might appear as follows:

Here all the labeling of the contour plot appears, as well as a thumbnail plot of the problem domain showing the position of the elevation in the figure. For boundary plots, the joints of the boundary are numbered on the thumbnail. The numbers also appear along the baseline of the elevation plot for positional reference.

While the problem is running, you can return the display panel to the editor mode by clicking the Edit Script tool or the Show Editor item in the Controls menu. The Run icon will continue to be displayed in the problem tab as long as the problem is running. When the problem terminates, the problem tab will again display the Edit icon. You can return to the graphic display panel by clicking the Show Plots tool or the Show Plots item in the Controls menu.
A 2D computer graphic is a representation of two-dimensional objects using colors, lines, shapes, and transparency. These graphics are intended for viewing on a computer screen and appear flat, although shading can be applied to create the appearance of three-dimensional objects. To understand the difference between a 2D computer graphic and a 3D one, it helps to first consider what a 3D computer image is.

A 2D computer graphic is typically much smaller than a traditional digital image, and it can be rendered at different resolutions. It is commonly used in documents and illustrations. The creation of 2D computer graphics began in the 1950s, when vector graphics devices were the primary means of displaying two-dimensional images; in the following decades, raster-based graphics devices became more common. Landmark developments in the field of 2D computer graphics were the PostScript language and the X Window System protocol.

A 2D computer graphic is not a literal capture of an object: it is a representation of a structure or scene built from models rather than photographic detail. It is often smaller than a digital image and can be rendered at a higher resolution. The two kinds of graphics have different advantages and disadvantages; the main practical difference is that a 2D computer graphic is smaller and less flexible than a 3D version.

A 2D computer graphic does not model 3D geometry or optical phenomena. Rather, it models multiple layers of objects. Each layer can be transparent or opaque, with its opacity described by a single number. This makes 2D computer graphics versatile and efficient for many applications. If you want to design an attractive and memorable infographic, a 3D computer graphic can be an excellent choice; if you are trying to convey a complex concept in a simple way, a 2D graphic will help you get the message across.
There are several types of computer graphics; the most common are 2D surfaces and 3D objects. While they differ in definition, they share similar characteristics. Two-dimensional surfaces are more useful in many applications, while 3D objects can be deformed and resized through a 3D model. This is an important consideration when choosing between the two kinds of image.

A 2D computer graphic is an image created using two-dimensional models. These are generally more common in advertising and in applications based on traditional printing methods, and they are a better choice when you want to illustrate an idea or product in a flat medium. When you design a 3D image, by contrast, you draw the picture in a three-dimensional format and render it to the screen.

A 2D computer graphic can be built from a two-dimensional model or stored as a two-dimensional image; both are digital artifacts rather than true representations of real-world objects. Two-dimensional graphics are widely used for typography, whereas a 3D model must first be rendered into a digital image. A 2D graphic file may be a layered model containing geometric models, digital images, and mathematical descriptions, and its contents can be manipulated or modified by two-dimensional geometric transformations such as translation, rotation, and scaling.
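Those two-dimensional geometric transformations are plain matrix arithmetic. A minimal sketch with NumPy (illustrative only, not tied to any particular graphics package):

```python
import numpy as np

def rotate_2d(points, theta):
    """Rotate an (n, 2) array of 2D points by theta radians about the origin."""
    c, s = np.cos(theta), np.sin(theta)
    rotation = np.array([[c, -s],
                         [s,  c]])
    return points @ rotation.T

# A unit square, as four corner points
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
quarter_turn = rotate_2d(square, np.pi / 2)  # (1, 0) maps to (0, 1), etc.
```

Translation is just adding an offset vector, and scaling multiplies by a diagonal matrix; layering such transformed shapes, each with its own opacity value, is exactly the layered model described above.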
The underlying mathematics of computer graphics is also used in fields such as 3D printing.
Lock screen control not showing song details as shown in notification

This is the notification screen, while this is the lock screen for the same song.

This is the code I am trying; it's really a basic implementation of the Media3 MediaSession:

```kotlin
class PlaybackService : MediaLibraryService() {
    private lateinit var mediaSession: MediaLibrarySession
    lateinit var exoPlayer: ExoPlayer

    // MediaSession
    private val librarySessionCallback = CustomMediaLibrarySessionCallback()

    override fun onGetSession(controllerInfo: MediaSession.ControllerInfo): MediaLibrarySession? {
        return mediaSession
    }

    override fun onCreate() {
        super.onCreate()
        exoPlayer = ExoPlayer.Builder(this).build()
        mediaSession = MediaLibrarySession.Builder(this, exoPlayer, librarySessionCallback).build()
    }

    private inner class CustomMediaLibrarySessionCallback : MediaLibrarySession.Callback {
        override fun onAddMediaItems(
            mediaSession: MediaSession,
            controller: MediaSession.ControllerInfo,
            mediaItems: MutableList<MediaItem>
        ): ListenableFuture<MutableList<MediaItem>> {
            val updatedMediaItems = mediaItems.map {
                it.buildUpon().setUri(it.mediaId).build()
            }.toMutableList()
            return Futures.immediateFuture(updatedMediaItems)
        }
    }

    override fun onDestroy() {
        exoPlayer.release()
        mediaSession.release()
        super.onDestroy()
    }
}
```

and

```kotlin
class MainActivity : ComponentActivity() {
    private lateinit var controllerFuture: ListenableFuture<MediaController>
    private lateinit var controller: MediaController

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContent {
            SimpleOwnGithubMedia3PlayerTheme {
                Box(contentAlignment = Alignment.Center, modifier = Modifier.fillMaxSize()) {
                    Button(onClick = {
                        //val url = "android.resource://$packageName/${R.raw.test}"
                        val url = "https://download.samplelib.com/mp3/sample-15s.mp3"
                        // we changed the song here, but the result is the same for any song
                        play(url)
                    }) {
                        Text(text = "Play")
                    }
                }
            }
        }
    }

    override fun onStart() {
        super.onStart()
        val sessionToken = SessionToken(this, ComponentName(this, PlaybackService::class.java))
        controllerFuture = MediaController.Builder(this, sessionToken).buildAsync()
        controllerFuture.addListener(
            {
                controller = controllerFuture.get()
                initController()
            },
            MoreExecutors.directExecutor()
        )
    }

    override fun onStop() {
        MediaController.releaseFuture(controllerFuture)
        super.onStop()
    }

    private fun initController() {
        //controller.playWhenReady = true
        controller.addListener(object : Player.Listener {
            override fun onMediaMetadataChanged(mediaMetadata: MediaMetadata) {
                super.onMediaMetadataChanged(mediaMetadata)
                log("onMediaMetadataChanged=$mediaMetadata")
            }

            override fun onIsPlayingChanged(isPlaying: Boolean) {
                super.onIsPlayingChanged(isPlaying)
                log("onIsPlayingChanged=$isPlaying")
            }

            override fun onPlaybackStateChanged(playbackState: Int) {
                super.onPlaybackStateChanged(playbackState)
                log("onPlaybackStateChanged=${getStateName(playbackState)}")
            }

            override fun onPlayerError(error: PlaybackException) {
                super.onPlayerError(error)
                log("onPlayerError=${error.stackTraceToString()}")
            }

            override fun onPlayerErrorChanged(error: PlaybackException?) {
                super.onPlayerErrorChanged(error)
                log("onPlayerErrorChanged=${error?.stackTraceToString()}")
            }
        })
        log("start=${getStateName(controller.playbackState)}")
        log("COMMAND_PREPARE=${controller.isCommandAvailable(COMMAND_PREPARE)}")
        log("COMMAND_SET_MEDIA_ITEM=${controller.isCommandAvailable(COMMAND_SET_MEDIA_ITEM)}")
        log("COMMAND_PLAY_PAUSE=${controller.isCommandAvailable(COMMAND_PLAY_PAUSE)}")
    }

    private fun play(url: String) {
        log("play($url)")
        log("before=${getStateName(controller.playbackState)}")
        // controller.setMediaItem(MediaItem.fromUri(url))  // we changed here
        val media = MediaItem.Builder().setMediaId(url).build()
        controller.setMediaItem(media)
        controller.prepare()
        controller.play()
        log("after=${getStateName(controller.playbackState)}")
    }

    private fun getStateName(i: Int): String? {
        return when (i) {
            1 -> "STATE_IDLE"
            2 -> "STATE_BUFFERING"
            3 -> "STATE_READY"
            4 -> "STATE_ENDED"
            else -> null
        }
    }

    private fun log(message: String) {
        Log.e("=====[TestMedia]=====", message)
    }
}
```

What should we do to show the song details on the lock screen as well?

Details:
Android Studio Dolphin
Tablet: Samsung Galaxy Tab A7 Lite
Android version 11

Hi @pawaom, Thanks for reporting! However, the described issue was not reproducible on our demo-session app. The details of the reproduction:

OS version: Android 11.
Media library versions: 1.0.0-alpha01, 1.0.0-alpha02, 1.0.0-alpha03, 1.0.0-beta01 and 1.0.0-beta02 at default release branch.

So this looks like an app-specific issue. Unfortunately, we can't give 1:1 support for solving app-specific issues. We'll leave this issue open for ~2 weeks in case anyone wishes to answer it here. If you think this is a problem with the Media library, please narrow down your issue to describe the Media library behaviour that doesn't match your expectations.
Thanks for the reply. I will check whether this is a Samsung-specific issue and get back to you. Just to clarify: did you try the code I shared above, or the media session demo code? The demo code works fine; however, the code above, which I found on Stack Overflow and which is at best a basic implementation of MediaSession, causes the issue. I want to know what we should do to show the notification properly on the lock screen as well.
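One thing worth double-checking in the code above: the MediaItem is built from a bare mediaId, so no MediaMetadata (title, artist, artwork) is ever attached to it, and some lock screens only render what the platform media session exposes through that metadata. A hedged sketch of attaching it explicitly, with placeholder title/artist values (this is a guess at the cause, not a confirmed fix):

```kotlin
// Placeholder metadata -- values here are illustrative, not from the report
val metadata = MediaMetadata.Builder()
    .setTitle("Sample 15s")
    .setArtist("samplelib.com")
    .build()

val media = MediaItem.Builder()
    .setMediaId(url)
    .setUri(url)
    .setMediaMetadata(metadata)  // give the session something to publish
    .build()

controller.setMediaItem(media)
controller.prepare()
controller.play()
```

If the lock screen then shows the details, the underlying issue may be that metadata extracted from the stream arrives too late (or not at all) on this device for the session to publish it.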
Page elements moving on window resize

Being new to CSS, I have looked at similar posts on Stack Overflow regarding this issue, but none of the resolutions seem to help with my site. I am using a template for the site and trying to edit the CSS so that the page will maintain one width and not shift its elements when the window is resized. An example page can be found here: (removed link for client) The content is contained within a wrapper currently set to relative position: #page_wrapper { position: relative; } I tried to change it to this: #page_wrapper { min-width: 960px; } This doesn't seem to be doing the trick though. When I resize the window, everything still shifts. Any ideas what else I need to change?

Your site is using Twitter Bootstrap: twitter.github.com/bootstrap/ It won't be a totally simple process to do what you want, but a starting point would be going to this page: http://twitter.github.com/bootstrap/customize.html There you could uncheck the "Responsive" checkboxes and change the Grid System elements to be whatever you want. It may however be best to leave those as they are. Then download the CSS files and replace the ones on your site and see if that helps (ensure you make a back-up of your current files first).

Great answer. I tried this and it definitely helped; however, so many other items are relying on it that it shifted how some of the other elements are displayed. One of the headaches, I guess, of using someone else's template. Thanks for the help.

No problem. I really like Bootstrap. I've only been using it for about a month or so and I've built three client sites in a lot less time than it would have taken previously. If you're using it from scratch it's a great tool. I use it from initializr.com and it's even better.

There are a few things going on here: The navigation has float: right on it somewhere.
This means that when its width, plus the width of anything it sits next to, is wider than its container, it is going to shift down so that it can fit.

Your min-width is too narrow. If your min-width is 960px, but the width of your navigation, plus the width of your logo (the two elements that sit side by side), plus any margins, paddings, and borders, add up to more than 960px, then they are not going to sit in line. You need to either adjust your min-width, or adjust the calculated width of the elements to fit within that 960px minimum. This can be done by making margins/paddings smaller, decreasing the text size, setting explicit widths, or any combination thereof.

Another great answer. Thanks a lot for helping with this. I'm not sure if I can mark two responses as answering the question, but I'll see if I can. Edit: it seems I can't. Thanks a lot for the help though!

@Cineno28 - You can always upvote answers by pressing the up arrow.

Your elements are probably moving around because you have them in the same tag. If you want your elements to hold their positions, you need to use a different tag for each element and align them to your preference, either in CSS or inside the tag (that's up to you). Otherwise, in a div tag, if you follow the same procedure for each element you shouldn't have any problems. That goes for sentences too: each word you want to position needs to sit in between individual tags.
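Pulling the two answers together, a fixed-width, non-responsive starting point might look like the following sketch. The #page_wrapper id comes from the question; the other selectors and the exact widths are hypothetical, and only need to sum (including margins, padding, and borders) to no more than the wrapper width:

```css
/* Hypothetical selectors; widths must add up to <= the wrapper width */
#page_wrapper {
    width: 960px;      /* one fixed width instead of a fluid min-width */
    margin: 0 auto;    /* stay centered when the window is wider */
}

#logo {
    float: left;
    width: 300px;
}

#nav {
    float: right;
    width: 640px;      /* 300 + 640 + 20px of breathing room = 960 */
}
```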
If you tell any web or mobile developer that you've been spending your days and nights reading and writing embedded C, they'll probably think you're courageous for embarking on the journey, or they might give you a hug and tell you the name of this "really great therapist" they know. Well, I'll take a high five, a hug, or a combination of both from you and your therapist. While embedded development is both challenging and exciting, it requires a different approach to many things. The constraints are different: memory, speed, stack size, heap size, etc. The toolchain is different: compilation, linking, memory maps, debugging, physical hardware integration. Things rarely thought about before become a daily consideration. Even with these differences, I'm finding that many of the conventions, practices, and values I've acquired over the years working at a higher level still apply. Isolating sections of code has always been a catalyst for my learning; then each one can be understood as a unit, rather than just a giant bunch of highly suspect code. C code should, by default, be under suspicion. Its easily grotesque and cryptic syntax is a place where issues can hide in the open. In the code I'm currently working with there are either too few function boundaries or they're poorly named. Either way, I've spent far too much time re-reading code that I just read a few days ago, figuring it out, again. A few decently named functions would have helped quicken the understanding and saved that repeated time and effort. There are lots of ways to improve C code and I'd like to share some with you. I'm by no means an expert, but I do have a fresh set of eyes. So here's something I learned higher up the stack that I've been effectively applying when writing embedded C code: small, well-named functions.

Small, well-named functions

Having small, well-named functions is better than having fewer functions or, heaven forbid, one monstrous function.
It's better because when you're working with smaller, bite-size chunks of code it's easier to understand, maneuver around, refactor, extract, test, and delete. Because smaller functions tend to be more focused, it also makes the job of naming them a bit easier. We get to avoid attempting to name aFunctionThatDoesThisAndThatAndAnotherThingEtc, which then often leads to aCrypticallyOrPoorlyNamedFunctionThatDoesStuff. I much prefer the simpler approach of clearly naming aFunctionThatDoesOneThing.

Once I have a few well-named functions I find myself wishing all my other functions were equally well named. There's something about the allure of clear, communicative code that is both enticing and challenging at the same time. It's hard to beat the feeling that accompanies pulling out 20 lines of code from some init or processEvent function and giving it an identity. For example, consider the function MyProgram_Init, which has, down in its bowels, intermingled with code doing all sorts of things, a section of code that sets Bluetooth Smart advertising data. That responsibility isn't clear unless you tediously comb through the code. Extract it into its own well-named function, though, and you get something we can both form some ideas about without even having to dive into its internals.

Not to mention, once there are well-named functions, the functions that call them become more self-documenting, and this applies recursively as you go up the function call stack. In four days, four weeks, or four months you may find yourself having to revisit this code, and the reduced cognitive overhead may allow you to ramp back up more quickly. Or, if it's not you, it may help the next person who touches the code grok it with much less time and effort.

There are two sides to this coin: readability and stack size. Embedded devices have limited amounts of space and stack sizes are typically fixed. Overflowing the stack will not have a good outcome.
Every time a function is called all of those automatic variables – temporary local variables, temporary results of expressions, processor state during interrupts, processor registers that need to be restored, etc. – are put on the stack. So stack usage is directly related to your function call-tree. There are ways around stack size issues such as the compiler optimization for inlining functions, but there are usually some constraints around when and what can be inlined. Inlining functions also increases code size which may or may not be an issue based on the amount of storage space you’re working with. So far I haven’t had any issues running out of stack so I’m going to continue organizing my code primarily for the reader (rather than the compiler/linker) while keeping in mind the very real constraints that exist. Compilers are much better at mangling code than I am anyways, despite my best efforts. In the end, I want my future self to be proud of the code my present self is writing even though he will indubitably know that he can always do better than his past self. Now that’s something to share with your developer therapist.
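As a concrete sketch of the kind of extraction described earlier -- all names and data here are hypothetical, not from any real codebase -- the Bluetooth advertising setup might be pulled out of an init function like this:

```c
/* Sketch only: hypothetical names and data, illustrating function extraction. */
#include <stdint.h>
#include <string.h>

#define ADV_DATA_LEN 3

uint8_t adv_data[ADV_DATA_LEN];

/* Before the refactor, these lines sat anonymously inside my_program_init,
 * intermingled with clock, GPIO, and radio setup. */
void set_ble_advertising_data(void)
{
    /* BLE "Flags" AD structure: length, type 0x01, value */
    const uint8_t flags[ADV_DATA_LEN] = { 0x02, 0x01, 0x06 };
    memcpy(adv_data, flags, sizeof flags);
}

void my_program_init(void)
{
    /* ...other setup elided... */
    set_ble_advertising_data(); /* the intent is now visible at the call site */
}
```

The call site now documents itself, and the helper is one more (shallow) frame on the call tree -- the readability/stack trade-off discussed above.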
import os

from mikecore.DfsuFile import DfsuFile, DfsuFileType  # type: ignore

from ._dfsu import Dfsu2DH
from ._layered import Dfsu2DV, Dfsu3D
from ._spectral import DfsuSpectral


class Dfsu:
    def __new__(cls, filename, *args, **kwargs):
        filename = str(filename)
        dfsu_type, dfs = cls._get_DfsuFileType_n_Obj(filename)
        if cls._type_is_spectral(dfsu_type):
            return DfsuSpectral(filename, dfs, *args, **kwargs)
        elif cls._type_is_2d_horizontal(dfsu_type):
            return Dfsu2DH(filename, dfs, *args, **kwargs)
        elif cls._type_is_2d_vertical(dfsu_type):
            return Dfsu2DV(filename, dfs, *args, **kwargs)
        elif cls._type_is_3d(dfsu_type):
            return Dfsu3D(filename, dfs, *args, **kwargs)
        else:
            raise ValueError(f"Type {dfsu_type} is unsupported!")

    @staticmethod
    def _get_DfsuFileType_n_Obj(filename: str):
        """Open the file and return its DfsuFileType (None for mesh files)."""
        ext = os.path.splitext(filename)[-1].lower()
        if "dfs" in ext:
            dfs = DfsuFile.Open(filename)
            dfsu_type = DfsuFileType(dfs.DfsuFileType)
            # dfs.Close()
        elif "mesh" in ext:
            dfsu_type = None
            dfs = None
        else:
            raise ValueError(f"{ext} is an unsupported extension")
        return dfsu_type, dfs

    @staticmethod
    def _type_is_2d_horizontal(dfsu_type):
        return dfsu_type in (
            DfsuFileType.Dfsu2D,
            DfsuFileType.DfsuSpectral2D,
            None,
        )

    @staticmethod
    def _type_is_2d_vertical(dfsu_type):
        return dfsu_type in (
            DfsuFileType.DfsuVerticalProfileSigma,
            DfsuFileType.DfsuVerticalProfileSigmaZ,
        )

    @staticmethod
    def _type_is_3d(dfsu_type):
        return dfsu_type in (
            DfsuFileType.Dfsu3DSigma,
            DfsuFileType.Dfsu3DSigmaZ,
        )

    @staticmethod
    def _type_is_spectral(dfsu_type):
        """Type is spectral dfsu (point, line or area spectrum)."""
        return dfsu_type in (
            DfsuFileType.DfsuSpectral0D,
            DfsuFileType.DfsuSpectral1D,
            DfsuFileType.DfsuSpectral2D,
        )
Move include files from internal folders to ~/includes

Title: Move include files from internal folders to ~/includes folder.

Summary: This is the first commit to this PR, which will move all include files from internal folders to the master ~/includes folder. It will be done in the following way: If the include file does not exist in ~/includes, then it will be moved there and all links to it will be changed accordingly. If an include file with the same token already exists in ~/includes, then all references to the internal link will be changed and the internal file will be deleted.

This commit moves all of the files from:
docs/includes
docs/csharp/includes

Issue: Fixes #2179

Suggested Reviewers: @mairaw

Thumbs way UP :+1: for ~/ linking! :heart:

Build has failed with an exception (checked on the OPS portal since this was still processing). Let's try again.

Awesome, it worked! Feel free to proceed with the rest of the changes, @tompratt-AQ!

Great! Thanks for closing and re-opening the PR last night to get it unstuck.

Commit 2 - Moves the include files from these internal folders to ~/includes:
docs/csharp/getting-started/includes
docs/csharp/language-reference/compiler-messages/includes
docs/csharp/language-reference/compiler-options/includes
docs/csharp/language-reference/keywords/includes
docs/csharp/language-reference/preprocessor-directives/includes

Take a look at the merge conflict @tompratt-AQ

Commit 3 - Fixes merge conflict in:
docs/visual-basic/programming-guide/language-features/objects-and-classes/index.md

Moves include files from this internal folder to ~/includes:
docs/csharp/misc/includes

I tried to resolve it here using the Resolve conflicts button, but it won't let me save my changes. So, I made another commit including the conflicting file that should have the right changes.
Commit 4 - Updated file to really fix merge conflict (didn't happen in Commit 3):
docs/visual-basic/programming-guide/language-features/objects-and-classes/index.md

Moved include files from this internal folder to ~/includes:
docs/csharp/programming-guide/classes-and-structs/includes

Commit 5 - Updated file to really, really fix merge conflict (all includes are fixed in this file; they weren't before):
docs/visual-basic/programming-guide/language-features/objects-and-classes/index.md

Moved include files from these internal folders to ~/includes:
docs/csharp/programming-guide/concepts/linq/includes
docs/csharp/programming-guide/concepts/threading/includes
docs/csharp/programming-guide/events/includes

Commit 6 (the final commit for this PR unless there are errors to fix) - Moved include files from these internal folders to ~/includes:
docs/csharp/programming-guide/interop/includes
docs/csharp/programming-guide/types/includes
docs/visual-basic/developing-apps/customizing-extending-my/includes
docs/visual-basic/developing-apps/includes
docs/visual-basic/developing-apps/printing/includes
docs/visual-basic/language-reference/objects/includes
docs/visual-basic/misc/includes
docs/visual-basic/reference/command-line-compiler/includes

@tompratt-AQ should I remove the WIP label? Is this ready for review?

@mairaw Yes, please remove the WIP label. This is ready for review. Because of an earlier merge conflict that I fixed, and other issues, Commit 6 (that I listed above) may not have been included in this PR. Once this PR goes through, I will confirm that and do a separate PR for the ~40 files in that group.

@mairaw When you get a chance, please restart this one. Thanks.

@mairaw This is ready for review. Thanks.

@mairaw I don't have write access so I can't easily resolve this merge conflict. Is it possible for you or someone else to resolve this one file?
My PR was attempting to move it from:
docs/includes/net-standard-table.md --> ~/includes/net-standard-table.md

and update two references to it in two files (docs/core/tutorials/libraries.md and docs/standard/library.md). If that can't be done, then just undo any change to the file. I want to submit a second PR anyway to clean up a few more changes to fix Issue #2179. For some reason (PR too big, build issues) my last commit didn't get pushed, and I can't easily reconcile it on my end.

@mairaw Thanks so much for clearing up the merge conflict for me. This check does seem to be taking a long time, though; I think you started it yesterday around 5pm. I wonder if it is stuck or just slow due to the number of changes?

@tompratt-AQ, the build report shows that it partially succeeded and finished at 6:02 yesterday evening. I'll close and reopen.

Thanks for restarting this, Ron; glad to see it passed. It was reviewed by Maira previously and OK'd for merge.

@tompratt-AQ, I've merged it.
Does DoWhy have a function to validate a causal graph?

I am looking for a function that will take a causal graph (.gml) and a dataframe, and will return a binary output indicating whether the data confirms the supplied graph or not. For example, if the graph says there's a confounder [A]<-[X]->[Y] but the data doesn't show a correlation between A and X, then the result will be false. Is this supported by DoWhy at the moment? (I couldn't find it in any of the docs.) If not, any recommendations on how to implement this in a methodic way? (I want to scan a complete graph, not go manually over each relationship, direct and indirect, and test it.)

Hey @fredthedead, that's a great question. No, DoWhy currently does not support it. But this would be a great addition as a new refutation method. So, given a causal model m, we can refute/test whether the dataset follows the constraints from the causal model. The trick is to find all falsifiable constraints that are entailed by a causal model. One methodic way is to consider all pairs, triplets and so on. Here's a start: enumerate over all pairs (edges) and check for non-zero correlation with a statistical test; enumerate all triplets in the graph, identify each triplet's type (path structure, confounder structure, collider structure, etc.), and then check whether the data follows the expected correlation and independence constraints. Pairs and triplets can be extended to quartets and so on. That said, it is still a heuristic, because we leave out longer paths and nodes connected by more than 1-2 edges. What do you think?

Thanks for your reply @amit-sharma, much appreciated. The approach you describe is what I had in mind and was hoping might already be implemented. I've started looking at the code a bit to estimate how much time it'll take me to add it; I might give it a go next time I have some availability...

@amit-sharma @fredthedead was this implemented?

Not yet unfortunately. Would you like to contribute @mauriciozuardi?
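The first step of that recipe -- enumerate the edges and check each for a non-zero correlation -- can be sketched in a few lines. This is only an illustration of the idea, not a DoWhy API:

```python
import numpy as np
import pandas as pd
from scipy import stats

def refute_edge_correlations(edges, df, alpha=0.05):
    """Return the edges whose endpoints show no significant correlation.

    edges: iterable of (cause, effect) column-name pairs from the graph.
    An empty result means every edge is at least consistent with the data.
    """
    unsupported = []
    for a, b in edges:
        _, p = stats.pearsonr(df[a], df[b])
        if p > alpha:  # cannot reject independence -> edge not backed by data
            unsupported.append((a, b))
    return unsupported
```

Extending this to the triplet checks means conditional independence tests (e.g. partial correlation for the confounder/collider cases), which is essentially what the validate_lmc function discussed later in this thread does.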
@amit-sharma I'm still familiarizing myself with the concepts (of causal analysis) and the code (of the repository), but as soon as I get confident enough I'll give it a try.

Got it. Let me try to start the implementation with something simple, and then maybe you can contribute.

Hi all, this is something that I've been looking for as well, and it seems it would be a commonly used part of the causal analysis process. Any developments on this implementation?

Yes, there is now a causal graph refutation function that checks the independence constraints. This notebook includes an example usage (see Step 4): https://github.com/py-why/dowhy/blob/8fb32a7bf617c1a64a2f8b61ed7a4a50ccaf8d8c/docs/source/example_notebooks/graph_conditional_independence_refuter.ipynb#L239

The documentation needs to be improved to make this easier to find. You can also take a look at: https://github.com/py-why/dowhy/blob/main/dowhy/gcm/validation.py#L24 and see the very recent discussion in https://github.com/py-why/dowhy/discussions/926. We are actually working on a new method there; hopefully we can add this soon.

Quick update regarding the novel refutation method mentioned by @bloebp. In #930 we added a function validate_lmc which tests the implied (conditional) independencies (via the local Markov condition [LMC]) on some data (very similar to what @amit-sharma described). Additionally, we added a function falsify_graph to compare the result of this test to a baseline of random node permutations, to find out whether the graph is significantly better than random. Here you can find an example notebook that highlights the key ideas and features of this function. We would greatly appreciate any feedback or comments! Please let me know if you have any questions!

Much appreciated! @eeulig
#==============================================================================
#  Enemy Stat Variance
#  Version: 1.0
#  Author: modern algebra (rmrk.net)
#  Date: January 4, 2012
#++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#  Description:
#
#    This script allows you to attach a variance to each enemy stat, so that
#   enemy instances aren't all just clones of each other but can have stat
#   differences.
#++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#  Instructions:
#
#    Paste this script into its own slot in the Script Editor, above Main but
#   below Materials.
#
#    Just place the code for each of the stat variances you want into the
#   notes box of the Enemy in the Database. The possible codes are:
#
#      \vary_hp[x]
#      \vary_mp[x]
#      \vary_atk[x]
#      \vary_def[x]
#      \vary_mat[x]
#      \vary_mdf[x]
#      \vary_agi[x]
#      \vary_luk[x]
#
#    Each of the codes will give a variance of x to the stat chosen. A
#   variance of x means that the enemy will have a random number between 0
#   and x added to its base stat. So, if the base stat set in the database is
#   120, and x is set to 30, then that stat will be between 120 and 150 for
#   any instance of that enemy. So, if you are fighting two slimes, then one
#   of them could have that stat be 127 while the other has it at 142, for
#   example.
#
#    If, instead of being added on, you want the variance to be by
#   percentage, then all you need to do is add a percentile sign:
#
#      \vary_hp[x%]
#      \vary_mp[x%]
#      \vary_atk[x%]
#      \vary_def[x%]
#      \vary_mat[x%]
#      \vary_mdf[x%]
#      \vary_agi[x%]
#      \vary_luk[x%]
#
#    If the codes have a percentage to them, then the script will take a
#   random number between 0 and x and add that percentage of the stat to the
#   enemy's stat. So, if an enemy's max HP is 200 and you set \vary_hp[10%],
#   then the script will choose a random number between 0 and 10. In this
#   case, let's say it chooses 6; then 0.06*200 will be added to the enemy's
#   HP, resulting in that enemy having 212 HP.
#
#    Additionally, it should be noted that these are stackable; it would be
#   valid, for instance, for one enemy to have this in its notebox:
#      \vary_hp[10%]\vary_hp[30]
#==============================================================================

$imported = {} unless $imported
$imported[:MA_EnemyStatVariance] = true

#==============================================================================
# ** RPG::Enemy
#++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#  Summary of Changes:
#    new public instance variable - maesv_add_params
#    new method - initialize_maesv_data
#==============================================================================
class RPG::Enemy
  #~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  # * Public Instance Variables
  #~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  attr_accessor :maesv_add_params
  #~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  # * Initialize Enemy Stat Variance Data
  #~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  def initialize_maesv_data
    @maesv_add_params = []
    # Scan note for Variance Codes
    note.scan(/\\VARY[ _](.+?)\[(\d+)(%?)\]/i) {|param_n, value, percent|
      param_id = ["HP", "MP", "ATK", "DEF", "MAT", "MDF", "AGI", "LUK"].index(param_n.upcase)
      @maesv_add_params << [param_id, value.to_i + 1, !percent.empty?] if param_id
    }
  end
end

#==============================================================================
# ** Game_Enemy
#++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#  Summary of Changes:
#    aliased methods - initialize; all_features
#==============================================================================
class Game_Enemy < Game_Battler
  #~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  # * Object Initialization
  #~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  alias maesv_initialze_4gj1 initialize
  def initialize(*args, &block)
    @maesv_percent_params = [] # Initialize Percentile Params array
    maesv_initialze_4gj1(*args, &block)
    # Add to stats according to variance in notes
    enemy.initialize_maesv_data unless enemy.maesv_add_params
    enemy.maesv_add_params.each {|param_id, value, percent_true|
      if percent_true
        # Percentile
        @maesv_percent_params << RPG::BaseItem::Feature.new(FEATURE_PARAM,
          param_id, 1.0 + (rand(value).to_f / 100.0))
      else
        # Add the randomized value to the parameter
        add_param(param_id, rand(value))
      end
    }
    recover_all # Ensure the enemy is at full strength
  end
  #~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  # * All Features
  #~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  alias maesv_allfeatrs_5jk6 all_features
  def all_features(*args, &block)
    result = maesv_allfeatrs_5jk6(*args, &block) # Run Original Method
    result + @maesv_percent_params
  end
end
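For illustration only, the note-tag arithmetic can be sketched outside RGSS3. This is a hypothetical Python stand-in (varied_stat is not part of the script); it mirrors rand(x + 1) producing a roll in 0..x and the flat vs. percentage modes:

```python
import random

def varied_stat(base, x, percent, rng):
    """Apply a \\vary_stat[x] or \\vary_stat[x%] style variance to a base stat."""
    roll = rng.randint(0, x)  # uniform in 0..x inclusive, matching rand(x + 1)
    if percent:
        # \vary_stat[x%]: add roll percent of the base stat
        return base + round(base * roll / 100.0)
    # \vary_stat[x]: add the roll directly
    return base + roll

rng = random.Random(1)
print(varied_stat(120, 30, False, rng))  # somewhere in 120..150
print(varied_stat(200, 10, True, rng))   # somewhere in 200..220
```

With base 200 and a 10% variance, a roll of 6 gives 200 + 0.06 * 200 = 212, matching the worked example in the header comments.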
For cost estimates, read our piece on API Pricing or API business models for ideas. Rate limits are introduced to ensure that no single client consumes too many resources and becomes the noisy neighbor. The appropriate HTTP status code for rate limiting has been argued over about as much as tabs vs spaces, but there is now a clear winner; RFC 6585 defines it as 429, so APIs should be using 429. In the spring, we laid out a vision for our API platform and published a public roadmap. If you access Twitter from a corporation, event or conference, you may be sharing the same network address with many people. Calls to rate_limit_status do not count against the rate limit. To better monitor your org's API usage and limits, you can use these resources: the API Usage section of the System Overview page in Setup. Doing your best to avoid hitting rate limits is a good start, but nothing is perfect, and the API might lower its limits for some reason. We also promised more predictability so that you can confidently build … Limits; Places API: Requests are applied against the total number of Maps APIs Credits you purchased for your Premium Plan. We made a commitment to give you, our developer community, a unified platform with scalable access to Twitter data. Learn how to optimize web service usage or request a rate limit (QPS) increase. Also, updates to the search, groups, twitscoop and 12seconds columns do not count towards the rate limit, since the data does not (directly) come via the … Twitter tried to mend its relationship with developers earlier this year with the launch of a new API platform, which focused on streamlining APIs and the promise of additional tiers of access. The Foursquare API has a limit of 950 Regular API Calls per day and 50 Premium API Calls per day for Sandbox Tier accounts. Best Practices For API Rate Limiting.
API.rate_limit_status returns the remaining number of API requests available to the requesting user before the API limit is reached for the current hour. The pricing for the premium APIs ranges from $149/month to $2,499/month, based on the level of access needed. rate_limit(token = NULL, query = NULL, parse = TRUE) and rate_limits(token = NULL, query = NULL, parse = TRUE) return rate limit information for one or more Twitter tokens, optionally filtered by rtweet function or specific Twitter API path(s). Twitter is placing new restrictions on its API to stop abuse. Effective October 2019, to help ensure service levels, availability and quality, there are entitlement limits to the number of requests users can make each day across model-driven apps in Dynamics 365 (such as Dynamics 365 Sales and Dynamics 365 Customer Service), Power Apps, and Power Automate. The change is a significant decrease from the existing rate of post activity allowed from a single app by default, Twitter said. A breakdown of which endpoints are classified as regular and premium can be found on our endpoints page here. One approach to API rate limiting is to offer a free tier and a premium tier, with different limits for each. One of the most-cited issues to emerge was Twitter's API rate limits and token restrictions. This post will explore approaches to use the REST search API optimally, in order to find as much information as fast as possible and yet remain within the constraints of the API. This is useful for the API provider to apply limits on the developers who have signed up to use their API; however, it does not help, for example, in throttling individual end users of the API. Places API client-side services. Here are the specifics for the premium APIs: Up to 500 requests/period: $149/month; Up to … The new limits will take effect on September 10. There are many things to consider when deciding what to charge for premium API access. Request limits and allocations.
The Tweet limit of 2,400 updates per day is further broken down into semi-hourly intervals. Premium API calls return rich content such as photos, tips, menus, URLs, ratings, etc. The default rate limit is 50 requests per second. Plans for Twitter's new premium APIs start at $149 per month … If authentication credentials are provided, the rate limit status for the authenticating user is returned. token: One or more OAuth tokens. About Twitter search rate limits: in order to control abuse, Twitter limits how often you can search from a single network address. Working with API limits in Dynamics 365 Business Central. For limits that are time-based (like the Direct Messages, Tweets, changes to account email, and API request limits), you'll be able to try again after the time limit has elapsed. Sending data to Twitter (posting), such as posting an update or a direct message, favoriting a tweet, or unfollowing or following a user, does not count towards the limit, and you can continue to do so even when your rate limit has been exceeded.
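The throttling mechanics behind a 429 response can be sketched with a token bucket. This is a generic illustration, not any particular API's implementation: tokens refill at a steady rate, bursts are allowed up to the bucket capacity, and a request with no token available should be answered with 429 Too Many Requests.

```python
import time

class TokenBucket:
    """Simple token-bucket rate limiter: allow `rate` requests per second,
    with bursts up to `capacity`."""
    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.clock = clock
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True   # serve the request
        return False      # caller should respond 429 Too Many Requests

# A fake clock makes the behaviour deterministic for the example.
t = [0.0]
bucket = TokenBucket(rate=1, capacity=2, clock=lambda: t[0])
print([bucket.allow() for _ in range(3)])  # [True, True, False]: burst of 2, then 429
t[0] += 1.0
print(bucket.allow())                      # True: one token refilled after 1 second
```

Real services typically also send Retry-After or X-RateLimit-* headers so clients know when to back off.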
first and last record in one query

I want to select the first and last score from such data:

WITH t AS (
    SELECT 1 user_id, '2019-05-15'::date AS created_at, 4 AS score
    UNION ALL
    SELECT 1 user_id, '2019-05-13'::date AS created_at, 12 AS score
    UNION ALL
    SELECT 2 user_id, '2019-05-15'::date AS created_at, 7 AS score
    UNION ALL
    SELECT 3 user_id, '2019-05-13'::date AS created_at, 6 AS score
), first_score AS (
    SELECT DISTINCT ON (user_id) created_at, score, user_id
    FROM t
    ORDER BY user_id, created_at ASC
), last_score AS (
    SELECT DISTINCT ON (user_id) created_at, score, user_id
    FROM t
    ORDER BY user_id, created_at DESC
)
SELECT md5(u.user_id::varchar) user_id,
       u.created_at signup_date,
       fhs.created_at first_score_created_at,
       fhs.score first_score,
       lhs.created_at last_score_created_at,
       lhs.score last_score
FROM t AS u
INNER JOIN first_score fhs ON fhs.user_id = u.user_id
INNER JOIN last_score lhs ON lhs.user_id = u.user_id;

My question: is there a way to avoid the two subqueries and get first_score and last_score in a single query?

WITH t AS (
    SELECT 1 user_id, '2019-05-15'::date AS created_at, 4 AS score
    UNION ALL
    SELECT 1 user_id, '2019-05-13'::date AS created_at, 12 AS score
    UNION ALL
    SELECT 2 user_id, '2019-05-15'::date AS created_at, 7 AS score
    UNION ALL
    SELECT 3 user_id, '2019-05-13'::date AS created_at, 6 AS score
)
SELECT md5(user_id::varchar) user_id,
       created_at signup_date,
       MIN(created_at) OVER (PARTITION BY user_id) first_score_created_at,
       FIRST_VALUE(score) OVER (PARTITION BY user_id ORDER BY created_at ASC) first_score,
       MAX(created_at) OVER (PARTITION BY user_id) last_score_created_at,
       FIRST_VALUE(score) OVER (PARTITION BY user_id ORDER BY created_at DESC) last_score
FROM t

P.S. You might think that you can use LAST_VALUE(score) .. last_score, reusing the same window as for first_score? Sorry, but it can give a wrong result (seems to be a bug)...

It's no bug; it's to do with the default window frame. Basically, when you specify ORDER BY, the frame ends at the current row (it's slightly more complicated than that, but that will do for this scenario), unless you explicitly specify it to be something else. If you set the frame as ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING to cover the entire partition, you can use LAST_VALUE just fine. See here for more details.

@AndriyM Thanks. In my queries I always set the rows/range frame, so I have not seen this problem... and here I tried to save bytes...
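The frame behaviour described above can be demonstrated with any engine that implements standard window frames. Here is a small sketch using Python's built-in sqlite3 module (window functions require SQLite 3.25+; the table mirrors the sample data from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t(user_id INTEGER, created_at TEXT, score INTEGER);
INSERT INTO t VALUES (1,'2019-05-15',4),(1,'2019-05-13',12),
                     (2,'2019-05-15',7),(3,'2019-05-13',6);
""")

# Default frame: with ORDER BY it ends at the current row, so LAST_VALUE
# is just the current row's score.
default_frame = conn.execute("""
    SELECT user_id,
           LAST_VALUE(score) OVER (PARTITION BY user_id ORDER BY created_at) AS last_score
    FROM t WHERE user_id = 1 ORDER BY created_at
""").fetchall()
# first row sees only itself, so the two rows disagree: [(1, 12), (1, 4)]

# Explicit full frame: LAST_VALUE really is the partition's last score.
full_frame = conn.execute("""
    SELECT user_id,
           LAST_VALUE(score) OVER (PARTITION BY user_id ORDER BY created_at
               ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS last_score
    FROM t WHERE user_id = 1 ORDER BY created_at
""").fetchall()
# both rows agree on the true last score: [(1, 4), (1, 4)]
```

The same frame clause works verbatim in PostgreSQL, which is what the original query targets.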
cv::imread image is not shown in MFC's picture box

I'm building a dialog-based MFC application. When I click the button, I can choose an image from the file explorer; that image is then loaded with cv::imread() and should be shown in the Picture Control. I can load and show an image in the Picture Control with the following code.

void CMFCApplication3Dlg::OnBnClickedButton1()
{
    cv::Mat src = cv::imread("D:/source/repos/Testing_Photos/Large/cavalls.png");
    Display(src);
}

But not with the following code.

void CMFCApplication3Dlg::OnBnClickedButton1()
{
    TCHAR szFilter[] = _T("PNG (*.png)|*.png|JPEG (*.jpg)|*.jpg|Bitmap (*.bmp)|*.bmp||");
    CFileDialog dlg(TRUE, NULL, NULL, OFN_HIDEREADONLY | OFN_OVERWRITEPROMPT, szFilter, AfxGetMainWnd());
    if (dlg.DoModal() == IDC_BUTTON1)
    {
        CString cstrImgPath = dlg.GetPathName();
        CT2CA pszConvertedAnsiString(cstrImgPath);
        std::string strStd(pszConvertedAnsiString);
        cv::Mat src = cv::imread(strStd);
        Display(src);
    }
}

And the following is the "Display" function.
void CMFCApplication3Dlg::Display(cv::Mat& mat)
{
    CStatic* PictureB = (CStatic*)GetDlgItem(IDC_STATIC);
    CWnd* cwn = (CWnd*)GetDlgItem(IDC_STATIC);
    CDC* wcdc = cwn->GetDC();
    HDC whdc = wcdc->GetSafeHdc();
    RECT rec;
    cwn->GetClientRect(&rec);
    cv::Size matSize = cv::Size(rec.right, rec.bottom);
    BITMAPINFO bitmapinfo;
    bitmapinfo.bmiHeader.biBitCount = 24;
    bitmapinfo.bmiHeader.biWidth = mat.cols;
    bitmapinfo.bmiHeader.biHeight = -mat.rows;
    bitmapinfo.bmiHeader.biPlanes = 1;
    bitmapinfo.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
    bitmapinfo.bmiHeader.biCompression = BI_RGB;
    bitmapinfo.bmiHeader.biClrImportant = 0;
    bitmapinfo.bmiHeader.biClrUsed = 0;
    bitmapinfo.bmiHeader.biSizeImage = 0;
    bitmapinfo.bmiHeader.biXPelsPerMeter = 0;
    bitmapinfo.bmiHeader.biYPelsPerMeter = 0;
    StretchDIBits(whdc,
        0, 0, matSize.width, matSize.height,
        0, 0, mat.cols, mat.rows,
        mat.data, &bitmapinfo, DIB_RGB_COLORS, SRCCOPY);
}

I am very new to MFC and I really don't have any idea where I'm wrong. Please help me! Thank you.

In order to debug this, you have to validate each substep.
1) Did the cv::Mat src get loaded correctly? Add instrumentation or view its members in the debugger.
2) Is the BITMAPINFO being initialized correctly?
3) Is the StretchDIBits call being set up correctly? You can find samples on MSDN or SO on how to use that structure with StretchDIBits.
4) Check the return value of StretchDIBits, and possibly GetLastError (if the API actually sets it).
5) Does any other GDI drawing call work with those window values; for example, can you set a background color? You might need the WM_PAINT message.

As a general rule, rendering should always be done in response to receiving a WM_PAINT message. But since you already have a CStatic control, you don't need to render anything. Just construct your image and call CStatic::SetBitmap. Rendering is subsequently handled by the control.

This is not the correct method to paint. The picture will be erased every time there is a paint request.
That's what happens with CFileDialog, which forces another repaint immediately after you paint the image. You have to respond to OnPaint or OnDrawItem, or use CStatic::SetBitmap as noted in the comment. You can do this with the CImage class; there is no need for OpenCV:

CImage img;
if (S_OK == img.Load(L"unicode.jpg"))
{
    CStatic* control = (CStatic*)GetDlgItem(IDC_STATIC);
    control->ModifyStyle(0, SS_BITMAP);
    auto oldbmp = control->SetBitmap(img.Detach());
    if (oldbmp) DeleteObject(oldbmp);
}

Or you can use OpenCV to create the HBITMAP handle. Note that OpenCV does not handle Unicode filenames. CW2A converts Unicode to ANSI character encoding; this fails if one or more code points cannot be represented in the currently active code page. To work around it, we can open the file with CFile or std::ifstream, read it as binary, then decode it with cv::imdecode instead. Example:

// open the file from a Unicode path:
const wchar_t* filename = L"unicode.jpg";
std::ifstream fin(filename, std::ios::binary);
if (!fin.good()) return;

// read from memory
std::vector<char> vec(std::istreambuf_iterator<char>(fin), {});
cv::Mat src = cv::imdecode(vec, cv::IMREAD_COLOR);
if (!src.data) return;

// create the HBITMAP
BITMAPINFOHEADER bi = { sizeof(bi), src.cols, -src.rows, 1, 24 };
CClientDC dc(this);
auto hbmp = CreateDIBitmap(dc, &bi, CBM_INIT, src.data, (BITMAPINFO*)&bi, 0);

// send the HBITMAP to the control
auto control = (CStatic*)GetDlgItem(IDC_STATIC);
auto oldbmp = control->SetBitmap(hbmp);
if (oldbmp) DeleteObject(oldbmp);

SetBitmap transfers ownership of the current bitmap into the control, as well as the previous bitmap out of the control. The implementation doesn't seem to account for this, prematurely deleting the new bitmap and potentially leaking the previous one. A simple fix is to use the following pattern: img.Attach(control->SetBitmap(img.Detach()));. This properly deals with the transfer of ownership in both directions.

@IInspectable yes, my method was wrong; I've corrected it.
img.Attach(control->SetBitmap(img.Detach())) won't work, because upon the first call to SetBitmap, it returns NULL, and CImage::Attach will throw a Debug Assertion. We have to check the old bitmap before deleting it or re-attaching it to CImage.

I wasn't aware that CImage::Attach() would assert when passed a nullptr. After all, it does have a CImage::Detach() member, so not owning any resources seems to be a valid state.

@IInspectable Yes, it looks like a minor design flaw. They should have made the method accept NULL input.
Microsoft Loop is growing in popularity amongst Microsoft 365 users; it's really trying to compete with Notion, a fantastic wiki application tons of people use and love. So what's the difference between Microsoft Loop vs Notion? Let's look at what features make Microsoft Loop stand out and differentiate it from similar applications on the market.

1. Jumpstart

Jumpstart inside Microsoft Loop is a content suggestion feature. To use this feature you will have to enable it in your settings; it's worth using, especially if you need to create a workspace from scratch. When you start a new workspace inside Microsoft Loop, you'll be given a blank canvas. Use Jumpstart to, well, jumpstart your workspace. It does this by using AI to retrieve relevant information, documents and content from your Microsoft 365 space. Give your workspace a title and search keywords to find what you need quickly, then add it to your workspace to get started.

2. Comment Emojis

Inside a document, you and those working on projects can use comment emojis and inline comments to asynchronously collaborate and communicate. There aren't many emojis to choose from (you have a heart, a thumbs up and a smiley face); however, this is a great way to quickly react inline and leave comments and questions if you need to.

3. Ideas

In the top left-hand corner inside Microsoft Loop, you have Recents and Ideas. Ideas is basically a private note-taking section for thoughts, ideas, processes, mock-ups and anything you don't want to share with the team just yet. Once you are ready, you can just drag the idea down into the shared workspace for others to read and collaborate on. Essentially, Ideas is just a holding bay: you can work on ideas there, rather than inside the shared Loop workspace, and share when you're ready.

4. Show/Hide in Tables

A simple and easy-to-use feature inside Microsoft Loop is the show/hide feature for tables inside workspaces and pages.
You may have a table with lots of components that are not relevant to you; if this is the case, click to hide certain columns and only view what you need. This saves time and avoids distractions.

5. Basic Task List

This very basic feature is quite useful for assigning tasks, adding due dates and getting the ball rolling with projects and jobs that need to be done. Task lists can be used as Loop components and shared with others in the workspace. All you need to do is use the slash command (/) and search for task lists. From here a little table will pop up where you can type in tasks and set an assignee and a due date.

6. Share Locations of Components

When you create a component inside Loop, which is essentially a shareable, transferable component within your team, you can view where the component has been shared, where it's being used and where the component has last been. So, for example, you may click on the locations of this component and see it is currently being used inside Outlook by a member of the team. If you have access to this, you can jump in and work there too. Share Locations inside Microsoft Loop is overall a good way to see where a component is being stored and utilised.

7. Status Inline Comments

With this feature, you can add things like labels and other comments in line with pages. This is helpful for giving a page more context and a better overview if someone is quickly glancing through it. For example, if a task is incomplete, you can add an inline comment here to show it is still in progress, or you can add other labels and customise them to fit your needs.

8. Voting Columns in Tables

When creating tables you can add the column type 'Voting'; this allows members of the team to vote on different ideas, projects and whatever you need, really. It's a quick and easy way to get opinions and move forward with a project. It's just a nice little prebuilt plugin for tables to further the experience.

9. Add to a Workspace

Add any page you are currently working on to another workspace inside Microsoft Loop. You can quickly add a duplicate of a page to another workspace by clicking the Share dropdown at the top. This sends the page somewhere else, saving time creating the same content again, or sharing with individuals.

10. @Files

We all know you can use the @ mention to add other people to a page; however, this feature is pretty cool inside Microsoft Loop. You can use the @ to add files from other Microsoft 365 apps into your Microsoft Loop workspace. This enhances your pages further, giving more detail to those in your team, and makes the whole experience more accessible. This feature is great because it doesn't require any additional API; it is already built into all Microsoft 365 apps, making it easy to use documents and other content inside Loop.

Notion: Explained

Personally, we think Microsoft Loop isn't anywhere near the capabilities of Notion just yet. Of course, with Notion you have a completely customisable space, and the possibilities are almost endless with what you can create, and what for. Microsoft Loop, however, is a great-looking tool with high potential, especially for teams and those already working inside the Microsoft 365 workspace.

Want to Get Started With Microsoft Loop?

We have a full tutorial on getting started with Microsoft Loop here. This ultimate Microsoft Loop guide will teach you everything you need to know, and go into more depth about how Loop's features differ from Notion. The guide is suitable for Microsoft professionals and beginners. You're sure to find some helpful information here.
HBase Multithreaded Scan is really slow

I'm using HBase to store some time series data. Using the suggestion in the O'Reilly HBase book, I am using a row key that is the timestamp of the data with a salted prefix. To query this data I am spawning multiple threads, each of which implements a scan over a range of timestamps, with each thread handling a particular prefix. The results are then placed into a concurrent hashmap.

Trouble occurs when the threads attempt to perform their scan. A query that normally takes approximately 5600 ms when done serially takes between 40000 and 80000 ms when 6 threads are spawned (corresponding to 6 salts/region servers). I've tried to use HTablePools to get around what I thought was an issue with HTable not being thread-safe, but this did not result in any better performance. In particular, I am noticing a significant slowdown when I hit this portion of my code:

for (Result res : rowScanner) {
    // add Result to HashMap

Through logging I noticed that every time through the conditional of the loop I experienced delays of many seconds. These delays do not occur if I force the threads to execute serially. I assume that there is some kind of issue with resource locking, but I just can't see it.

I also noticed that I'm using OpenJDK and not the Oracle JDK that came with my HBase software package (Cloudera). I'm not sure if this may be an issue, but I heard there might be some problem with threading behavior between different JDKs.

I am new to HBase myself, but what are the Xms/Xmx settings for the HBase server's JVM? It could be that HBase is forced to run too many GC cycles, which slow things down.

I realized my HBase setup was only storing this data in one region server. It apparently had a block size too large for the amount of data I had in HBase. I changed the block size and put more data in, just to make sure there were multiple region servers and plenty of data.
This resulted in threads returning faster than the previous sequential runs, but it wasn't by some factor of the number of threads. It returned, with 6 threads running, in around 3400 ms. Thank you all for your help. These were all great suggestions and I will continue to utilize them in order to increase my performance.

Make sure that you are setting the BatchSize and Caching on your Scan objects (the object that you use to create the Scanner). These control how many rows are transferred over the network at once, and how many are kept in memory for fast retrieval on the RegionServer itself. By default they are both way too low to be efficient. BatchSize in particular will dramatically increase your performance.

EDIT: Based on the comments, it sounds like you might be swapping either on the server or on the client, or that the RegionServer may not have enough space in the BlockCache to satisfy your scanners. How much heap have you given to the RegionServer? Have you checked to see whether it is swapping? See How to find out which processes are swapping in Linux?. Also, you may want to reduce the number of parallel scans and make each scanner read more rows. I have found that on my cluster, parallel scanning gives me almost no improvement over serial scanning, because I am network-bound. If you are maxing out your network, parallel scanning will actually make things worse.

I attempted to use the BatchSize and it resulted in actually losing some of the data I was querying for. I noticed very minimal changes when changing the Caching on the Scan objects. I'll try them again and see if there was just something I was missing. Just tested again with a changed BatchSize and I didn't lose any data, but it took about 99k ms. Not exactly a performance increase.

Don't set them too high; 100 or so should do. If you set them too high, your scanners may time out on the RegionServer while your client is processing the current batch.
I set the BatchSize to 1000 and Caching to 200, and those were the results I got. I'm testing again with BatchSize set to 100 or so to see if that helps any.

cache @ 200 with batch disabled = 79k ms
cache @ 200 with batch @ 1000 = 99.9k ms
cache @ 200 with batch @ 100 = 82.6k ms

This doesn't appear to help; I keep getting apparently random return times. I commented out the BatchSize and ran with those same cache values and others to see if there's a pattern (like increasing the cache decreasing my time), but nothing was remotely the same the 2nd, 3rd, or even 4th time through.

Have you considered using MapReduce, with perhaps just a mapper, to easily split your scan across the region servers? It's easier than worrying about threading and synchronization in the HBase client libs. The Result class is not thread-safe. TableMapReduceUtil makes it easy to set up jobs.
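The fan-out pattern from the question (one thread per salt prefix scanning a timestamp range, results merged into a shared map) can be sketched without HBase at all. This hypothetical Python stand-in replaces the HBase client with an in-memory dict of salted row keys; scan_prefix emulates a range scan over one prefix:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for the salted-rowkey layout: "<salt>-<zero-padded timestamp>" -> value.
# (No HBase here; this only sketches the per-prefix fan-out from the question.)
table = {f"{salt}-{ts:010d}": salt * 100 + ts
         for salt in range(6) for ts in range(10)}

def scan_prefix(salt, start_ts, stop_ts):
    """Emulate a rowkey range scan [start_ts, stop_ts) within one salt prefix."""
    lo, hi = f"{salt}-{start_ts:010d}", f"{salt}-{stop_ts:010d}"
    return {k: v for k, v in table.items() if lo <= k < hi}

# One worker per salt; merge partial results in the main thread, so no
# shared mutable state is touched concurrently.
results = {}
with ThreadPoolExecutor(max_workers=6) as pool:
    for part in pool.map(lambda s: scan_prefix(s, 2, 5), range(6)):
        results.update(part)

print(len(results))  # 6 salts x 3 timestamps = 18 rows
```

Merging in the main thread is a deliberate choice: it sidesteps the thread-safety concerns raised in the thread (HTable, Result) by keeping each worker's output isolated until it is collected.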
extern crate quickcheck;
#[allow(unused_imports)]
#[macro_use]
extern crate quickcheck_macros;

#[cfg(any(feature = "xxh32", feature = "xxh64", feature = "xxh3"))]
mod tests {
    use quickcheck::TestResult;
    use std::hash::Hasher;
    use std::num::{NonZeroU8, NonZeroUsize};
    use xxhash_c_sys as sys;

    // In practice 2048 bytes of data should cover all cases for the streaming hashers.
    // So we use a limit 10 times that to cover more chunking variations.
    const MAX_STREAM_SIZE: usize = 2048 * 10;

    #[cfg(feature = "xxh3")]
    #[quickcheck]
    fn xxh3_chunked_matches_buffered(
        chunk_size: NonZeroUsize,
        xs: Vec<u8>,
        times: NonZeroU8,
        additional: u8,
    ) -> TestResult {
        // additional argument doubles down as the hasher seed
        let seed = additional as u64;
        // the vecs produced by quickcheck are perhaps a bit small by default.
        // additional should add some noise to avoid only getting nice even lengths.
        let target_size = (xs.len() * times.get() as usize + additional as usize) % MAX_STREAM_SIZE;
        let xs = xs.into_iter().cycle().take(target_size).collect::<Vec<_>>();

        // write all at once
        let mut h0 = xxhash_rust::xxh3::Xxh3::with_seed(seed);
        h0.write(&xs);
        let h0 = h0.finish();

        // write in chunks
        let mut h1 = xxhash_rust::xxh3::Xxh3::with_seed(seed);
        for chunk in xs.chunks(chunk_size.get()) {
            h1.write(chunk);
        }
        let h1 = h1.finish();

        let one_shot_result = xxhash_rust::xxh3::xxh3_64_with_seed(&xs, seed);
        let sys_result = unsafe { sys::XXH3_64bits_withSeed(xs.as_ptr() as _, xs.len(), seed) };

        // compare all against reference
        assert_eq!(h0, sys_result);
        assert_eq!(h1, sys_result);
        assert_eq!(one_shot_result, sys_result);
        TestResult::passed()
    }

    #[cfg(feature = "xxh64")]
    #[quickcheck]
    fn xxh64_chunked_matches_buffered(
        chunk_size: NonZeroUsize,
        xs: Vec<u8>,
        times: NonZeroU8,
        additional: u8,
    ) -> TestResult {
        // additional argument doubles down as the hasher seed
        let seed = additional as u64;
        // the vecs produced by quickcheck are perhaps a bit small by default.
        // additional should add some noise to avoid only getting nice even lengths.
        let target_size = (xs.len() * times.get() as usize + additional as usize) % MAX_STREAM_SIZE;
        let xs = xs.into_iter().cycle().take(target_size).collect::<Vec<_>>();

        // write all at once
        let mut h0 = xxhash_rust::xxh64::Xxh64::new(seed);
        h0.write(&xs);
        let h0 = h0.finish();

        // write in chunks
        let mut h1 = xxhash_rust::xxh64::Xxh64::new(seed);
        for chunk in xs.chunks(chunk_size.get()) {
            h1.write(chunk);
        }
        let h1 = h1.finish();

        let one_shot_result = xxhash_rust::xxh64::xxh64(&xs, seed);
        let sys_result = unsafe { sys::XXH64(xs.as_ptr() as _, xs.len(), seed) };

        // compare all against reference
        assert_eq!(h0, sys_result);
        assert_eq!(h1, sys_result);
        assert_eq!(one_shot_result, sys_result);
        TestResult::passed()
    }

    #[cfg(feature = "xxh32")]
    #[quickcheck]
    fn xxh32_chunked_matches_buffered(
        chunk_size: NonZeroUsize,
        xs: Vec<u8>,
        times: NonZeroU8,
        additional: u8,
    ) -> TestResult {
        // additional argument doubles down as the hasher seed
        let seed = additional as u32;
        // the vecs produced by quickcheck are perhaps a bit small by default.
        // additional should add some noise to avoid only getting nice even lengths.
        let target_size = (xs.len() * times.get() as usize + additional as usize) % MAX_STREAM_SIZE;
        let xs = xs.into_iter().cycle().take(target_size).collect::<Vec<_>>();

        // write all at once
        let mut h0 = xxhash_rust::xxh32::Xxh32::new(seed);
        h0.update(&xs);
        let h0 = h0.digest();

        // write in chunks
        let mut h1 = xxhash_rust::xxh32::Xxh32::new(seed);
        for chunk in xs.chunks(chunk_size.get()) {
            h1.update(chunk);
        }
        let h1 = h1.digest();

        let one_shot_result = xxhash_rust::xxh32::xxh32(&xs, seed);
        let sys_result = unsafe { sys::XXH32(xs.as_ptr() as _, xs.len(), seed) };

        // compare all against reference
        assert_eq!(h0, sys_result);
        assert_eq!(h1, sys_result);
        assert_eq!(one_shot_result, sys_result);
        TestResult::passed()
    }
}
Civ 3 disk worked perfectly 3 months ago. I tried to load it last week and got a 7" x 3" Civ 3 screen with "Reinstall" and "Uninstall" buttons (plus several others such as "Register" and "Exit") but no "Install" button! When I click either Uninstall or Reinstall, they both say "Do you want to delete all components of the game?" I go to My Computer and click Autorun (for Civ 3) and it says "Will not start because file binkw32.dll is missing. Installation of this file may solve the problem." I already downloaded this DLL file from DriverGuide.com, installed it, and got the same problems all over again, so I finally deleted it. The screwy thing is, this DLL file is on the Civ 3 CD (I found it during a My Computer examination of all the files on the Civ CD)! How do I install a DLL file that is already on my disk? I think I am missing something! (Really.) So then I went to CompUSA and bought the latest "Civ 3 Game of the Year Edition", tried to install it, and got the exact same 3x7 screen and same error message as before. The guy at CompUSA had looked at my original disk and said it looked corrupted from scratches. Could it be that when I installed the scratched disk, it installed a "partial file" and completely screwed up my hard drive (only as far as running this game)? When I "think" I delete all files of Civ 3, a file still shows up in Control Panel and I cannot delete it. This keeps happening after hundreds of delete tries. I have already done all diagnostics on XP, DirectX, the sound card, etc.! I have done hundreds of searches and cannot find this 1.04 MB file that Control Panel says is part of my Program Files. Is this file causing my problems? Please HELP!
Bruce

Message Edited by boardman on 02-28-2003 12:31 AM

If you're trying to uninstall it from Control Panel and it's giving you that error, it's probably something with your registry (I'm guessing), but I can suggest a couple of things to do with corrupt files: 1st, is Civ still listed in your Explorer? If so, delete it. 2nd, do a search for anything relating to the game; if something comes up, you should be able to delete it without a problem. 3rd, try running a repair on Windows XP. You can do this by setting your system to boot from CD-ROM and putting in your Windows XP CD. Once it boots to the CD, let it load its files, then hit Enter -> F8 -> and then choose Repair; it takes about a half hour to an hour to finish. What it does is go through and replace any corrupt files that are associated with XP. Hopefully that would fix your problem.

I have the same problem. I tried re-installing my XP, but I got the same error message. What did you do to fix this problem?
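One hedged suggestion for the leftover Control Panel entry: stale Add/Remove Programs entries live under the Uninstall key in the registry, and a .reg file can delete one. The subkey name below is only a guess; locate the actual Civilization III entry in regedit first, and back up the key before deleting anything:

```
Windows Registry Editor Version 5.00

; The subkey name below is hypothetical -- find the real Civilization III
; entry under ...\Uninstall in regedit before deleting anything.
[-HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\Civilization III]
```

The leading minus sign inside the brackets is the .reg syntax for deleting a key rather than creating it.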
Screen Capture Tool for Dual Monitors I have been using MWSnap for a long time. It is generally considered one of the best free screen capture tools. There are a surprising number of comments like the one below at the MWSnap forum. Thu Jan 15, 2004, 5:51 pm: MWSNAP is GREAT! Thank you for offering this software for free. It is extremenly useful in many ways. It is easy to use and does a great job capturing what I need. Before this I used to work between MS Paint and Windows Print Screen. This saves me alot of time and effort! I would love to make a donation to show my gratitude. I would not have needed to look at any other options except for one limitation of MWSnap: it does not work with dual monitors. (The link to the forum may not take you directly to the referenced post. On my last visit, the forum was having some minor problems.) I use Visual Studio 2005 almost all day. You have probably heard about all the bugs in VS 2005. One of the bugs apparently shows up only in certain dual monitor configurations - like mine. I found a workaround for it by running most of my VS 2005 windows on monitor two. Unfortunately, MWSnap cannot access monitor two, so capturing images related to my work got to be really frustrating. I could not find any other good freeware alternatives to MWSnap that would support dual monitors. (See UPDATE below.) So I asked around and SnagIt seems to be the most highly regarded commercial product. It also earned a good review at PC Magazine. I'll give you the conclusion right now: I am probably going to purchase SnagIt. (UPDATE: I didn't. I went with WinSnap.) It looks to be very feature rich. It is certainly easy to use. However, I want to describe the full story because my attempt at using SnagIt was somewhat problematic and the SnagIt support center doesn't address this issue. Maybe my solution will help you avoid or solve this problem if you run into it. The steps below show the problem I encountered as well as how I solved it.
First, I downloaded SnagIt. (Your email address is not required, even though it looks like it is.) Then I installed it, and here is what I saw when I tried to start SnagIt. (Note that if you choose to launch SnagIt as part of the installation, you will not see this error until the next time you try to launch SnagIt.) The dialog goes away only after you click "OK" about a dozen times. And, of course, SnagIt will not load. As you can probably tell from the quality of this image, SnagIt was causing a lot of problems. Not only could I not use it to capture this dialog, it was causing my other tools to malfunction as well. The dialog says something like this: The instruction at "0x00000000" referenced memory at "0x00000000". The memory could not be "written". I attempted to repair my installation. I also uninstalled and reinstalled SnagIt. Don't waste your time with those steps if you are having this problem; they won't help. This problem is related to Data Execution Prevention (DEP), which Microsoft introduced in Windows XP SP2. I don't actually run XP. I run Windows Server 2003 as my workstation OS, but it has the same DEP features. I found the exact problem by debugging in Visual Studio. Here is the short answer to how to fix this problem. Change the DEP settings by this sequence of steps: Start > Control Panel > System Properties, click the Advanced tab > Performance Settings > Data Execution Prevention. Add SnagIt32.exe to the exclusions. Here are the steps in a bit more detail. From the Start Menu, go to the Control Panel and select System Properties. You will see this dialog box. Look in the Performance group and click the Settings button. In the dialog below, click Add. Browse to your SnagIt installation folder and select SnagIt32.exe (assuming you are running a 32-bit version of Windows). The default installation folder is "C:\Program Files\TechSmith\SnagIt 8". Accept/close these dialogs and SnagIt will now start and run correctly.
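For those who prefer not to click through the dialogs: my understanding is that the per-application DEP opt-out set by that System Properties dialog is stored under the AppCompatFlags\Layers registry key on XP SP2 / Server 2003. A sketch of an equivalent .reg file follows (the path assumes the default SnagIt install folder; verify what the dialog actually writes on your own system before importing this):

```
Windows Registry Editor Version 5.00

; Per-app DEP opt-out; backslashes must be doubled in .reg string syntax.
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers]
"C:\\Program Files\\TechSmith\\SnagIt 8\\SnagIt32.exe"="DisableNXShowUI"
```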
I think I'm going to be satisfied with SnagIt, but my experience getting SnagIt to run just underscores how good a product MWSnap really is. I never had a moment's problem with it, and if not for the bugs in Visual Studio 2005, I would probably still be OK with using MWSnap on only one of my two monitors. I never had to troubleshoot MWSnap like I just did with SnagIt. And MWSnap is free! I'll miss it. UPDATE: I almost jumped the gun with my near-decision to use SnagIt. After writing this post, I came across WinSnap. It is free (for personal use) and it works on dual monitors. SnapFiles gives it an excellent review, and after trying it out myself I actually like it more than SnagIt and MWSnap. As I write this I am uninstalling SnagIt and I'm going with WinSnap. WinSnap is the winner. Here is a good resource that lists several freeware screen capture tools. It does not discuss which ones will work with dual monitors. However, I can vouch that WinSnap does work well with dual monitors. I'll quote part of what they say about WinSnap: WinSnap is a screen capture utility that enables you to take screenshots of non-rectangular windows and applications, using a background of your choice as well as regular windows, the desktop, popup menus and more. It can automatically enhance the capture with a smooth drop shadow effect, add a watermark, change the coloring and optionally save as a new file or copy it to the clipboard. Other features include image rotation, advanced auto-saving, image scaling, send by email (MAPI), keyboard shortcuts and more. WinSnap can save images in PNG, GIF, BMP, TIF and JPG format. They give WinSnap 4 out of 5 on the popularity scale, but it has only 1 user review compared to about 16 for MWSnap. That would make me think MWSnap is far more popular. Both are good products, but at this point I am far happier with WinSnap.
When you get a stack trace in Eclipse with SDK classes in the stack, how can you see the SDK sources? When you are developing an Android application with the Eclipse plugin and debugger, and get a stack trace, you will not see any of the SDK source code. What steps do you need to make to fix this? Assume beginner Java programmer. To clarify, I want Eclipse to automatically show me the correct source files and lines when I jump into a stack frame. I assume I would need to find the correct SDK sources, put them on my local system and then tell Eclipse how to find and use them. The question is, how exactly do I do these steps. Thanks to lukehutch on #android IRC channel I got pointed to a blog post that describes how to fix the problem. The reason why this is even a problem is because Google did not include the sources with the SDK. There is a bug to get this fixed. The workaround, as described in more detail in the blog post, is to get the sources with git (I specified release-1.0 branch for repo command which I hope corresponds with SDK 1.0-r2), collect all the java source files and put them in the correct directory structure under sources/ directory (which goes right next to your android.jar from the SDK), and refresh the jar in Eclipse at which point you can browse the SDK class sources. Finally, run your application in the debugger until you get a stack trace from an SDK class, and you will see a button to configure your sources: add the sources directory you created. The blog link above has a small Python script that can collect all the java files and create the correct directory structure out of them. If you look at your launch configuration in Eclipse (Debug->Run as...), you will see a tab called "sources" If you choose "add" and then supply an archive or file system directory with the relevant sources, the debugger is supposed to allow you to trace into them. 
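The collection step that the blog post's Python script automates can be sketched roughly as follows. This is a hypothetical reimplementation, not the blog's actual script: it walks a tree of .java files, reads each file's package declaration, and copies the file into the matching subdirectory of the destination sources/ tree:

```python
import os
import re
import shutil

PACKAGE_RE = re.compile(r"^\s*package\s+([\w.]+)\s*;", re.MULTILINE)

def collect_sources(src_root, dest_root):
    """Copy every .java file under src_root into dest_root, placed in a
    directory tree derived from its `package` declaration."""
    for dirpath, _dirnames, filenames in os.walk(src_root):
        for name in filenames:
            if not name.endswith(".java"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="replace") as f:
                match = PACKAGE_RE.search(f.read())
            # Files without a package declaration land in the destination root.
            subdir = match.group(1).replace(".", os.sep) if match else ""
            target_dir = os.path.join(dest_root, subdir)
            os.makedirs(target_dir, exist_ok=True)
            shutil.copy(path, os.path.join(target_dir, name))
```

Running `collect_sources("/path/to/git/checkout", "/path/to/sdk/sources")` would then produce the `sources/` layout that sits next to android.jar, assuming those example paths match your setup.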
You can get the sources of the SDK from the Android site, just make sure your Jar and your sources version are the same.

There is no sources tab for Android configurations. The available tabs are: Android, Target and Common. None of them have any place to set sources. (Regular Java configurations do seem to have a sources tab.)

Some nice people have now put up the source zips (up to 2.3.3) in a Google Code project, so just do this:

cd path\to\android-sdk\platforms
svn checkout http://fanfq-android-demo.googlecode.com/svn/trunk/android-sdk-src/ .

Then, go into Eclipse, and:
1. Click on an SDK class and hit F3
2. Click the "Attach Source" button > External File
3. Select path\to\android-sdk\platforms\android-x.x.x-y-src.zip

You should now have source visible when hitting F3 on any Android class, and in the debugger etc.
Uncheck the box next to "Allow connections only from computers running Remote Desktop with Network Level Authentication". This will allow insecure connections without NLA (network-level authentication) and you will no longer be prompted with failed connections to a Windows machine due to the CredSSP ... a) A Windows 7 machine hosting Remote Desktop: a client Windows 7 PC had no problem connecting to it, but the same user connecting from a Windows 10 machine failed. This setting defines how to build an RDP session by using CredSSP, and whether an insecure RDP is allowed. To restore the remote desktop connection, you can uninstall the specified security update on the remote computer (but it is not recommended and you should not do this; there is a more secure and correct solution). To fix the connection problem, you need to temporarily disable the CredSSP version check on the computer from which you are connecting via RDP... It provides extra security and helps you, as a network administrator, control who can log into which system by just checking one single box. The Microsoft security patch issued on Tuesday, May 8th, triggered the problem by setting and requiring remote connections at the highest level (CredSSP updates for CVE ...). The default configuration of Windows 7, 2008, and 2012 allows remote users to connect over the network and initiate a full RDP session without providing any credentials. For assistance, contact your system administrator or technical support. This issue occurs when Network Level Authentication (NLA) is required for RDP connections, and the user is not a member of the Remote Desktop ... From carrying out some research into this, it seems rdesktop does support CredSSP + Kerberos, which is a subset of NLA ... I have no idea why the local GP setting had been disabled, but now I've updated the local GP setting to 'Vulnerable', it's letting ... Network Level Authentication is good.
When we enter the machine name in the MSTSC client and click connect, it will send a request to the server that the client is looking to connect. Caused by a Microsoft security patch. Failed to connect, CredSSP required by server. Once the user enters their creds, NLA kicks in. CredSSP stands for Credential Security Support Provider protocol and is an authentication provider that processes authentication requests for other applications. In vulnerable versions of CredSSP there is a problem, identified recently, that allows remote code execution: an attacker who exploits this ... 1. The client has the CredSSP update installed, and Encryption Oracle Remediation is set to Mitigated. This client will not RDP to a server that does not have the CredSSP update installed. NLA is the first stage of the CredSSP protocol, which is how those creds you typed in make it to the target server securely. Solution 1: Disabling NLA using Properties. Chances are you may have arrived here after a vulnerability scan returns a finding called "Terminal Services Doesn't Use Network Level Authentication (NLA)". Examples. NLA works by first opening an ... But the session will be exposed to the attack. So after applying rule 1 of system administration (turn it off & back on again), always try rule 2 (apply updates). Remote Desktop Connection: The system administrator has restricted the type of logon (network or interactive) that you may use. A CredSSP authentication to TERMSRV/fs-elucid-db failed to negotiate a common protocol version. 2. The server has the CredSSP update installed, and Encryption Oracle Remediation is set to Force updated clients. The server will block any RDP connection from clients that do not have the CredSSP ... If you choose this, make sure that your RDP client has been updated and the target is domain authenticated. Ok, so that attempt failed as CredSSP is required by the target server. The remote host offered version 4, which is not permitted by Encryption Oracle Remediation.
Perhaps some other magic occurred when installing updates on the server, but the authentication issue using remote desktop has gone (at least from the one client computer I tried). Basically, when we attempt an RDP connection to the server, the newer OSs (Windows 7/8) implement client-side authentication (this is separate from NLA). ... b) If the client is not patched while the server is updated, RDP can still work.
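For reference, the client-side workaround described above (temporarily setting Encryption Oracle Remediation to 'Vulnerable') corresponds to the registry value Microsoft documented for the CVE-2018-0886 CredSSP updates; a .reg fragment like the one below sets it on the connecting machine. Treat this as a temporary measure only, and patch the server as the real fix:

```
Windows Registry Editor Version 5.00

; 2 = Vulnerable, 1 = Mitigated, 0 = Force updated clients
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\CredSSP\Parameters]
"AllowEncryptionOracle"=dword:00000002
```

The same value is what the "Encryption Oracle Remediation" group policy writes, so changing either one affects the other.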
I've recently made an all-flash vSAN cluster with the following components: I've created 4 VLANs on the switch: ESXi version is 6.7 U3 HPE branded, and I've run the SSP from HP 2020.3, so the latest drivers are in use. "Skyline health" is all green, and everything seems perfect, but, despite the read performance being perfect, I notice a huge write latency that affects any kind of operation. Attached you can find the output of the health check on the first node. Any help would be greatly appreciated.

Moderator: Please do not post multiple threads on the same topic. Your duplicate posted today has been archived. No problem - as your post in the Italian area was in English anyway, that's the thread I archived.

First off, just so that you (and anyone else reading this) are aware - vSAN Health is not intended to identify every single possible issue or misconfiguration with a host/cluster. Fair enough, it covers probably 10x the things it did at its inception, but it still doesn't or can't look at everything (for various reasons). Why I note this is that both models of SSDs you are using (SAMSUNG MZ7LN512 (some variant of a PM871b) and SanDisk SD6SB2M5) appear to be consumer-grade devices and obviously not on the vSAN HCL - if this is just a homelab this is okay (provided you don't care about the safety of the data or performance), but if this is a Production cluster (or anything you or your organisation/customer care about) you need to replace these with supported enterprise-grade devices before even considering putting any data on this cluster. "I notice a huge write latency that affects any kind of operation." Can you please elaborate on what the actual impact here is, aside from seeing (an admittedly alarming) high latency in the graph?
I ask as there is literally NO workload on the host/cluster in your screenshot - it is bouncing between 0 and 1 IOPS, and this can break graphs (not just in vSAN or VMware products but in computing in general) due to the fact that the math backing these is (generally) going to be designed around 1. positive whole integers (e.g. if your average over X time is 0.5 IOPS you may see weird/wrong results) and 2. an actual workload across a reasonable number of sampling intervals. While you should definitely be replacing the disks here (unless this is a homelab), to confirm whether this is a case of the graphs getting broken by the low/no IO, can you run HCIBench EasyRun on this cluster, post the results and share a screenshot of the performance graphs while this is running? This should always be the next step after setting up any new cluster anyway.

First, thanks a lot for your interest. This is a demo environment, so we don't need data protection or maximum performance, but, even using consumer-grade SSDs, we do expect good performance, even because we are using a good equipment in general... I'm saying that there is a huge write latency because I've done several tests, such as VM creation, copy/paste inside the VM, etc., and every time I didn't receive more than 10-15MB/s of write speed. I've attached the HCIBench output to this answer. Thanks a lot.

So, in no way is my aim here to deride you or your cluster, but this is likely the worst EasyRun output I have ever seen (and I have seen a LOT of them) - it is abundantly clear that there is either something deeply wrong/broken here or else these drives are just designed for reads and not up to the task - in the HCIBench output there is constant and shockingly high latency when doing any size of writes (even 4k ones, which vSAN is kind of optimised for).
You should be able to get more insight into this (and other things, like ruling out whether this is solely a storage issue or has a network aspect also) by looking at the host-level vSAN performance stats for the period while HCIBench was running that show the per-disk and per-Disk-Group stats - if you suspect that one device/Disk-Group or the network is slowing down the whole show, then check whether you have the same poor performance when VMs/HCIBench are configured for FTT=0 and the VM/test-VMs are running on the same node as their Objects reside. "we do expect good performance, even because we are using a good equipment in general..." If you mean to say that one should expect good storage performance when everything (except the storage devices) is of okay quality, this is the IT equivalent of getting a Ferrari, taking the alloys and Pirellis off it, replacing them with old wooden cart-wheels and then wondering why they start falling apart when it goes faster than 20 km/h. Kind of beside the point (as it is clear the drives cannot handle write workloads), but file-copy is not by any means a good test for testing storage on vSAN or otherwise - it is especially unsuited to vSAN as it is writing to a single vmdk (which at low sizes typically will only have 2 components on 2 Capacity-tier devices, and thus is only using a fraction of the cluster's storage) over a single vscsi handle.

Thanks for your frank explanation. I'm a bit frustrated about a product (vSAN) that doesn't give clear insight and hints on where to look for issues, because even you admitted that: vSAN Health is not intended to identify every single possible issue or misconfiguration with a host/cluster. Is it possible that a solution like this doesn't have a "config check" that helps, in a not-too-hard way, to identify bottlenecks or misconfigurations? Or if you stray even a little from the "certified" hardware, you go to hell?
Then, about our equipment: it isn't a "Ferrari", but I think that having more than 60ms of write latency with a bit more than 170 IOPS is not reasonable for an all-flash disk group - even if those are consumer-class SSDs. Saying that, I've run HCIBench again using just two SSDs as a disk group on the same vSAN node with FTT=0 and the results were pretty much the same. Now I probably have the ability to exchange those disks for another project, and I can choose to replace them with: Do you have any hints on that?

"Is it possible that a solution like this doesn't have a "config check" that helps, in a not-too-hard way, to identify bottlenecks or misconfigurations?" Yes, there clearly is, via the simple Day-1 step of running HCIBench and taking an even cursory 10-minute peruse of the vSAN performance data, which have shown you here what the problems are. This is also why VMware advises vSAN ReadyNodes, so that one can know what performance to expect and whether it fits their workload before buying it - if you choose to build your own vSAN nodes with components of your choosing, then planning and assessing this is your responsibility. Having any form of health-check for disks being on the HCL is not by any means a trivial task, at least probably not without it having an annoyingly large number of false positives - this is due to the fact that there are literally thousands of certified devices, many of which get shipped with different part numbers (which ESXi can't see) or rebranded OEM IDs, get seen by ESXi differently depending on 3rd-party storage tools, and may not even have the device model exposed to ESXi/vSAN if in RAID0 mode: "Or if you stray even a little from the "certified" hardware, you go to hell?"
It is in no uncertain terms noted in multiple Day-0/pre-purchase vSAN guides that the HCL should be adhered to if you want reliable results: 'All capacity devices, drivers, and firmware versions in your Virtual SAN configuration must be certified and listed in the Virtual SAN section of the VMware Compatibility Guide.' 'Only use hardware that is found in the VMware Compatibility Guide (VCG). The use of hardware not listed in the VCG can lead to undesirable results.' The reason for this statement is not arbitrary; it is based on the fact that anything that makes it onto the vSAN HCL has been rigorously tested (for performance, stability and reliability), first by the hardware vendor (who decide if they want to test it for vSAN at all and proceed only if they are happy with it), then by VMware engineering testing teams who either reject it or certify it for specific usages (e.g. this is why not all SSDs/NVMe are rated for All-Flash cache-tier). "is not reasonable for an all-flash disk group - even if those are consumer-class SSDs." All SSDs are not equal and/or not equally good at different tasks (e.g. yours here look to be good at reads only, judging from the HCIBench output); the justification that 'but it's All-Flash' doesn't really hold water as (even when just considering devices on the vSAN HCL) the difference between the lowest-capability and highest-capability All-Flash Disk-Group/cluster is massive - going even further, I have seen Hybrid Disk-Groups/clusters that have far outperformed lower-end All-Flash ones. SSDSC2BX200G4R (S3610 variant) are on the vSAN HCL but, as I mentioned regarding 'specific usages', these are on the low end (SATA, mixed-use, MLC) and thus are only certified as suitable for All-Flash capacity-tier (e.g. they are not even rated as suitable for Hybrid cache-tier). SSDSC2BA200G3P looks to be an HPE rebrand variant of the S3700 (??)
I'm having trouble finding any decent information relating to this (going back to my point about how one would possibly automate this reliably) - are you sure they are not 'SSDSC2BA200G3'? If they are, then yes, these are much better devices than the above.

In the end we've managed to replace the old SSDs with the new ones I mentioned (Dell SSDSC2BX200G4R), and as you can see from the attached bench, the situation has totally changed. The performance is simply awesome. Thanks a lot for your help in identifying the issue, even if I gave you a big help by using "garbage-class" SSDs.
Smart locks, smart thermostats, smart cars -- we know what locks, thermostats, and cars are, but what makes one smart? In an ever-connected world, a transformation is underway that aims to connect the whole ecosystem of physical objects that make up our everyday world, by designing and integrating them with wireless connectivity so they can be monitored, controlled and linked over the Internet via mobile apps. When it comes to what objects can be connected to this internet of things, virtually anything, from wearable fitness bands to light bulbs, will be connected. The Internet of Things (IoT) refers to scenarios where network connectivity and computing capabilities extend to predominantly physical objects that are not normally considered computers. These objects are embedded with electronics, software, sensors, and network connectivity, which enables them to collect, generate, and exchange data over a network without requiring human-to-human or human-to-computer interaction. Due to the nascency of this emerging field, some assert that there is no single, universal definition for the internet of things. Related terms include: Internet of Everything (IoE), Physical Internet, Ubiquitous Computing, Ambient Intelligence, Machine to Machine (M2M), Industrial Internet, Web of Things, Connected Environments, Smart Cities, Pervasive Internet, Connected World, Wireless Sensor Networks, Situated Computing, Future Internet, Physical Computing. The internet of things (IoT) is gaining significant market traction and bringing about fundamental changes in traditional business models. At its most basic level, the internet of things (also called the Internet of Everything or IoE) refers to the connection of everyday objects to the Internet and to one another, with the goal of providing users with smarter, more efficient experiences. The essence of this is to make it possible for just about anything to be connected and to communicate data over a network.
From a more technical standpoint, the Internet of Things (IoT) enables any physical device, embedded with a valid IP address, to transfer data seamlessly over a wireless network. It is a system of interrelated computing devices, mechanical and digital machines, objects, and natural beings that are all provided with unique identifiers. The pairing of common everyday 'things' with a unique identifier is what makes them so-called 'smart.' Furthermore, the connectivity now provides these objects with the ability to transfer data without requiring human-to-human or human-to-computer interaction. The end result of this connectivity is that previously static objects (e.g. a refrigerator) can now be equipped to provide significantly more utility. Instead of your refrigerator only keeping your food cold, it will now be able to send you coupons for your favorite ice cream, notify you that the water filter needs to be changed, or that your warranty may be expiring soon. Because the term internet of things covers such a broad landscape, it is divided into several different categories. For simplicity's sake, the general categories are first divided into the internet and then the things, with several subcategories for each category. The Internet of Things needs a strong backbone to support the billions of connected devices and apps. Below are a few of the components of IoT hardware infrastructure: To realize the true potential of the Internet of Things (IoT), the data generated by sensors has to be analyzed in real time. Below are a few categories of the types of IoT API cloud services: IoT cloud-scale platforms enable software and services that use the power of the global hardware and connectivity to enable IoT deployments at any scale. Below are a few examples: IoT applications are the end uses the internet of things makes possible.
Below are only a small sample among the many types of IoT applications: Understanding how the internet of things works sounds like a bigger challenge than it actually is. One can easily understand how the internet of things works by breaking it down into the smaller categories mentioned above: wireless networks, the 'things' themselves, and cloud services. The underlying infrastructure can be thought of as the highway, which is necessary for things to travel on. Underlying infrastructure includes essential hardware components such as fiber connectivity, data centers and various wireless radios that allow IoT-enabled devices to connect to the Internet and to each other. Examples of the wireless radio standards include familiar standards like RFID, Wi-Fi, NFC, BLE (Bluetooth Low Energy), and some that you probably haven't heard of, like XBee, ZigBee, Z-Wave, Wireless M-Bus and 6LoWPAN. The connected things can be thought of as the cars that drive on the highway. Without 'Things', there would be no internet of things, but what are the actual things? So-called 'Things' can be anything from a door lock, to a t-shirt, to your car or smartphone. The only requirement of these 'things' is that they have a valid IP address to transfer data seamlessly over a wireless network. In certain cases, a group of things may be connected to a central hub which allows them to connect to each other (similar to the way an intranet functions). If the underlying infrastructure is the highway, and the things are the cars, then the cloud services are the fuel that drives the cars. Cloud services enable the collection and analysis of data so people can see what's going on and take action via their mobile apps. The role that humans will play in the internet of things often gets overlooked because it is easy to focus too much on how the IoT will improve our lives.
Like many technologies, the internet of things is best considered from the perspective of adoption rather than purely invention. Although most of the internet of things is based on machine-to-machine interactions, the human-to-machine interaction is equally important because the human input is the driver of what should be used. Humans can control the environment via mobile apps. A 'Thing,' in the context of the Internet of Things (IoT), is an entity or physical object that has a unique identifier and the ability to transfer data over a network. This means that objects can be both physical and virtual, such as electronic tickets, agendas, and books. Most 'things,' from automobiles to smoke alarms, the human body included, have long operated "dark," with their location, position, and functional state unknown or even unknowable. The strategic significance of the IoT stems from the ever-advancing ability to break that constraint, and to create information without human observation. Sensors are an integral component, and the IoT wouldn't be possible without them. A sensor is a device that generates an electronic signal from a physical condition or event. Sensor endpoints can be thought of as the fundamental enablers of the IoT. They convert non-electrical input into an electrical signal that can be sent to an electronic circuit. Sensors detect and measure changes in position, temperature, light, etc. and are necessary to turn billions of objects into data-generating "things" that can report on their status. Like sensors, actuators are an integral component in the physically facing environment of the internet of things. A simple example of an actuator is an electric motor that converts electrical energy into mechanical energy. Actuators receive electronic signals from sensors and turn them into action by converting the electrical signals into non-electrical energy, such as motion.
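The sensor-to-actuator pairing described above can be sketched as a toy control loop. All class and function names here are illustrative inventions, not part of any real IoT framework:

```python
# Toy model of the sensor -> controller -> actuator loop described above.

class TemperatureSensor:
    """Converts a physical condition (temperature) into a data reading."""
    def __init__(self, thing_id):
        self.thing_id = thing_id  # every "thing" carries a unique identifier

    def read(self, physical_temp_c):
        return {"id": self.thing_id, "temp_c": physical_temp_c}

class HeaterActuator:
    """Turns an electronic signal into action (heating on/off)."""
    def __init__(self):
        self.on = False

    def apply(self, signal):
        self.on = bool(signal)

def control_loop(sensor, actuator, reading_c, setpoint_c=20.0):
    """A sensor reading drives the actuator without human interaction."""
    data = sensor.read(reading_c)
    actuator.apply(data["temp_c"] < setpoint_c)
    return data

sensor = TemperatureSensor("thermo-001")
heater = HeaterActuator()
control_loop(sensor, heater, 15.0)
print(heater.on)  # → True: reading below setpoint, heater switched on
```

In a real deployment the dictionary returned by the sensor would be published over a network (e.g. to a cloud service) rather than consumed locally, but the sense-decide-act shape of the loop is the same.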
It’s the synergistic combination of both sensors and actuators that can be used to change the position of a physical object. Most internet-of-things connected devices, from refrigerators to cars, have massive cloud-based back ends. This means that the cloud components of these technologies are becoming systemic to IoT-enabled devices. Many of these cloud-based services for devices are provided by major infrastructure providers such as Amazon Web Services, Google, and Microsoft Azure. Many of these cloud-based backend services can be used to: Internet of Things middleware connects all of the different IoT components together and enables harmonious interaction among them. It is software that serves as an interface, facilitating the interaction between the 'Internet' and the 'Things', making otherwise non-existent communication possible. It also provides a connectivity layer for sensors and the application layers that support services that ensure effective communication among software, and it enables connectivity for a huge number of diverse Things. IoT components are tied together by networks that use various wireless and wireline technologies, standards, and protocols to provide widespread connectivity. Below is a list of several of the more common types of connectivity networks used in the internet of things:
Create a Script for Login to Samba Share

Here is a scenario where you have Windows users who need to log into an encrypted directory that is mounted on a Linux Samba share. This provides an interesting option for security. The Linux user, who has sudo access, logs in and is then asked to mount the TrueCrypt volume and restart the Samba server, because the volume lives on a Samba share. It is imperative that the user who mounts the share is in the adm group so they are able to use sudo to run programs as root. Verify that the user is in the /etc/group file and listed in the “adm” group. Here we have two users, mike and sue, in the adm group.

adm:x:4:mike,sue

Edit the Samba Server

Edit your /etc/samba/smb.conf file to allow the user mike to log into the encrypted directory. Be sure your workgroup is the same as your Windows machines'. Notice that passwords are encrypted and the more secure tdbsam is used for the database backend. This is a copy of what you need in your /etc/samba/smb.conf file.

netbios name = linuxserver
workgroup = WORKGROUP
server string = Public File Server
security = user
encrypt passwords = yes
passdb backend = tdbsam

comment = Truecrypt Directory
path = /media/truecrypt3
valid users = mike
browsable = no
guest ok = no
read only = no

Create a smbpasswd Account

Be sure that the user you are using has a password on both the Linux system account and on Samba, as they are separate databases.

smbpasswd -a mike
New SMB password:
Retype new SMB password:
Added user mike.

Edit the User's .bashrc File

Each user that logs in to the Linux box has their environment created by the hidden .bashrc file, which is in every user's home directory. At the end of the .bashrc file, add a line that will execute the script that you create. Here is the information needed for the user mike to execute the script that will be in the user's home directory.
Create the truecrypt Script

This simple script will ask for the password of the sudo user (the password for the user mike in this example, who has sudo rights), then mount the directory and restart Samba so the directory is available over the share.

# Truecrypt Script
truecrypt -k "" --protect-hidden=no --mount /protect/encrypt.tc /media/truecrypt3
sudo /etc/init.d/samba restart

Here is the output.

Last login: Thu Jan 1 06:59:09 2009 from 192.168.5.178
Enter password for /protect/encrypt.tc:
Enter system administrator password:
[sudo] password for mike:
* Stopping Samba daemons   [ OK ]
Starting Samba daemons     [ OK ]

Verify that the mount is running with this command:

3: /protect/encrypt.tc /dev/mapper/truecrypt3 /media/truecrypt3

Login from another Linux box with this command, or with Samba from a Windows machine; this is the Linux example.

Enter mike's password:
Domain=[WK] OS=[Unix] Server=[Samba 3.2.3]
smb: \> ls
.              D      0  Fri Dec 26 05:49:43 2008
..             D      0  Thu Jan  1 11:39:10 2009
debconf.conf        2969  Fri Dec 26 05:49:42 2008
sensors.conf       85602  Fri Dec 26 05:49:43 2008
adduser.conf        2986  Fri Dec 26 05:49:42 2008
nsswitch.conf        475  Fri Dec 26 05:49:42 2008
ltrace.conf        13144  Fri Dec 26 05:49:42 2008
xinetd.conf          289  Fri Dec 26 05:49:43 2008

47157 blocks of size 2048. 41816 blocks available
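The adm-group check above can also be scripted. Here is a small Python sketch (the helper is my own, not a standard tool) that parses a /etc/group style line and reports whether a user is a member; on a real system you would read /etc/group itself or use the standard grp module.

```python
# Check membership in a group by parsing one /etc/group line.
# Format: group_name:password:GID:member,member,...
def in_group(group_line, user):
    fields = group_line.strip().split(":")
    # Tolerate stray spaces around member names.
    members = [m.strip() for m in fields[3].split(",") if m.strip()]
    return user in members

line = "adm:x:4:mike,sue"   # the example line from this article
print(in_group(line, "mike"))   # mike is in adm, so True
print(in_group(line, "bob"))    # bob is not, so False
```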
Lesson 4: Using a text editor. Objective: use the vi text editor.

Using the vi Text Editor

Shell scripts are text files and cannot contain any special formatting or font information. Shell scripts can be created in any text editor, like Notepad++, as long as they are saved as plain text files. Some text editors can color-code shell scripts, highlighting comments, commands, and variables. This information is there to help you understand the script; the formatting and colors are not saved as part of the text file.

Why use vi? The most well-known text editor on modern UNIX systems is the vi editor. UNIX systems have other editors, including graphical editors that are easy to use. But because vi is available on all UNIX systems, we will use it as the basis for writing our shell scripts. The information presented in this lesson should be a review for you. It is not easy to use vi until you have mastered a few of its fundamentals. The vi text editor has several modes. The two main modes you will use are command mode and insert mode. In command mode, characters that you enter are interpreted as commands; in insert mode, characters that you type are entered as part of your document. After inserting text (or if you are not sure what mode you are in), you can always return to command mode by pressing the Esc key.

UNIX Text Editors

Many text editors are included with most UNIX operating systems. Some are popular; others are specialized and may not be of much use. If you are already familiar with another text editor, feel free to use that editor to create shell scripts during this course. Some of the popular UNIX text editors include the following:
- emacs, a very powerful text editor with hundreds of configuration options and keyboard commands. A graphical version of emacs, called xemacs, provides menus to access basic editing features.
- pico, a simple editor with on-screen keyboard commands to make it easy for new users to create and edit text files.
- ed, an old-fashioned editor that works on a single line of a file at a time.
- joe, an easy-to-use editor with basic functionality.

Of the editors in the above list, vi and nano are available for Red Hat and Ubuntu Linux. If you are working on a UNIX system at your site, you may have one or more of these text editors, in addition to a graphical text editor. Each version of UNIX provides a basic text editor from its graphical menus. Consult your documentation or system administrator for assistance in locating a graphical editor on your UNIX system.

What Are Shell Scripts?

A shell script is a text file that contains one or more commands. This seems pretty simple after all this buildup, but that is all a script really is. The power of shell scripts, however, lies not in the simplicity of the concept, but in what you can do with these files. In a shell script, the shell assumes each line of the text file holds a separate command. These commands appear for the most part as if you had typed them in at a shell window. (There are a few differences, covered in the chapters to follow.) For example, this code shows two commands: The following Try It Out shows you how to make these two commands into a script and run it. You will see a number of very short examples that should provide a quick overview to scripting.

VI Editor Modes

The first thing most users learn about the VI editor is that it has two modes: command and insert. The command mode allows the entry of commands to manipulate text. These commands are usually one or two characters long and can be entered with few keystrokes. The insert mode puts anything typed on the keyboard into the current file. VI starts out in command mode. There are several commands that put the VI editor into insert mode. The most commonly used commands to get into insert mode are a and i. For example, press the i key and type "This is EASY.", then press the Escape key. Once you are in insert mode, you get out of it by hitting the Escape key.
You can hit Escape two times in a row and VI would definitely be in command mode. Hitting Escape while you are already in command mode doesn't take the editor out of command mode; it may beep to tell you that you are already in that mode.

Basic vi Commands

The following table describes some basic commands in vi.

Unix Text Editor - Exercise

Click on the Exercise link below to practice using vi in the UNIX Lab. The next lesson introduces a command to write information to the screen from a script.
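To see the "each line holds a separate command" idea in action, here is a short Python sketch (the two commands are placeholders, not the book's elided example) that writes a two-command shell script to a temporary file and runs it:

```python
import os
import subprocess
import tempfile

# A shell script is just a text file: one command per line.
script = "echo Hello\ndate\n"   # two commands, one per line

with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
    f.write(script)
    path = f.name

# The shell reads the file and executes each line as a command.
result = subprocess.run(["sh", path], capture_output=True, text=True)
os.unlink(path)
print(result.stdout)   # first the echo output, then the date output
```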
from Communication import BluetoothSerial
from time import sleep

#Yes, I know the following imports tie it to Windows
#This is for the manual part. Consistent speed isn't important here. This is fast enough.
#pip install pypiwin32
import win32con
import win32api
#pip install pythonnet
import clr
from System.Windows import Forms
from System import Drawing


class HUD(Forms.Form):
    def __init__(self):
        self.InitializeComponent()

    def InitializeComponent(self):
        self.SuspendLayout()
        self.ClientSize = Drawing.Size(64, 64)
        self.FormBorderStyle = Forms.FormBorderStyle.FixedToolWindow
        self.BackColor = Drawing.Color.GhostWhite
        self.KeyDown += self.OnKeyDownEvent
        self.KeyUp += self.OnKeyUpEvent
        self.Closing += self.OnClosingEvent
        self.ResumeLayout(False)
        self.Input_Up = False
        self.Input_Down = False
        self.Input_Left = False
        self.Input_Right = False
        self.Input_Boost = False
        self.Input_Magnet = False

    def OnKeyDownEvent(self, sender, e):
        if e.KeyCode == Forms.Keys.W:
            self.Input_Up = True
        elif e.KeyCode == Forms.Keys.S:
            self.Input_Down = True
        elif e.KeyCode == Forms.Keys.A:
            self.Input_Left = True
        elif e.KeyCode == Forms.Keys.D:
            self.Input_Right = True
        elif e.KeyCode == Forms.Keys.Space:
            self.Input_Boost = True
        elif e.KeyCode == Forms.Keys.V:
            self.Input_Magnet = not self.Input_Magnet
        elif e.KeyCode == Forms.Keys.Escape:
            self.Close()
        e.Handled = True

    def OnKeyUpEvent(self, sender, e):
        if e.KeyCode == Forms.Keys.W:
            self.Input_Up = False
        elif e.KeyCode == Forms.Keys.S:
            self.Input_Down = False
        elif e.KeyCode == Forms.Keys.A:
            self.Input_Left = False
        elif e.KeyCode == Forms.Keys.D:
            self.Input_Right = False
        elif e.KeyCode == Forms.Keys.Space:
            self.Input_Boost = False
        e.Handled = True

    def OnClosingEvent(self, sender, e):
        pass


def controls_to_motor_signal(up, down, left, right, speed_scaler=0.6):
    #Speed scaler should be between 1 and 0
    #The following part has been coded to exhibit the following properties:
    # - Backwards has higher priority than forwards
    # - No turning if both turn keys down
    # - Turn keys turn robot if driving backwards/forwards
    # - Turns in opposite direction when going backwards
    # - Turn keys turn robot in place if standing still
    motor_signal = [0, 0]
    if left == right:
        if down:
            motor_signal = [-255, -255]
        elif up:
            motor_signal = [255, 255]
    elif left:
        motor_signal[1] = 255
    elif right:
        motor_signal[0] = 255
    if left != right:
        if down:
            motor_signal = [-motor_signal[1], -motor_signal[0]]
        elif not up:
            if right:
                motor_signal[1] = -255
            elif left:
                motor_signal[0] = -255
    return [int(signal * speed_scaler) for signal in motor_signal]


#Program starts here
print("Connecting to robot...")
com = BluetoothSerial("COM3", 9600)
try:
    com.open()
except:
    print("Connection failed. Program terminated.")
    exit()
print("Connection established")

#Create HUD
window = HUD()
window.Show()
while window.Visible:
    Forms.Application.DoEvents()
    if window.Input_Boost:
        speed_scaler = 1
    else:
        speed_scaler = 0.50
    motor_signal = controls_to_motor_signal(window.Input_Up, window.Input_Down,
                                            window.Input_Left, window.Input_Right,
                                            speed_scaler)
    if window.Input_Magnet:
        aux_signal = 255
    else:
        aux_signal = 0
    sleep(0.032)
    com.send_motor_signal(motor_signal, aux_signal)

if not window.IsDisposed:
    window.Close()
print("Closing connection...")
com.close()
print("Connection closed. Program terminated.")
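As a quick sanity check, the key-to-motor mapping can be exercised on its own. The function below restates controls_to_motor_signal without the robot and HUD dependencies so it can be run anywhere:

```python
def controls_to_motor_signal(up, down, left, right, speed_scaler=0.6):
    # Same mapping as in the program above, minus the hardware dependencies.
    motor_signal = [0, 0]
    if left == right:              # no turn key (or both): drive straight
        if down:
            motor_signal = [-255, -255]
        elif up:
            motor_signal = [255, 255]
    elif left:
        motor_signal[1] = 255
    elif right:
        motor_signal[0] = 255
    if left != right:
        if down:                   # reversing swaps and negates the pair
            motor_signal = [-motor_signal[1], -motor_signal[0]]
        elif not up:               # standing still: spin in place
            if right:
                motor_signal[1] = -255
            elif left:
                motor_signal[0] = -255
    return [int(signal * speed_scaler) for signal in motor_signal]

# forward, backward, spin left, spin right (full speed for readability)
print(controls_to_motor_signal(True, False, False, False, 1))   # [255, 255]
print(controls_to_motor_signal(False, True, False, False, 1))   # [-255, -255]
print(controls_to_motor_signal(False, False, True, False, 1))   # [-255, 255]
print(controls_to_motor_signal(False, False, False, True, 1))   # [255, -255]
```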
IGlk progress

Progress on #38:
- Remove unused functions. They'll be easy to add back if we need them later.
- Fix casing on Frankendrift.GlkRunner so that references work on case-sensitive file systems.
- Use a Memory for glk_request_line_event_uni. I didn't change the glk_put_buffer* functions to use Spans yet, but we probably should in the future.
- The projects no longer need to be unsafe.

This is all the changes I'm thinking of for now, though I might need to make a few more for AsyncGlk, not sure yet. For reference, this is the error I'm getting:

Exception thrown: "System.Runtime.InteropServices.MarshalDirectiveException" in FrankenDrift.GlkRunner.Gargoyle.dll
Cannot marshal 'parameter #2': Non-blittable generic types cannot be marshaled.
   at FrankenDrift.GlkRunner.Gargoyle.Garglk_Pinvoke.glk_request_line_event_uni(WindowHandle win, Memory`1 buf, UInt32 maxlen, UInt32 initlen)
   at FrankenDrift.GlkRunner.Gargoyle.GarGlk.glk_request_line_event_uni(WindowHandle win, Memory`1 buf, UInt32 maxlen, UInt32 initlen) in ...\FrankenDrift.GlkRunner\FrankenDrift.GlkRunner.Gargoyle\Main.cs: line 117
   at FrankenDrift.GlkRunner.GlkHtmlWin.GetLineInput() in ...\FrankenDrift.GlkRunner\FrankenDrift.GlkRunner\GlkHtmlWin.cs: line 117
   at FrankenDrift.GlkRunner.MainSession.Run() in ...\FrankenDrift.GlkRunner\FrankenDrift.GlkRunner\GlkSession.cs: line 65
   at FrankenDrift.GlkRunner.Gargoyle.GarGlkRunner.Main(String[] args) in ...\FrankenDrift.GlkRunner\FrankenDrift.GlkRunner.Gargoyle\Main.cs: line 182

Asking the other way around, am I right in assuming that this is the target API we need to call, and that it takes a reference to an array that will be filled once the line event occurs?
The next best thing to an array reference that the interop docs mention is a Span (we can re-create a Span from the pointer and length parameters in the interface implementation, so we don't break native interop), which then becomes a JS MemoryView, which you'd then have to shoehorn back into an array reference. (reinterpret_cast anyone?) Worst case, we can do a janky-ass workaround like this:

1. Pass the IntPtr to JS, which just becomes a Number. Stow it away somewhere.
2. Create a new JS array to pass to the above function.
3. Write a .NET function that takes an IntPtr and an array, and unsafely copies the array to the pointer location.
4. Once the line event occurs, call that .NET function to make it seem as if we had passed a reference to the array.

Congratulations, we have now reinvented Remote Procedure Calls, I guess 😛

This is wrong so I'll close it. Eventually I'll figure out what is right.

Hey guys 😀 How far away are you from a release for Gargoyle, then?

Well, the app can dynamically load libgarglk and run games with it. That part is mostly done. The big remaining issue (that I don't have the time or energy to fix right now) is to make Gargoyle use dynamic linking on all platforms (currently, it only does that on Windows AFAIK), and then teach its build system how to compile a .NET project.

I'm thoroughly impressed that this has actually been done at all. Great effort 😀 Is there already a way to run games in Gargoyle?

I have a proof-of-concept build from a few weeks ago (based on a preview version of .NET 8) lying around on my machine and it does work, but it's all manually assembled in several steps. Now that Gargoyle is capable of compiling natively on Windows, what remains to be done is documenting all this and integrating it into Gargoyle's build process. I don't currently have a timeline for that.

Fantastic !!! It's probably the main reason people don't use Adrift 5 more often. I'm so much looking forward to this. Thanks for the update 😄
Passing Parameters from View to Action in MVC

I am currently trying to take a value from a drop-down list, store it into a variable, and pass it into a controller's action result. When I run my code, the Index is supposed to store the value of the selected item in the drop-down box in the variable SelectedDistrict, and then go to the Query action. Query will take a variable of DistrictViewModel as a parameter and then use var school = getQuery(variable.SelectedDistrict) to go into the function I have. In the function, however, it's saying that the variable sd is null whenever I debug. Maybe the value from the drop-down box is not storing properly? In the end, I want to display in a table all of the schools in my School table that come from the selected district in the drop-down. The table is not being populated because of the null value. Here is my code for more clarity. District View Model: School View Model: Controller w/ getQuery function: Index View: Query View: The table when I run my code:

When you are on the Query page, what is the URL? I assume that there is no query string there.

@howcheng http://uwfii-util-mcs-schools.azurewebsites.net/District/Query is this what you're looking for?

The problem is that your controller method for Query is expecting some data to be passed in the form of an object, but you aren't passing anything to it. If it's a GET request (and it appears to be, because you just set location.href on click), the values would need to be in the query string. Alternatively, you can make your form POST to that controller action instead.

So place [HttpPost] above my Query action result? I wanted to submit the value chosen from the select list and save it into the variable SelectedDistrict. But what you're saying is that the value can be posted directly to the Query action result as soon as a value is selected?
You'll need <form> tags. You can GET or POST to your controller method, it won't matter (model binding works either way). It depends on whether you want people to be able to deep-link directly to the search results or not.

<form action="@Url.Action("Query", "District")" method="get">
    @Html.DropDownListFor(m => m.SelectedDistrict, Model.Districts)
    <button type="submit">Submit</button>
</form>

That pretty much should do it, or at least get you on the right path.

That did it! I realized that the value from the dropdownlist was not being passed, but I can't believe I didn't realize that a simple form would do the trick. I thought that after the selection was made, I could call the variable from the modelview and pass it into the Query method, but that wasn't the case. Thank you!
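For the curious, what a GET form does under the hood is serialize each named control into the query string, which MVC model binding then reads on the server. A small Python sketch of that serialization (the controller route comes from the question; the district value is made up):

```python
from urllib.parse import urlencode

# The browser turns each named form control into a query-string pair.
form_fields = {"SelectedDistrict": "Escambia"}   # hypothetical selection

url = "/District/Query?" + urlencode(form_fields)
print(url)   # /District/Query?SelectedDistrict=Escambia
```

On the server side, model binding matches the `SelectedDistrict` key against the property of the same name on the view model parameter, which is why the field name in the form must match the model property.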
Tips From an Examiner for Passing Written Exams

No one likes exams, but it's pretty hard to get anywhere these days without passing at least some of them (even if it's just your driving theory test). The good news is that exams, like anything else, involve skills that you can learn. Here are eight solid exam technique tips I've picked up over many years of both taking and marking exams.

In other words, "Read the full (hem hem) question!". Your answers might be true. They might be brilliant. But if they are not about the question, I can't give you marks. There is a marking schedule for most exams. If your point is not on the schedule it won't score a mark (unless you came up with something indisputably brilliant and original, but don't kid yourself, you won't manage many of those). Don't waste your time in the exam. Read the question. Twice if you have to. Make sure you understand what sort of answer it is looking for. Then give it. That's it. Because...

2. Writing Too Much? That's a Waste.

If there are easy points to make, make them and move on. Don't waste time and extra words saying the same thing over and over again. In some exams there are points for style. In others you don't even need to write in complete sentences. Make sure you know what is expected of you, so you don't waste effort on flowery sentences when you could be writing bullet points, or lose style points because you didn't use proper punctuation. Also, don't waste time writing a very long answer when there are only a few marks available. The examiner won't give you more points than the maximum for the question, so there is no point spending the extra time and effort.

3. But Make Sure You Write Enough

If there are lots of marks for a question, the examiners are looking for a long answer. Make sure you make enough points in that question for the number of marks available.

4.
Be Clear

It's tough when you are tired and under pressure, but there is no point in being vague. You want to make sure that tired, busy examiner can tell you have understood and answered the question.

5. Try All the Questions

(Or as many as you are meant to try if you have a choice.) There will be easy marks on most questions. It would be a shame not to pick them up because you spent too long on the nearly impossible parts of another question. The hard parts may not be worth more points, so you are only making the exam tougher for yourself. It's worth being disciplined with your time. Work out how long you have per mark. (E.g. in a two-hour exam with 100 marks you have 120/100 = 1.2 minutes per mark.) This will give you a time budget for each question. When you've used it up, move on. You can always come back later if you get through some of the other questions quicker.

6. Be Concise

This will help in so many ways. It will help your time management. It will help you score points because the examiner can find them easily. It will help you make sure you understand what you mean. Woolly writing can mean woolly thinking, and that won't impress anyone in an exam.

7. State the Obvious

Most exam questions have a few easy marks to get you started. These will often be for the basic ideas that the question is testing. And writing down these basic ideas can be a really quick, easy way to score. So make sure that you get the easy marks before moving on to prove how clever you are on the rest of the question.

8. And to State the Obvious: Study

Because all the tips in the world won't save you if you don't know what you are talking about. Prepare well and the exam will be, not fun exactly, but less painful. If you hate studying now, think how much worse it would be to have to take it again! Exams are stressful. But most people give their best performances when they relax and focus.
Top athletes have sports psychologists to help them do just that. But the rest of us can be our own "exam coaches". So breathe deep, get enough sleep, and remember: "it's just an exam". Then go in there and give it all you have got. Anything I missed? Why not add your best tips on passing exams as painlessly as possible in the comments.
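The time-budget rule from tip 5 is just arithmetic; here it is as a tiny sketch, using the article's own numbers:

```python
# Tip 5's time budget: how many minutes you can afford per mark.
def minutes_per_mark(exam_minutes, total_marks):
    return exam_minutes / total_marks

# The article's example: a two-hour exam worth 100 marks.
print(minutes_per_mark(120, 100))        # 1.2 minutes per mark

# So a question worth 8 marks gets a budget of:
print(8 * minutes_per_mark(120, 100))    # 9.6 minutes
```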
using System;
using System.Collections.Generic;

namespace P13.MergeCollections
{
    public class StartUp
    {
        public static void Main()
        {
            LinkedList<int> linkedList = new LinkedList<int>();
            LinkedList<int> linkedList1 = new LinkedList<int>();
            LinkedList<int> linkedList2 = new LinkedList<int>();

            for (int number = 1; number <= 8; number++)
            {
                if (number <= 3)
                {
                    linkedList1.AddLast(number);
                }
                else
                {
                    linkedList2.AddLast(number);
                }
            }

            if (linkedList1.Count > linkedList2.Count)
            {
                linkedList = MergeLinkedLists(linkedList1, linkedList2);
            }
            else
            {
                linkedList = MergeLinkedLists(linkedList2, linkedList1);
            }

            Console.WriteLine("Result: " + String.Join(", ", linkedList));
        }

        private static LinkedList<int> MergeLinkedLists(LinkedList<int> bigLinkedList, LinkedList<int> smallLinkedList)
        {
            LinkedList<int> newLinkedList = new LinkedList<int>();
            LinkedListNode<int> smallLinkedListNode = smallLinkedList.First;
            LinkedListNode<int> bigLinkedListNode = bigLinkedList.First;

            while (smallLinkedListNode != null)
            {
                newLinkedList.AddLast(smallLinkedListNode.Value);
                newLinkedList.AddLast(bigLinkedListNode.Value);
                smallLinkedListNode = smallLinkedListNode.Next;
                bigLinkedListNode = bigLinkedListNode.Next;
            }

            while (bigLinkedListNode != null)
            {
                newLinkedList.AddLast(bigLinkedListNode.Value);
                bigLinkedListNode = bigLinkedListNode.Next;
            }

            return newLinkedList;
        }
    }
}
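For comparison, here is the same alternating merge sketched in Python, using collections.deque in place of LinkedList<int>. Like the C# code, it assumes the first argument is at least as long as the second:

```python
from collections import deque

def merge_linked_lists(big, small):
    # Interleave: one element from the shorter list, then one from the
    # longer, then append whatever is left of the longer list.
    result = deque()
    big_iter = iter(big)
    for small_value in small:
        result.append(small_value)
        result.append(next(big_iter))   # assumes big has at least len(small) items
    result.extend(big_iter)
    return result

# Same input split as the C# Main: 1..3 in one list, 4..8 in the other.
small = deque([1, 2, 3])
big = deque([4, 5, 6, 7, 8])
print("Result: " + ", ".join(map(str, merge_linked_lists(big, small))))
# Result: 1, 4, 2, 5, 3, 6, 7, 8
```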
Downloading from MASS

To help researchers run models from and compare results with the UK Met Office, we have limited access to their data archives. On request, and with approval from the UKMO, we are able to download data from their research runs to NCI. The following is documentation for the CMS team; researchers affiliated with the centre just need to get in touch by emailing the helpdesk email@example.com with the ID of the run they're wanting to access.

- Register for Jasmin - https://accounts.jasmin.ac.uk/
- Register for a MASS account with UKMO - the collaboration team are able to do the sponsoring - http://help.ceda.ac.uk/article/228-how-to-apply-for-a-met-office-mass-account
- Register for access to the mass-cli1 server on Jasmin - http://help.ceda.ac.uk/article/229-how-to-apply-for-access-to-the-met-office-mass-client-machine

After this is processed you will get an email from the Met Office storage team with a credentials file, and an email from Jasmin saying you can access mass-cli1. If there are issues contact firstname.lastname@example.org

The `moose` credentials file needs to be installed using the mass-cli1 server. Copy it to the Jasmin login node (there is a shared home drive across all Jasmin servers):

scp moose email@example.com:~/

Connect to mass-cli1 (via the login node):

ssh -A firstname.lastname@example.org
ssh mass-cli1

and install and check the credentials:

moo install
moo si -v

Getting data from MASS

To see the datasets we have access to:

moo projinfo -l project-jasmin-umcollab

If the desired project isn't available, contact the collaboration team and ask that it be authorised.

To list a dataset's contents:

moo ls moose:/crum/u-ai718

UM outputs are organised into a directory for each output stream; within each directory are timestamped files. Data should be extracted into a 'project workspace' before copying to NCI (the home directory is too small).
Your Met Office contact should be able to recommend a location (here I'm using mo_gc3):

mkdir /group_workspaces/jasmin2/mo_gc3/swales/u-ai718/p6
moo get moose:/crum/u-ai718/ap6.pp/ai718a.p61950\* /group_workspaces/jasmin2/mo_gc3/swales/u-ai718/p6

Getting data to NCI

To copy data across to NCI, swap to the server `jasmin-xfer3`, which has a fast connection to Australia. `bbcp` is the recommended transfer tool; it is a parallel version of `rsync`. Copy the files to the Raijin data mover, `r-dm3.nci.org.au`.

ssh jasmin-xfer3
bbcp -a -k -s 10 -T "ssh -x -a -oFallBackToRsh=no %I -l %U %H module load bbcp ; bbcp" -v -4 -P 5 -r --port 50000:51000 \
    /group_workspaces/jasmin2/mo_gc3/swales/u-ai718/p6/ \
    email@example.com:/g/data1/w35/saw562/HighResMIP/u-ai718/p6

The data transfer rate should be in the ballpark of 50 MB/s. Remember to clean up once the data is transferred:

rm -r /group_workspaces/jasmin2/mo_gc3/swales/u-ai718/p6/

With everything on Raijin, process the output to CF-NetCDF:

ssh raijin
module load conda
cd /g/data1/w35/saw562/HighResMIP/u-ai718/p6
for file in *.pp; do
    iris2netcdf $file
done
Using servlets to implement REST web services in Java

I have created many REST web services providing JSON using PHP and NodeJS before, and I know the concept. Now I want to re-implement those web services using Java instead. After doing some research into how to implement web services in Java, I found some standards and libraries like JAX-RS, Spring, and Jersey. However, I do not know the difference between them. I wonder why we do not just make a simple servlet which is called through an HTTP request and returns the result in JSON format. And if I wanted to use one of these standards, what would be the best choice to implement web services that accept HTTP requests and return JSON?

Because if you create a simple servlet, then you'll have to write yourself all the boilerplate code that is elegantly handled by the existing frameworks. Which happens to be the answer to all "why should I use X instead of doing Y myself" questions.

You can use a stone to drive a nail into the wall. For sure you can. But why would you do that if you have a hammer available? Using the proper tool will make your life a lot easier. In a similar way, you can create REST applications using only the Servlet API. However, there are other APIs that are designed for creating REST applications. So, why don't you use them?

JAX-RS and Jersey

JAX-RS, currently defined by JSR 339, is the standard Java API for creating RESTful web services, and it's built on top of the Servlet API. It's important to mention that JAX-RS is a specification. In order to use it, you will need an implementation, such as Jersey, which is the reference implementation. A few resources that may be useful: JAX-RS 2.0 specification, Jersey documentation.

Spring Framework

The Spring Framework allows you to create RESTful web services, and it can be easily integrated with other Spring projects.
A few resources that may be useful: Spring Framework website, Spring Framework documentation, guide to building a RESTful web service with Spring Framework.

Other resources you may consider useful: Why use a framework for RESTful services in Java instead of vanilla servlets; Why use JAX-RS / Jersey?; Spring 4 vs Jersey for REST web services; Difference between JAX-RS and Spring REST.

You can do it using the Servlet API, actually. But you won't get all the benefits of JAX-RS, like URL mapping, parameter injection, and so on. You would have to write all this "by hand". By the way, the difference between JAX-RS and Jersey is that JAX-RS is a specification, a standard, and Jersey is an implementation of that standard. There are other implementations as well (RESTEasy, for example). Spring also has a module for REST services.
About my services My name is Nyshia marie I am a spiritual advisor and life coach who gives Honest answers to help you find your path to love and happiness. Love readings, Psychic readings, Tarot readings Ratings and reviews Nyshia was spot on with her reading and confirmed what I had been suspecting. I so appreciate your insights to this complicated situation. Thank you! She was excellent! Awesome reading as always. Thank you so much for the clarity and I’ll keep you posted. I really feel like there is def a spiritual connection with her readings she seems very genuine and in tune Seem to be very accurate about my POI Did not resonate with me. Other than getting the reading at the last second- it was a bunch of advice on what she would do in my position. She mentioned they are looking at my page to what I’ve been posting- this is impossible as I have zero social media. She also kept referring to the same place and position but it’s not - it’s different which I said in my reading request….. I guess we will see the outcome but honestly not much of a reading. A bit disappointed since a past reading I had with her was good. Always taking her time to clearly explain what is going on. I recommend her 100xs 100 🌹🌹🌹🌹🌹🌹🌹🌹🌹🌹🌹🌹🌹🌹🌹🌹LOVE1111🌹🌹🌹🌹🌹🌹🌹🌹🌹🌹🌹🌹🌹🌹🌹😇😇😇😇😇😇😇😇😇😇😇😇😇TRULY GIFTED ANGEL😇😇😇😇😇😇😇😇😇😇😇😇😇🔮🔮🔮🔮🔮🔮🔮🔮🔮🔮🔮GREAT INSIGHT AND CLARITY 🔮🔮🔮🔮🔮🔮🔮🔮🔮🔮🔮🔮😘😘😘😘😘😘😘😘😘😘😘😘😘😘😘😘😘😘😘😘😘😘😘😘😘😘😘😘😘😘😘😘😘😘💗💗💗💗💗💗💗💗💗💗💗💗💗💗💗💗💗💗💗💗💗💗💗💗💗💗💗💗💗💗💗💗💗💗 Always grateful for your sound guidance and intuition. Thank you for your words of wisdom. Always a beautiful help and a guiding star. 💛 Always on point You were awesome I wish I had a little more clarity about person #1 but what you told me is what I needed to hear thank you a million Amazing! She picked up so much with great detail and accuracy! 
Will be back GREAT INSIGHT 🔮🔮🔮🔮🔮🔮🔮🔮I HIGHLY RECOMMEND TO FAMILY AND FRIENDS💗💗💗💗💗💗💗💗💗TRULY GIFTED AND GREAT CONNECTION🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🌹🌹🌹🌹🌹🌹🌹🌹🌹🌹🌹🌹🌹🌹🌹🌹LOVE1111🌹🌹🌹🌹🌹🌹🌹🌹🌹🌹🌹🌹🌹🌹🌹😇😇😇😇😇😇😇😇😇😇😇😇😇😇😇😇😇😇😇😇😇😇😇😇😇😇😇😇😇😇😇😇😇😇 Thank you so much for the Clarity beautiful! You always come through with the accuracy! I truly appreciate you..Blessings!❣ Thank you so much🙏 Beautiful and accurate reading🥰 She has a beautiful spirit. Thank you 💜 Loved love! She read the energy of my POI thoroughly. I thought her reading resonated and she even gave me good advice on what to do regarding my POI. Always a pleasure to speak with. You don’t have to shop around once she gives you a reading Appreciate the insights. Thank you. Fabulous as always ❤️ Loved the reading and the advice My favorite girl on her, give her a try she’s so accurate and picks up on things you don’t mention. She’s also so sweet and has great energy ♥️ I love her readings so much, because she picks up on everything! Even small details that are important ♥️ I love this queen so much. Always so much information packed within the 3 min.
Because of our research and solution development teams, we have constant needs for new ideas. Working with interns is beneficial for us as well as for them: OSR-Lab benefits from their fresh thinking and creativity, while the interns gain experience in a company and high-end knowledge in security, cryptography, and circuit design. The ideal intern presents the following characteristics:
- PhD or MSc student
- Major in Information Security, Signal Processing, or EE. Basic knowledge of cryptography is a plus
- Willingness to learn
- Problem-solving attitude

These skills are preferred, but we also often enroll interns with different backgrounds. If you think you can contribute to OSR-Lab, do not hesitate to apply. Two of our management members are in charge of supervising our interns:

Dr. Ir. Frederik Vercauteren is an expert on elliptic curve cryptography, pairing-based cryptography, homomorphic encryption, post-quantum cryptography, and side-channel analysis for public-key cryptography. He is a co-author of the Handbook of Elliptic and Hyperelliptic Curve Cryptography, the inventor of the ate pairing and optimal pairings, and has made major contributions to the development of homomorphic encryption.
- Handbook of elliptic and hyperelliptic curve cryptography:
- The eta pairing revisited: http://homes.esat.kuleuven.be/~fvercaut/papers/ate.pdf
- Optimal pairings: http://homes.esat.kuleuven.be/~fvercaut/papers/optimal.pdf
- Fully homomorphic SIMD operations: http://homes.esat.kuleuven.be/~fvercaut/papers/DCC2011.pdf
- Somewhat practical fully homomorphic encryption: http://eprint.iacr.org/2012/144

At 35, Dr. Junfeng Fan is considered one of the influencers of the cryptography community in China. He graduated from Zhejiang University with a Master of Electronic Information, then obtained his PhD in Applied Cryptography from the Belgian Catholic University of Leuven.
He contributed to the research community under the supervision of IACR President Bart Preneel and the internationally renowned expert Professor Ingrid Verbauwhede. He has published articles in 6 international journals and presented more than 20 papers at conferences, including 5 at CHES. Dr Junfeng FAN is also a reviewer for the Journal of Cryptology.
- Efficient Hardware Implementation of Fp-arithmetic for Pairing-Friendly Curves (IEEE Trans. Computers – 2012): https://www.computer.org/csdl/trans/tc/2012/05/ttc2012050676-abs.html
- State-of-the-art of secure ECC implementations: a survey on known side-channel attacks and countermeasures (HOST Workshop – 2010): http://ieeexplore.ieee.org/document/5513110/
- Design and design methods for unified multiplier and inverter and its application for HECC (Integration, the VLSI Journal – 2011): http://www.sciencedirect.com/science/article/pii/S0167926011000344
- Novel RNS Parameter Selection for Fast Modular Multiplication (IEEE Trans. Computers – 2014): https://www.computer.org/csdl/trans/tc/2014/08/06504454-abs.html
- Accelerating Scalar Conversion for Koblitz Curve Cryptoprocessors on Hardware Platforms (IEEE Trans. VLSI Syst. – 2014): http://eprint.iacr.org/2013/535
- Fair and Consistent Hardware Evaluation of Fourteen Round Two SHA-3 Candidates (IEEE Trans. VLSI Syst. – 2012): http://ieeexplore.ieee.org/document/5756688/
OPCFW_CODE
#include "Player.h"
#include "Input.h"

Player::Player() {}
Player::~Player() {}

void Player::init(Engine::InputController *control, int player_num) {
    this->control = control;
    controller = false;
    this->player_num = player_num;

    // Drain any stale input left in the queue from a previous session.
    while (!inputQ.empty())
        inputQ.pop();

    for (int i = 0; i < MAX_BUTTONS; i++) {
        input_buttons[i] = false;
    }
}

void Player::update() {
    // Get input from the controller (or keyboard) and fill the queue.
    if (controller) {
        // Check that the controller is connected before polling it.
        if (control->isConnected()) {
            populateQcheck(A, XINPUT_GAMEPAD_A);
            populateQcheck(B, XINPUT_GAMEPAD_B);
            populateQcheck(X, XINPUT_GAMEPAD_X);
            populateQcheck(Y, XINPUT_GAMEPAD_Y);
            populateQcheck(SELECT, XINPUT_GAMEPAD_BACK);
            populateQcheck(START, XINPUT_GAMEPAD_START);
            //populateQcheck(R1, XINPUT_GAMEPAD_RIGHT_SHOULDER);
            //populateQcheck(R2, XINPUT_GAMEPAD_A);
            //populateQcheck(L1, XINPUT_GAMEPAD_A);
            //populateQcheck(L2, XINPUT_GAMEPAD_A);
            populateQ(UP, XINPUT_GAMEPAD_DPAD_UP);
            populateQ(DOWN, XINPUT_GAMEPAD_DPAD_DOWN);
            populateQ(LEFT, XINPUT_GAMEPAD_DPAD_LEFT);
            populateQ(RIGHT, XINPUT_GAMEPAD_DPAD_RIGHT);
        }
    } else {
        populateQcheckkeyboard(A, DIK_J);
        populateQcheckkeyboard(B, DIK_K);
        populateQcheckkeyboard(X, DIK_U);
        populateQcheckkeyboard(Y, DIK_I);
        populateQcheckkeyboard(SELECT, DIK_R);
        populateQcheckkeyboard(START, DIK_T);
        populateQcheckkeyboard(R1, DIK_Y);
        populateQcheckkeyboard(R2, DIK_O);
        populateQcheckkeyboard(L1, DIK_Q);
        populateQcheckkeyboard(L2, DIK_E);
        populateQkeyboard(UP, DIK_W);
        populateQkeyboard(DOWN, DIK_S);
        populateQkeyboard(LEFT, DIK_A);
        populateQkeyboard(RIGHT, DIK_D);
    }
}

void Player::clearQ() {
    // Discard every queued input, not just the front element.
    while (!inputQ.empty())
        inputQ.pop();
}

std::queue<int> *Player::getQ() { return &inputQ; }

int Player::getPlayer() { return player_num; }

void Player::populateQcheck(int button, WORD controller_code) {
    // Edge-triggered: enqueue the button only on its press transition.
    if (control->getState().Gamepad.wButtons & controller_code) {
        if (!input_buttons[button]) {
            input_buttons[button] = true;
            inputQ.push(button);
        }
    } else {
        input_buttons[button] = false;  // was input_buttons[A]: a bug that only ever reset A
    }
}

void Player::populateQ(int button, WORD controller_code) {
    // Level-triggered: enqueue the button every frame it is held.
    if (control->getState().Gamepad.wButtons & controller_code)
        inputQ.push(button);
}

void Player::populateQcheckkeyboard(int button, unsigned char keycode) {
    if (Engine::InputKeyboard::instance()->push_button(keycode)) {
        if (!Engine::InputKeyboard::instance()->check_button_down(keycode)) {
            Engine::InputKeyboard::instance()->set_button(keycode, true);
            inputQ.push(button);
        }
    } else {
        Engine::InputKeyboard::instance()->set_button(keycode, false);
    }
}

void Player::populateQkeyboard(int button, unsigned char keycode) {
    if (Engine::InputKeyboard::instance()->push_button(keycode))
        inputQ.push(button);
}

void Player::shutdown() { control = nullptr; }

void Player::setController(bool arg) { controller = arg; }
STACK_EDU
"Liquidators are underexamined actors in the DeFi space, working, like miners and validators, behind the scenes to keep the entire system functioning and being handsomely rewarded for doing so. Unlike miners and validators, however, liquidators require effectively no upfront capital investment, creating an ecosystem of professionals operating from potentially anywhere in the world, entirely anonymously, getting paid to keep markets solvent." Links to the best articles, videos and podcasts about Ethereum. "What is a multisig and which multisig should I use? Here’s your answer." "Ethereum is a place to build things that all compete for value" "2020 is shaping up to be a pivotal year for Ethereum 2 with the expected launch of the first phase, known as the beacon chain, accelerated work on additional phases, and growth of the supporting ecosystem. This is a personal view of my expectations for Ethereum 2 over the next year, based on the work needed to deliver Ethereum 2 and the current state of this development." "This post is intended for the average user of the internet — someone who uses apps but does not necessarily develop them, makes online purchases but does not necessarily understand how they are transacted behind the scenes." "One common strand of thinking in blockchain land goes as follows: blockchains should be maximally simple, because they are a piece of infrastructure that is difficult to change and would lead to great harms if it breaks, and more complex functionality should be built on top, in the form of layer 2 protocols: state channels, Plasma, rollup, and so forth. 
Layer 2 should be the site of ongoing innovation, layer 1 should be the site of stability and maintenance, with large changes only in emergencies" "Most of the discussion will be at a relatively high and intuitive level; it is assumed only that the reader has a basic familiarity with zero-knowledge proof systems, group theory, and computational complexity up to understanding the class of problems known as NP." "In a zero-knowledge proof, a prover wants to convince a verifier that some public statement is true. However, the proof of this statement holds sensitive details that can’t be shared." "In the case of Bitcoin or Ethereum, a blockchain can be thought of as a shared, public database. Anyone can download a copy of this database and participate in adding new records to it." "This is the second installment in the Ethereum 101 series. Previously, we explored blocks and how they’re linked to form a blockchain, then downloaded Ethereum Grid and poked around at real block data on a test network." "The Ethereum network, like many common-pool resources, has some maintenance problems: Burgeoning state bloat, ever-increasing sync times, declining popularity of running full nodes. If left un-addressed, these issues pose an existential threat to the future of the Eth1.x network." "State Channels and Sidechains are the two terms in Ethereum community that are often used interchangeably, thus causing mass confusion." "A key requirement of any blockchain is its ability to secure its chain. A secure chain is one in which transactions are verifiable and immutable i.e. by referring to the blockchain someone can obtain the details of a transaction that took place, and be sure the details are not faked and the transaction cannot be altered retrospectively." "Why does optimistic rollup have the properties it has, why is it secure and decentralized, and why is it the most promising avenue for scaling Ethereum today?" 
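One excerpt above describes a blockchain as blocks "linked to form a blockchain" in a shared, public database. As a minimal illustration (not taken from any of the linked articles; field names are mine, not Ethereum's), each block can commit to its predecessor by storing the predecessor's hash:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash a canonical JSON encoding of the block's contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# A toy three-block chain.
genesis = {"number": 0, "parent_hash": "0" * 64, "txs": []}
block1 = {"number": 1, "parent_hash": block_hash(genesis), "txs": ["tx1"]}
block2 = {"number": 2, "parent_hash": block_hash(block1), "txs": ["tx2"]}

def chain_is_valid(chain) -> bool:
    # Every block must point at the hash of the block before it.
    return all(
        child["parent_hash"] == block_hash(parent)
        for parent, child in zip(chain, chain[1:])
    )
```

Altering any earlier block changes its hash and breaks every later `parent_hash` link, which is why history in such a database is hard to rewrite unnoticed.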
"The Ethereum Name Service is one of the most popular projects on Ethereum right now and for good reason. As the website states, “ENS offers a secure & decentralized way to address resources both on and off the blockchain using simple, human-readable names.”" "Running an Ethereum full node might seem like a complicated endeavor, but, despite what you might have heard on Twitter, it turns out to be easy." "In 2014, I made a post and a presentation with a list of hard problems in math, computer science and economics that I thought were important for the cryptocurrency space (as I then called it) to be able to reach maturity. In the last five years, much has changed." "This article aims to introduce formal verification, review existing tools in the age of blockchains and emphasize challenges specific to Ethereum."
OPCFW_CODE
M: Apple Isn't The Only Disruptor: How Amazon Is Killing Publishers - FluidDjango http://techcrunch.com/2012/01/19/apple-isnt-the-only-disruptor-how-amazon-is-killing-publishers/
R: corin_ Somewhat depressing that the submission to the actual source yesterday (or was it the day before) didn't make it to the front page, and only had three upvotes when I found it hours after its submission... but when TechCrunch regurgitates it, here it is on the front page. I really hope this was caused by luck and timing, and perhaps by the overwhelming focus on SOPA - but a voice in the back of my head is saying that it could well be getting more attention for being from TechCrunch, which would be sad. Regardless, better to see it on the front page now than not at all; good piece.
R: jaysonelliot I have no love lost for the publishing industry, but the death of printed books is one of the most tragic large-scale events in my lifetime (actual human suffering excepted, of course). I'm still young, but I never thought I would see this day come. Go back to 1990 and tell the average person on the street that in the next few decades, bookstores would all go out of business and the only way to buy a book would be to read it on a digital device. They'd either flatly refuse to believe you, or be frozen in horror. Yet that's where we're headed, not so far in the future, and as far as I can tell, there's nothing we can do about it.
R: javanix I don't understand this at all. I don't read books for the experience of turning pages - I read them for the experience of reading and interpreting and taking in a story. What exactly is more romantic or useful about reading the latest Stephen King novel on a tablet versus a bound sheaf of paper? The death of the printed book boils down reading to the bare essence of appreciating the written word.
R: Florin_Andrei I think the tablets and e-readers these days still suck. The interface is still rough around its virtual edges. 
I can't get into the "zone" if I'm reading a digital book - not yet. This may change in the future, as today is still the early childhood of this technology. I'm also concerned about volatility. Paper books last for decades, or even more if properly cared for. I struggle to keep e-documents even a few years old. This is a fundamental problem and it needs a proper solution, quickly. Finally, DRM gets in the way big time. Lending an e-book? Forget it. Finding one day that the content on your tablet was remotely deleted from the mothership? It's unlikely, but the reality is you're at the mercy of whoever made the tablet and they can certainly do so by invoking some obscure paragraph in the 200-page EULA that nobody ever reads. You're the "owner" of an e-book in name only; Amazon is the real big kahuna who calls all the shots. We will inevitably move to electronic formats, no doubt. But this is the Stone Age of e-books - these are the stone tablets of e-content. We're still in a very very early stage.
R: ghshephard Re: Zone - I'm not distracted by changing pages anymore. Just an unconscious muscle twitch and (with newer Kindles, and all iPads), I'm onto the next page. I've had many a 12+ hour reading session, and several "Get sucked into a book" weekends with digital texts. Volatility - My books are in the Amazon Cloud. All of my digital books are still available to me. 100%. I have access to about 5% of the paper books I've purchased in my life. The _worst_ case scenario is that digital books become as volatile as paper books: if Amazon.com goes out of business, I'm stuck with (horrors) the physical representation of the book as present on my Kindle/iPad (and laptop backup, and Backblaze backup of my laptop). The cloud changes everything for volatility. DRM Sucks. Big Time. With that said, the only impact of Amazon's implementation is that I don't casually lend books to others - which I typically never have anyways, so no impact on me. 
I too like (and frequently make use of) the fact that 100% of my books are available on my Kindle, iPad, iPhone, Laptop, and CloudReader. But - with that said - the DRM on physical books is pretty harsh too - you need to go to a photocopier to make a copy of the book. I just lend my mother my cloud reader account, and she can read any of my books on the web. Or lend a second friend my Kindle, and voila - he has all of my books. While I still have all of them on my iPad and iPhone. Or, if I don't have five lent out, I can pull them onto my old K2. I don't think DRM on the Kindle has ever stopped me from reading, or lending a book to someone I know. It _does_ prevent casual lending. Agreed there. I haven't read a paper book in about 4 years. I find it bizarre to even consider moving back to them.
R: newbusox I don't know if publishing companies (or even book sellers, like Borders) have attempted this in the past, but this article (and the article it links to) lays out a pretty fair case for predatory pricing (sometimes referred to as "dumping"), which is when a business attempts to root out competitors by selling below cost and then (hypothetically), when its competitors are out of the market, raises the price above competitive levels. This is generally illegal under the Sherman Act (antitrust law) in the United States (and likely in other countries, too), although, unfortunately, it is quite difficult to succeed on these sorts of claims in the US, most notably because, while a company is engaging in predatory pricing, consumers are better off since they are paying less (and a fair number of economists don't believe that predatory pricing is rational or even possible to work for various reasons). Still, there are remedies for companies that engage in this behavior--maybe they'll come too late to save small publishers in this case, but they exist. 
R: nextparadigms Google missed a huge opportunity by not doing anything seriously disruptive themselves, too, when they entered the e-book market. Did they really expect to take anything from Amazon by just competing with Amazon head-on? When market leaders are so entrenched, your only hope to beat them is by disrupting them and changing the game forever. One way they could've done that is by trying harder to bypass the middle-men and encourage self-publishing. They're sort of trying to do that with Google Artist Hub for music artists, but I doubt they are taking that very seriously, either.
R: frabcus Amazingly you've all missed the big problem with electronic books. One I worry about from time to time. There's a reasonable chance our civilisation will collapse in the near future (within 100 years). (Not going to go into detail of why - think climate change, viruses, meteors... take your pick.) When it does, there'll still be people around. And if it is a severe collapse, there won't be the infrastructure to recharge, install new books, manufacture or maintain e-readers. This is a pain, as books will be _really_ useful then, e.g. about farming, traditional methods of manufacture, etc.
R: macrael Do you think the internet is a bad idea too? There is an awful lot of digital knowledge and tools out there these days we won't have access to in the event of a collapse. I think it is a bit of a stretch to say that this is "the big problem with electronic books".
HACKER_NEWS
- User - resources
- User - tags
- Resource - users
- Resource - tags
- Tag - resources
- Tag - users

User country ≠ Resource country

In this case I am interested in studying users' collections of bookmarked resources, especially establishing facts about which country the resources originate from. Using the cross-border metrics I can take a snapshot of the resources and calculate a cross-border resources value for the user.
- E.g. User Finland has bookmarked Resource1 Poland, Resource2 Spain and Resource3 Finland
- This gives User Finland a resource profile of Poland 33%, Spain 33% and Finland 33%
- In this case, as the user is from Finland, the cross-border profile would be 66%, which would most likely have a value of .66, if we imagine that the cross-border value is between 0 and 1.
- This allows me to categorise this user as a cross-border user of resources. I assume that users differ in their inclination to use resources that come from other countries: some use them a lot, others do not want to bother with them.
- So this metric allows me to study who does what and thus better understand our user base.
- In the long run this will of course make it easier to recommend resources to users, as their profile already shows whether they are inclined to use cross-border resources.

This also allows me to look at the thing from a different point of view. Here, I am interested in establishing a profile for a resource. It appears that some resources are used a lot by people from different countries, whereas others are used predominantly by users from the same country as the resource itself.
- E.g. Resource Finland has been bookmarked by User1 Poland, User2 Spain and User3 Finland
- This gives Resource Finland a profile of Poland 33%, Spain 33% and Finland 33%. 
- In this case, as the resource is from Finland, the cross-border profile would be 66% of users, which would most likely have a value of .66, if we imagine that the cross-border value is between 0 and 1.

Second, we can use this information to filter out the resources that we think cross borders easily. This could be cool for example on our portal: we could flag these resources for users, and furthermore, we could give these resources a priority when other repositories are harvesting or searching us in a federated manner.

Resource country - Tag language

It'll also be interesting to create profiles for resources based on tags in different languages. For tags, we do not trace the country of origin, rather just the language. So in this case I'm interested in looking at a resource's profile on tags.
- E.g. Resource Finland has been given a Tag1 Polish, Tag2 Spanish and Tag3 Finnish
- This gives Resource Finland a tag profile of Polish 33%, Spanish 33% and Finnish 33%.
- In this case, as the resource is from Finland, the cross-border tag profile would be 66%, which would most likely have a value of .66, as above.

Here an interesting case seems to emerge for topics like language learning, say, English as a Second Language (ESL). Language learning and teaching resources seem to be easily reusable in another language context. Interestingly, though, we've seen that in these cases teachers tend to tag them in the language in question. E.g. User Finland has added a Tag English for ESL Resource Poland.

Tag language - Resource country

We can also look at things from the tag's perspective.
- E.g. Tag Finnish has been added to Resource1 Poland, Resource2 Spain and Resource3 Finland
- This gives Tag Finnish a resource profile of Poland 33%, Spain 33% and Finland 33%. 
- In this case, as the tag is in Finnish, the cross-border profile would be 66%, which would most likely have a value of .66, as above.

Tag language ≠ User country

On the other hand, we also find tags that have been used by users from different countries. These are the tags that we have previously identified as "travel well" tags. They have some interesting properties that make them easily understandable without translation, e.g. names (people, country, place), acronyms, and common terms (web2.0). By looking at the connection between Tag language and User country we can possibly identify such tags. The other common case seems to be that these people have tagged the resource in English. In any case, if many people have done that, we can identify these terms and manually analyse them. The hypothesis is that they either are "travel well" tags or they are some super popular tags that could also score high on the tag non-obviousness metric by Farooq et al. (2007).

User country - Tag language

Lastly, just to enumerate the cases, we also have the relation between User country and Tag language. This can be used to study a user's personal tagging behaviour. In a previous study in Calibrate we found that on average users tag in their mother tongue and in English (75% to 25%). It seems, though, that things look different in MELT, where teachers are tagging more in English. We are not sure whether these are personal preferences or the influence of social awareness, as in MELT tags are made readily available to others through a tag cloud, whereas in Calibrate they were only used for personal knowledge management reasons. In any case, this relation allows us to measure individual differences between users and thus understand our user base and possible user scenarios better.

What next? I will make a case study to apply these measures to the MELT tags that we've got in the system so far.
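The cross-border value described above can be sketched in a few lines. This is my own illustrative code, not from the MELT/Calibrate systems; function names and country codes are assumptions:

```python
def cross_border_value(own_country: str, countries: list) -> float:
    """Fraction of linked items (resources, users or tags) whose country
    differs from the profile owner's country; ranges from 0 to 1."""
    if not countries:
        return 0.0
    foreign = sum(1 for c in countries if c != own_country)
    return foreign / len(countries)

def country_profile(countries: list) -> dict:
    # E.g. ["PL", "ES", "FI"] -> each country's share of the collection.
    return {c: countries.count(c) / len(countries) for c in set(countries)}

# User from Finland bookmarking resources from Poland, Spain and Finland:
# two of the three resources are foreign, so the value is 2/3, i.e. about .66.
value = cross_border_value("FI", ["PL", "ES", "FI"])
```

The same two functions cover all four relations enumerated above (user-resource, resource-user, resource-tag, tag-resource): only the choice of "own" country or language and the list of linked items changes.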
OPCFW_CODE
What is Microsoft’s vision? Two decades ago, Microsoft’s vision was clear and Bill Gates was executing brilliantly on it. His vision was to simply put Windows in the center of everything, making it the most popular platform on earth, helping developers build hundreds of thousands of applications on it, and delivering it to hundreds of millions of customers worldwide. Looking back at that vision and how Microsoft succeeded in executing it, you have to wonder about Microsoft’s current vision. The leadership at Microsoft has changed, the times have changed, and the competition is stronger, but the old vision has remained the same. Microsoft is still putting Windows, an operating system that was built for the “PC era”, in the center of almost everything while other companies are inventing new markets, building new devices, and innovating on many different levels to disrupt many different industries. If you look back at the last twenty years, you will need to think for a while before pointing at anything, or any industry, that Microsoft has invented or innovated in, except for the XBOX (and of course, Kinect). There have been tons of small innovations here and there inside of existing products like Office and even Windows itself, but Microsoft missed tons of opportunities in many different markets including Mobile, Music, Movies, Tablets, and tens of other markets Microsoft either ignored or simply entered way too late to make a real difference. Microsoft is not only losing its fan base, they are also losing market share by refusing to “update” their vision and maybe even their strategy. The company has become too big to move quickly, something Google is trying to avoid by continuing its efforts to get the best talent and produce the most innovative products as quickly as possible by giving engineers inside the company the opportunity to make new products, launch them to market, and test the waters to see if they will succeed. 
Microsoft’s official mission statement is “to help people and businesses throughout the world realize their full potential”. That’s very nice but completely vague and means absolutely nothing. It is like saying my goal is to “change the world” but not saying how. What Microsoft needs right now is the “Think Different” ad campaign. They don’t need it to restore the trust they are slowly losing from their customers (especially small businesses and college students); they need it to tell their employees what Microsoft is all about. Microsoft’s problem is not with revenue; they are doing well and will continue to do well the same way Yahoo is still making profit. Their problem is that their future is uncertain under the current leadership. Windows should no longer be the center of everything because Windows will no longer be relevant as it is. It needs to be redefined, re-imagined, and reimplemented. I am not sure Windows 8 has done that. The Metro interface is really cool but it is not really redefining what Windows is, the same way Apple redefined how tablets should work or what music players are (regardless of whether you agree with their definition or not). Microsoft needs to “Think Different” and Ballmer needs to define a clear vision for his employees and the general public. After that, he needs to leave; someone else with a more technical vision needs to lead. Bill Gates had a real passion for technology, in addition to building a very successful company. Ballmer doesn’t have that passion; at least, it is not more important to him than simply reporting higher profit margins. A CEO’s job (in my opinion) is not just to make money; it is to make money while making sure the company is innovating and building a solid future for itself.
OPCFW_CODE
"""
A script which extracts the communication information from FurnMove evaluation
episodes (using agents trained with SYNC policies and the CORDIAL loss) and
saves this information into a TSV file. This TSV file can then be further
analyzed (as we have done in our paper) to study _what_ the agents have learned
to communicate with one another.

By default, this script will also save visualizations of the communication
between agents for episodes that are possibly interesting (e.g. not too short
and not too long). If you wish to turn off this behavior, change
`create_communication_visualizations = True` to
`create_communication_visualizations = False` below.
"""
import glob
import json
import os
from collections import defaultdict
from typing import Optional

import numpy as np
import pandas as pd
from matplotlib import pyplot as plt

from constants import ABS_PATH_TO_ANALYSIS_RESULTS_DIR, ABS_PATH_TO_DATA_DIR


def visualize_talk_reply_probs(df: pd.DataFrame, save_path: Optional[str] = None):
    steps = list(range(df.shape[0]))

    a0_tv_visible = np.array(df["a0_tv_visible"])
    a1_tv_visible = np.array(df["a1_tv_visible"])

    a0_action_taken = np.array(df["a0_next_action"])
    a1_action_taken = np.array(df["a1_next_action"])
    a0_took_pass_action = np.logical_and(a0_action_taken == 3, a1_action_taken < 3)
    a1_took_pass_action = np.logical_and(a1_action_taken == 3, a0_action_taken < 3)
    a0_took_mwo_action = np.logical_and(4 <= a0_action_taken, a0_action_taken <= 7)
    a1_took_mwo_action = np.logical_and(4 <= a1_action_taken, a1_action_taken <= 7)

    fig, axs = plt.subplots(2, 1, sharex=True, figsize=(3, 3))
    fig.subplots_adjust(hspace=0.1)

    # Plot 1: talk weights, with tick marks where the TV is visible to each agent.
    axs[0].set_ylim((-0.15, 1.15))
    axs[0].set_ylabel("Talk Weight")
    axs[0].plot(steps, df["a00_talk_probs"], color="r", linewidth=0.75)
    axs[0].plot(steps, df["a10_talk_probs"], color="g", linewidth=0.75)

    a0_tv_visible_inds = np.argwhere(a0_tv_visible)
    if len(a0_tv_visible_inds) != 0:
        axs[0].scatter(
            a0_tv_visible_inds,
            [-0.05] * len(a0_tv_visible_inds),
            color="r",
            s=4,
            marker="|",
        )
    a1_tv_visible_inds = np.argwhere(a1_tv_visible)
    if len(a1_tv_visible_inds) != 0:
        axs[0].scatter(
            a1_tv_visible_inds,
            [-0.1] * len(a1_tv_visible_inds),
            color="green",
            s=4,
            marker="|",
        )

    # Plot 2: reply weights, with tick marks for pass and move-with-object actions.
    axs[1].set_ylim((-0.15, 1.15))
    axs[1].set_ylabel("Reply Weight")
    axs[1].set_xlabel("Steps in Episode")
    axs[1].plot(steps, df["a00_reply_probs"], color="r", linewidth=0.75)
    axs[1].plot(steps, df["a10_reply_probs"], color="g", linewidth=0.75)

    a0_pass_action_steps = np.argwhere(a0_took_pass_action)
    if len(a0_pass_action_steps) != 0:
        axs[1].scatter(
            a0_pass_action_steps,
            [1.1] * len(a0_pass_action_steps),
            color="r",
            s=4,
            marker="|",
        )
    a1_pass_action_steps = np.argwhere(a1_took_pass_action)
    if len(a1_pass_action_steps) != 0:
        axs[1].scatter(
            a1_pass_action_steps,
            [1.05] * len(a1_pass_action_steps),
            color="g",
            s=4,
            marker="|",
        )
    a0_mwo_action = np.argwhere(a0_took_mwo_action)
    if len(a0_mwo_action) != 0:
        axs[1].scatter(
            a0_mwo_action,
            [-0.05] * len(a0_mwo_action),
            color="r",
            s=4,
            marker="|",
        )
    a1_mwo_action = np.argwhere(a1_took_mwo_action)
    if len(a1_mwo_action) != 0:
        axs[1].scatter(
            a1_mwo_action,
            [-0.1] * len(a1_mwo_action),
            color="g",
            s=4,
            marker="|",
        )

    if save_path is not None:
        fig.savefig(save_path, bbox_inches="tight")
        plt.close(fig)
    else:
        plt.show()


if __name__ == "__main__":
    create_communication_visualizations = True

    dir = os.path.join(
        ABS_PATH_TO_DATA_DIR, "furnmove_evaluations__test/vision_mixture_cl_rot"
    )
    id = dir.split("__")[-2]

    print()
    print(id)

    recorded = defaultdict(lambda: [])
    for i, p in enumerate(sorted(glob.glob(os.path.join(dir, "*.json")))):
        if i % 100 == 0:
            print(i)

        with open(p, "r") as f:
            result_dict = json.load(f)

        recorded["a00_talk_probs"].extend(
            [probs[0] for probs in result_dict["a0_talk_probs"]]
        )
        recorded["a10_talk_probs"].extend(
            [probs[0] for probs in result_dict["a1_talk_probs"]]
        )
        recorded["a00_reply_probs"].extend(
            [probs[0] for probs in result_dict["a0_reply_probs"]]
        )
        recorded["a10_reply_probs"].extend(
            [probs[0] for probs in result_dict["a1_reply_probs"]]
        )

        ar0, ar1 = result_dict["agent_action_info"]
        for j, srs in enumerate(result_dict["step_results"]):
            sr0, sr1 = srs
            before_loc0 = sr0["before_location"]
            before_loc1 = sr1["before_location"]
            recorded["from_a0_to_a1"].append(
                round((before_loc0["rotation"] - before_loc1["rotation"]) / 90) % 4
            )
            recorded["from_a1_to_a0"].append((-recorded["from_a0_to_a1"][-1]) % 4)
            recorded["a0_next_action"].append(sr0["action"])
            recorded["a1_next_action"].append(sr1["action"])
            recorded["a0_action_success"].append(1 * sr0["action_success"])
            recorded["a1_action_success"].append(1 * sr1["action_success"])

            if j == 0:
                recorded["a0_last_action_success"].append(1)
                recorded["a1_last_action_success"].append(1)
            else:
                old0, old1 = result_dict["step_results"][j - 1]
                recorded["a0_last_action_success"].append(1 * old0["action_success"])
                recorded["a1_last_action_success"].append(1 * old1["action_success"])

            e0 = sr0["extra_before_info"]
            e1 = sr1["extra_before_info"]
            recorded["a0_tv_visible"].append(1 * e0["tv_visible"])
            recorded["a1_tv_visible"].append(1 * e1["tv_visible"])
            recorded["a0_dresser_visible"].append(1 * e0["dresser_visible"])
            recorded["a1_dresser_visible"].append(1 * e1["dresser_visible"])

        recorded["index"].extend([i] * len(result_dict["a0_talk_probs"]))

    recorded = dict(recorded)
    df = pd.DataFrame(recorded)
    df.to_csv(
        os.path.join(
            ABS_PATH_TO_DATA_DIR,
            "furnmove_communication_analysis/furnmove_talk_reply_dataset.tsv",
        ),
        sep="\t",
    )

    if create_communication_visualizations:
        print("Creating communication visualizations")
        k = 0
        while True:
            print(k)
            k += 1
            subdf = df.query("index == {}".format(k))
            if subdf.shape[0] == 0:
                # No episode with this index: we've run past the last one,
                # so stop (the original loop here never terminated).
                break
            if 60 < subdf.shape[0] < 80 and subdf.shape[0] != 250:
                visualize_talk_reply_probs(
                    subdf,
                    save_path=os.path.join(
                        ABS_PATH_TO_ANALYSIS_RESULTS_DIR,
                        "plots/furnmove_communication/{}.pdf",
                    ).format(k),
                )
STACK_EDU
Export Azure Resources via Powershell to CSV This post shows a Powershell script that connects to Azure and exports all resources from multiple subscriptions to a CSV file. It also shows how this script can be used inside of a scheduled task which creates the CSV file on a daily base. This should show you how you can download a file with Powershell. This is not a script or function you should use. It just is the the easyiest way to download a file with Powershell. If I have enough time I will create a function for downloading files In this article, I’ll provide a simple script that leverages Azure PowerShell to call this API and export usage data from your Azure subscription to a CSV file for further analysis … Export Azure Usage via PowerShell using Get-UsageAggregates. Before running this script, be sure to download and install the latest version of the Azure This video shows you how to install the Azure PowerShell cmdlets so you can start working with your Azure subscription using PowerShell like the cool kids instead of the portal. We look at how to 27 Mar 2017 Use the Kudu API to copy these files from a storage account. You can get your hands on these credentials by downloading the publish profile 27 Jul 2016 Azure blob storage is great for storing large amount of data that is. have a blob storage where we can upload and download files to and from. 14 May 2018 So, once you have downloaded the .vhd file from any account on to the local machine, try uploading From the Windows Azure Portal you can easily download the VHD. You can use following PowerShell commands to . 15 Dec 2017 Tips and tricks Inline Powershell task VSTS, download files into your You can run a PowerShell script on you agent or as Azure Powershell. 27 Jul 2016 Azure blob storage is great for storing large amount of data that is. have a blob storage where we can upload and download files to and from. 
12 Dec 2017: switch between Bash and PowerShell in Azure Cloud Shell. Note that you can download your scripts/files stored on your Cloud Drive. Step by step instructions to download Azure BLOB storage using Azure PowerShell. Azure resources are helpful for building automation scripts. How To Upload A File To Amazon S3 Using AWS SDK In MVC. In my previous post, I showed you how to set up PowerShell so you can use it to perform commands against blob storage. In this post, I'll show you how to create a container in blob storage, how to upload files from the local machine to blob storage, and how to download files from blob storage. In this blog post, I will show you how to upload and run PowerShell scripts from Microsoft Azure Cloud Shell. Azure Cloud Shell gives us great flexibility because we can store our scripts and run them from a cloud location. Cloud Shell does not depend on the local machine, which is excellent for automation. Upload File to Azure Blob Storage with PowerShell, 04 April 2019. In one of my previous blogs, I've explained how to download a file from Azure Blob storage. This example shows you how to upload a file to Azure Blob Storage by just using the native REST API and a Shared Access Signature (SAS). I often just need the Remote Desktop file of a VM to work; it takes longer to go to the portal and download it, so I prefer the PowerShell way of doing it. Summary: Azure AD PowerShell V2.0 gives us all the functionality needed to keep automating our license assignment in Azure AD.
It might take you a bit longer to learn it, since it is somewhat more "PowerShelly" with the different objects used. In this blog series 'Tips and Tricks for Inline Powershell', I will show simple samples on how to get more out of your pipelines. This blog post: download a file. The Inline PowerShell VSTS task enables you to execute PowerShell from a textbox within your build or release pipeline. 20 Oct 2019: Click this link to download the msi installer file. As mentioned above, we will install Azure PowerShell from the PowerShell Gallery. 10 May 2019: Azure File Sync brings the manageability of the cloud; install the agent on your server using Invoke-WebRequest to download it from Microsoft. Because an Azure app service is just another IIS web site, we'll be using the MSDeploy tool (through the PSWebDeploy PowerShell module) to do all of the heavy lifting. To download all content from an Azure app service, we'll use the Sync-Website function that's in the PSWebDeploy module. This function acts as a wrapper to msdeploy. Downloading files from an Azure Blob Storage container with PowerShell is very simple: there is no need to install any additional modules, you can just use the Blob Service REST API to get the files. This example uses a Shared Access Signature (SAS), as this gives granular access. I don't believe this is related to Azure Automation; PowerShell DSC can be used without Automation. The issue that is being experienced is the inability for Azure Files shares to be accessed from a DSC resource, so I would say the storage forum is the right place. Quickstart: Upload, download, and list blobs with PowerShell. Use the Azure PowerShell module to create and manage Azure resources.
Creating or managing Azure resources can be done from the PowerShell command line or in scripts. Creating your own tools through PowerShell makes transferring content locally exponentially simpler. Here's how to do it yourself. Once an Azure app service has been created and all content uploaded, we sometimes need a way to download those files locally. 3 ways to download files with PowerShell, 3 Apr 2015, Jourdan Templeton. This method is also fully compatible on Server Core machines and 100% compatible with your Azure Automation runbooks. 3. Start-BitsTransfer: this method is perfect for scenarios where you want to limit the bandwidth used in a file download or where time isn't a concern. The Azure File Sync agent enables data on a Windows Server to be synchronized with an Azure File share. PowerShell management cmdlets: PowerShell cmdlets for interacting with the Microsoft.StorageSync Azure Resource Provider. The cmdlets can be found at the following locations (by default):
OPCFW_CODE
Instances not Starting

One instance doesn't get started. The Instance Scheduler role has the kms:CreateGrant permission, and the Instance Scheduler can start and stop other instances with or without encrypted EBS. This is the CloudWatch log:

2021-02-17 - 13:45:20.503 - INFO : Handler SchedulerRequestHandler scheduling request for service(s) ec2, account(s)<PHONE_NUMBER>12, region(s) eu-central-1 at 2021-02-17 13:45:20.503507
2021-02-17 - 13:45:20.684 - INFO : Running EC2 scheduler for account xyz in region(s) eu-central-1
2021-02-17 - 13:45:21.284 - INFO : Fetching ec2 instances for account xyz in region eu-central-1
2021-02-17 - 13:45:21.583 - DEBUG : Selected ec2 instance i-02dbc978e2165b2c4 in state (stopped)
2021-02-17 - 13:45:21.583 - INFO : Number of fetched ec2 instances is 1, number of instances in a schedulable state is 1
2021-02-17 - 13:45:21.883 - DEBUG : [ Instance EC2:i-02dbc978e2165b2c4 (xxxx) ]
2021-02-17 - 13:45:21.883 - DEBUG : Current state is stopped, instance type is t3.medium, schedule is "working-days-7-20"
2021-02-17 - 13:45:21.884 - DEBUG : Time used to determine desired for instance is Wed Feb 17 14:45:19 2021
2021-02-17 - 13:45:21.884 - DEBUG : Checking conditions for period "working-days-7-20"
2021-02-17 - 13:45:21.884 - DEBUG : [running] Weekday "wed" in weekdays (mon-fri)
2021-02-17 - 13:45:21.884 - DEBUG : [running] Time 14:45:19 is within 07:00:00-20:00:00, returned state is running
2021-02-17 - 13:45:21.884 - DEBUG : Active period in schedule "working-days-7-20": "working-days-7-20"
2021-02-17 - 13:45:21.884 - DEBUG : Desired state for instance from schedule "working-days-7-20" is running, last desired state was running, actual state is stopped
2021-02-17 - 13:45:21.884 - INFO : Scheduler result {'xyz': {'started': {}, 'stopped': {}, 'resized': {}}}

Hi @hniemann,

Is this issue still reproducible? For further investigation, can you please share the period and schedules from the ConfigTable in DynamoDB, and the instance state from the stateTable?
Thanks,
Praveen

Please reopen this issue if you are able to provide more details.
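For context, the period and schedule the maintainer asks about would be items in the scheduler's ConfigTable resembling the sketch below. The attribute names follow the AWS Instance Scheduler solution's configuration model as I understand it, and the timezone is an assumption inferred from the one-hour offset in the log; treat all values as illustrative, not as the reporter's actual configuration.

```json
[
  {
    "type": "period",
    "name": "working-days-7-20",
    "begintime": "07:00",
    "endtime": "20:00",
    "weekdays": ["mon-fri"]
  },
  {
    "type": "schedule",
    "name": "working-days-7-20",
    "periods": ["working-days-7-20"],
    "timezone": "Europe/Berlin"
  }
]
```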
GITHUB_ARCHIVE
On 8/23/2012 3:40 PM, Paul Wouters wrote:
> On Wed, 22 Aug 2012, Bry8 Star wrote:
>
>> There are many other Root servers other than ICANN Root servers. For
>> example: CesidianRoot (http://www.cesidianroot.net/), OpenNIC
>> (http://www.opennicproject.org/), New Nations (New-Nations.net),
>> Namecoin DNS (DotBIT project, bit DNS) (http://dot-bit.org), 42
>> (http://42registry.org/), OVH (http://ovh.co.uk/), i-DNS (MultiLingual
>> DNS) (i-dns.net), Public-Root (http://public-root.com), UnifiedRoot
>> (unifiedroot.com), etc.
>
> And we had alternic, alternet, .bofh and many others. They all died.

And new ones are also starting up; you did not mention those!

On 8/23/2012 3:40 PM, Paul Wouters wrote:
> On Wed, 22 Aug 2012, Bry8 Star wrote:
>
>> How can i integrate all into one Unbound or into a central Unbound ? to
>> use their all TLDs, which are not found in default ICANN/IANA root
>> servers.
>
> How are you going to deal with overlapping domain names?

It would be up to an end-user like me to choose which one I want to reach, or what technique I can apply to reach into both areas. What do you suggest to solve a problem like this? How can I reach both sides? Could I re-map such a TLD onto another one, or add a '2' at the end, and use that? How would I do that in Unbound?

On 8/23/2012 3:40 PM, Paul Wouters wrote:
> On Wed, 22 Aug 2012, Bry8 Star wrote:
>
>> For example, i had to add these in unbound.conf/service.conf for the '42'
>> TLD:
>>
>> domain-insecure: "42"
>> stub-zone:
>>     name: "42"
>>     stub-addr: 220.127.116.11 # 42Registry a.42tld-servers.net europe
>>     stub-addr: 18.104.22.168 # 42Registry b.42tld-servers.net europe
>>     stub-addr: 22.214.171.124 # 42Registry c.42tld-servers.net europe
>
> Try using forward zone? either in config or using:
>
> sudo unbound-control forward_add 42 126.96.36.199 188.8.131.52 184.108.40.206

I'm not understanding your command; what will it do? Currently 42 is resolving fine, please see my other email.
The mentioned IP addresses are their nameservers; aren't nameservers supposed to be added inside a 'stub-zone' in Unbound? Those are not able to resolve ICANN/IANA root TLDs. And I don't remote-control Unbound.

On 8/23/2012 3:40 PM, Paul Wouters wrote:
> On Wed, 22 Aug 2012, Bry8 Star wrote:
>
>> if 42 TLD supports/has DNSSEC components, then how can i use them ? or
>> how to enable DNSSEC for 42 TLD ?
>
> You can preload any dnssec key with trusted-keys-file: What you are
> doing (at the root) is not much different from adding
> "private views" higher up. So googling for "bind views" might help you
> as well.

Thanks. I need the unbound config file commands/options. Please respond using the other email on this.

On 8/23/2012 3:40 PM, Paul Wouters wrote:
> On Wed, 22 Aug 2012, Bry8 Star wrote:
>
>> by the way, your irc channel #unbound in irc.freenode.net is very
>> in-active, and some users who did post some messages, instead of helping
>> out, they question the 'question' ! or question the 'user' who is
>> posting the question or asking for help ! instead of asking more about
>> the problem itself, and what can be done to solve it ! very unfriendly
>> attitudes. Most likely these users do not like to help others, or are
>> grumpy, or busy with something else, or expecting something else from
>> users.
>
> What you are trying to accomplish is wrong. Scattering roots and losing
> the global agreement on an address is just bad. I recommend you read:
>
> http://nohats.ca/wordpress/blog/2012/04/09/you-cant-p2p-the-dns-and-have-it-too/
>
> Paul

Hello Paul,

Try to see what kind of mistake you are making: you are telling me "What you are trying to accomplish is wrong"! Please direct that to the alternative root server operators or related people, and also to the ICANN/IANA related people, not to an end user like me. An end user like me, who is trying to use an 'Unbound'-like DNS resolver (and not a DNS server) on an end-user OS like Windows XP or 7, will use what already exists.
Probably, if you read carefully, you will see that my target is to integrate and use the TLDs that are already in ICANN/IANA etc., AND also use the other TLDs that are in the alternative root servers. 'Unbound' by default already uses ICANN/IANA etc.; I want to resolve/add more TLDs which they cannot resolve. I'm on this mailing list, and started this email thread, in the hope that there may be some people willing to help get a working solution, not to discuss other issues.
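For reference, the forward-zone form Paul suggested would look roughly like this in unbound.conf. This is a sketch, untested, with the addresses copied from the unbound-control command quoted above:

```
server:
    domain-insecure: "42"

forward-zone:
    name: "42"
    forward-addr: 126.96.36.199
    forward-addr: 188.8.131.52
    forward-addr: 184.108.40.206
```

Unlike a stub-zone, which expects authoritative servers for the zone, a forward-zone sends recursive queries to the listed addresses; everything outside the named zone still resolves through the normal root.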
OPCFW_CODE
<?php
require __DIR__ . '/vendor/autoload.php';
require "oauth_vars.php";
require "db-connection.php";
include "generate-image.php";

use Abraham\TwitterOAuth\TwitterOAuth;

$connection = new TwitterOAuth(CONSUMER_KEY, CONSUMER_SECRET, OAUTH_TOKEN, OAUTH_SECRET);
date_default_timezone_set('America/New_York');
$now = new \Moment\Moment('now', 'America/New_York');

/*
function timeCheck($time){
    $convTime = DateTime::createFromFormat('n/j/Y g:i:u A', $time);
    date_timezone_set($convTime, new DateTimeZone('UTC'));
    $timeNow = new DateTime();
    $difference = $timeNow->diff($convTime);
    // var_dump($timeNow);
    // var_dump($convTime);
    switch ($difference){
        case $difference->a == 1:
            return $difference->format('in %a day %h hours %i minutes.');
        case $difference->a > 1:
            return $difference->format('in %a days %h hours.');
        case $difference->h == 1:
            return $difference->format('in %h hour %i minutes.');
        case $difference->h > 0:
            return $difference->format('in %h hours %i minutes.');
        default:
            return $difference->format('%i minutes.');
    }
}
*/

$getTheTweets = $db->prepare(
    'SELECT * FROM INCIDENTS WHERE TYPE = 4 AND SEV >= 3 ORDER BY TWEETED ASC LIMIT 4'
);
$updateTweeted = $db->prepare(
    'UPDATE INCIDENTS set TWEETED = TWEETED + 1, LASTTWEET = :now where ID = :id'
);

function sendTweet($item, $oAuth, $updateQuery){
    $rowId = $item['ID'];
    $end = $item['END'];
    $endTimeMoment = new \Moment\Moment("@" . $end, 'America/New_York');
    $now = new \Moment\Moment('now', 'America/New_York');
    $nowComp = $now->format('U');
    $fifteenAgo = $now->subtractMinutes(15);
    $fifteenAgoComp = $fifteenAgo->format('U');
    $lastTweet = $item['LASTTWEET'];
    $lastTweetedMoment = new \Moment\Moment($lastTweet);
    $lastTweetComp = $lastTweetedMoment->format('U');
    $sev = $item['SEV'];
    $type = $item['TYPE'];
    $tweetString = "";

    if ( $item['TWEETED'] == 0 && ($end > $nowComp) && ($lastTweetComp > $fifteenAgoComp) ) {
        // concatenate string for tweet
        $tweetString .= "NEW: ";
        $tweetString .= $item['SHORTDESC'];
        $tweetString .= '. Expected clear by ';
        $tweetString .= $endTimeMoment->format('h:i A.');
        $tweetString .= ' #ATLTraffic';

        // generate image, returns image file path
        $imgPath = generateImage($sev, $item['FULLDESC'], $rowId);

        // test output, uncomment next lines
        // echo '<br/>' . $rowId . ' -- ' . $tweetString . '<br/>SEV: ' . $sev . 'TYPE: ' . $type . '<br/><br/>' . $nowComp . 'vs' . $end . '<br/><br/>';
        // echo '<img style="width:512px" src="./assets/images/_' . $rowId . '_img.png" />';

        // TODO: Does this return Tweet ID? If so, should check whether "latest" tweets can be in reply to "new" tweets
        if (file_exists($imgPath)) {
            $trafficImage = $oAuth->upload('media/upload', ['media' => $imgPath]);
            $parameters = array(
                'status' => $tweetString,
                'media_ids' => $trafficImage->media_id_string,
            );
            $tweeted = $oAuth->post('statuses/update', $parameters);
            if ($oAuth->getLastHttpCode() == 200) {
                echo "Tweet sent successfully. Text: " . $tweetString;
                $updateQuery->bindValue(':id', $item['ID']);
                $updateQuery->bindValue(':now', $nowComp);
                $updateQuery->execute();
            } else {
                echo "<br/>something went wrong HTTP CODE " . $oAuth->getLastHttpCode() . "<br/>";
            }
        }
    } elseif ( ($item['TWEETED'] > 0) && ($end < $nowComp) && ($lastTweetedMoment < $now->subtractMinutes(15)) ) {
        // concatenate string for tweet
        $tweetString .= "LATEST: ";
        $tweetString .= $item['SHORTDESC'];
        $tweetString .= '. Expected clear by ';
        $tweetString .= $endTimeMoment->format('h:i A.');
        $tweetString .= ' #ATLTraffic';

        // generate image, returns file path to image
        $imgPath = generateImage($sev, $item['FULLDESC'], $rowId);

        // test output, uncomment next lines
        // echo '<br/>' . $rowId . ' -- ' . $tweetString . '<br/>';
        // echo '<img style="width:512px" src="./assets/images/_' . $rowId . '_img.png" />';

        // TODO: Does this return Tweet ID? If so, should check whether "update" tweets can be in reply to "new" tweets
        if (file_exists($imgPath)) {
            $trafficImage = $oAuth->upload('media/upload', ['media' => $imgPath]);
            $parameters = [
                'status' => $tweetString,
                'media_ids' => $trafficImage->media_id_string
            ];
            $tweeted = $oAuth->post('statuses/update', $parameters);
            if ($oAuth->getLastHttpCode() == 200) {
                echo "Tweet sent successfully. Text: " . $tweetString;
                $updateQuery->bindValue(':id', $item['ID']);
                $updateQuery->bindValue(':now', $nowComp);
                $updateQuery->execute();
            } else {
                echo "<br/>something went wrong HTTP CODE " . $oAuth->getLastHttpCode() . "<br/>";
            }
        }
    }
}

$ret = $getTheTweets->execute();
$justHour = date('G');

if ( ($justHour < 2) || ($justHour > 5) ) {
    while ($row = $ret->fetchArray(SQLITE3_ASSOC)) {
        sendTweet($row, $connection, $updateTweeted);
        // $endTime = $row['END'];
        // $endTimeMoment = new \Moment\Moment($endTime);
        // echo '<br/>' . $row['ID'] . ' -- ' . $row['SHORTDESC'] . 'SEV: ' . $row['SEV'] . 'TYPE: ' . $row['TYPE'] . ' END TIME: ' . $endTimeMoment->addHours(9)->format('h:i A.') . '<br/>';
    }
} else {
    echo "It's too late or too early?";
}

echo "Operation done successfully\n";
$db->close();
unset($db);
?>
STACK_EDU
The VOXXED Days BUCHAREST event will run on March 11th, 2016, in Bucharest, Romania. This developer conference will bring together popular speakers, core developers of popular open source technologies, and professionals willing to share their knowledge and experiences. IPT CTO Trayan Iliev will give a talk on Reactive Java Robotics and IoT in the Web & Mobile track. The presentation will introduce Java Functional Reactive Programming (FRP) as a novel way of implementing hot event stream processing directly on connected/embedded/robot devices using Spring Reactor and RxJava. It will be accompanied by a live demo of a custom-developed Java robot called IPTPI (using a Raspberry Pi 2: ARM v7, quad core, 1 GB RAM), running hot event stream processing and connected to a mobile client for monitoring and control.

Voxxed interview with Trayan Iliev about the talk:

Q. You're speaking at Voxxed Days Bucharest in March. Tell us a bit about your session.

A. The session introduces Java Functional Reactive Programming (FRP) as a novel way of building high-performance reactive stream processing for connected/embedded/robotics devices using the Spring Reactor and RxJava libraries. It includes a demo of reactive hot event stream processing running on a custom-developed Java robot called IPTPI (using a Raspberry Pi 2, quad-core at 900 MHz, 1 GB RAM): motor encoders, gyroscope, accelerometer, compass, and distance sensor events. More information about the robots developed at IPT and the RoboLearn hackathons is available at: http://robolearn.org/

Q. Why is the subject matter important?

A. Internet of Things (IoT) and service robotics are emerging game changers for many industries, including home automation and smart cities, smart vehicles, agriculture, retail, education and sport. An essential requirement for the emerging device/process/service ecosystems is effective, efficient, secure, scalable and reliable distributed event processing.
Functional Reactive Programming (FRP) is becoming a popular paradigm for building distributed event processing systems, providing an easy-to-use and composable higher-level abstraction for high-performance computing while hiding the complexities of non-blocking concurrency implementations. Reactor and RxJava are complementary reactive event processing frameworks providing a feature-rich and efficient implementation of the reactive programming paradigm in Java.

Q. Who should attend your session?

A. Software developers or robotics/IoT hobbyists interested in reactive programming and its practical implementation for high-performance (hot) event stream processing in Java.

Q. What are the key things attendees will take away from your session?

A. A better understanding of functional reactive programming in general, and of state-of-the-art reactive Java frameworks in particular, with emphasis on Reactor and RxJava. Practical "real robotic world" examples of functional hot event stream processing and (hopefully) amusement with the IPTPI robot 🙂 Some background info and a lot of resources on Java robotics and IoT.

Q. Aside from speaking at Voxxed Days Bucharest, what else are you excited about for 2016?

A. Practical IT education by building and programming Things, and sharing knowledge about it. High-performance FRP and its applications to distributed (big data) computing for IoT. Building my own CNC router/3D printer/laser cutter for robot parts and IoT cases for all the friends around.
OPCFW_CODE
Minimum number of ways to write a string

Consider the following question:

There are K magical pens (numbered 1 through K). You are given strings P1, P2, …, PK (each of which consists of characters from 'a' to 't'); for each valid i, the i-th pen can only write letters from the string Pi. You want to write a word S of length N. All the characters of S are between 'a' and 't' inclusive. This string must be written from left to right. To write it, you pick up some pen and start writing; after you've written some prefix of S, you can put down that pen, pick up another pen, continue writing S from the point where you put down the previous pen, later pick up another pen (any pen) and continue writing S with that pen, and so on until you write the whole string S. You may pick up each pen any number of times, including zero. You have to find a way of writing the word S such that the number of times you change the pen (put down the pen you're currently writing with and pick up another) is the smallest possible. If there are multiple solutions, you may find any one. It is guaranteed that it is possible to write S with the given pens.

I know how to represent a pen as a number using a bitset and solve it, but how can I solve it faster? Link for the question: https://www.codechef.com/ICPCIN19/problems/PENS

Is there anything that stops you from a greedy solution? First of all, is it true that there always exists an optimal solution in which the first pen is a pen with which you can write the longest prefix?

@VladislavBezhentsev what is your greedy algorithm?

First observation: as the first pen you can always take a pen with which you can write the longest prefix. Let's consider the given pen-strings as bit strings. For each such bit string, let's generate all bit strings which correspond to subsets of the initial bit string and put them in a hashtable.
Moreover, let's generate them in descending (by nesting) order, and when we generate some bit string which is already in the hashtable, we terminate this branch of the recursion. So each of the $2^{20}$ possible bit strings will be generated at most twice. Precompute for each prefix of string $S$ the number of times the $i$-th letter occurs in it. Then let's walk along string $S$ from left to right, and on each step check whether the current prefix can be written with a single pen. We can do this in $O(1)$ using the precomputed hashtable and prefixes. So the overall complexity is $O(2^\alpha + K + N\cdot\alpha)$, where $\alpha$ is the alphabet size ($20$ in our case).

"Making it faster" by tweaking the data representation won't buy much. It could speed it up by, say, a factor of 2 if you are lucky (or the original is very badly written). To get a real speedup you need to use another approach (change the basic algorithm). I suspect the obvious greedy algorithm succeeds: take as the next pen the one that allows you to write the longest prefix of what remains. There must always be one (as each symbol can be written by some pen), and no pen selection can block a later one (if you can write $\alpha \beta$, you certainly can write $\beta$ with that same pen).

There's a simple $O(nk)$ greedy algorithm that assumes there is a solution:

1. Start with all pens in an array.
2. Eliminate all pens from your array that cannot be used to write the next character. If this would remove all pens, instead output one of them and go back to 1.
3. Write the next character and go back to 2 if there are more characters to write.
4. Output one of the pens in your array, and done.

If there are instances without a solution you need to detect, just check in step 2 whether you are removing all pens from a freshly created array.

Seems that $O(n \cdot k)$ is too slow for the problem's constraints.

Your solution is very slow.

But obviously it is solvable in $O(n\log(n))$ also with a greedy approach.

@VladislavBezhentsev could you explain?
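A minimal Python sketch of the O(n·k) elimination greedy described above. The function name and input shape are my own; `pens` is a list of strings, one per pen, and pens are reported by 0-based index:

```python
def min_pen_switches(s, pens):
    """Greedy: keep the set of pens that can still write the current stretch;
    when it would become empty, commit one pen and start a new stretch."""
    pen_sets = [set(p) for p in pens]
    active = list(range(len(pens)))   # pens still usable for the current stretch
    order = []                        # one committed pen per stretch
    for ch in s:
        keep = [i for i in active if ch in pen_sets[i]]
        if not keep:                  # no active pen can write ch: a switch happens here
            order.append(active[0])
            keep = [i for i in range(len(pens)) if ch in pen_sets[i]]
            if not keep:
                raise ValueError("S cannot be written with the given pens")
        active = keep
    order.append(active[0])
    return len(order) - 1, order      # (number of switches, pens used in order)
```

For example, with pens "ab" and "bc", the word "abc" needs exactly one switch: pen 0 writes "ab", pen 1 writes "c".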
STACK_EXCHANGE
In this lesson you can learn how to prepare the on-premises environment to protect Hyper-V virtual machines using Azure Site Recovery, including setting the protection goal and preparing the source and target. - [Instructor] We are now ready to go ahead and prepare the target, and the target is Azure. We have already created the storage account and network. We did that a couple of lessons ago. If you didn't configure it then, you could do it now. Our first step in preparing our target is to select our Azure subscription and our failover deployment model. Most of you should be using the Azure resource manager at this point in time. If you are using the classic portal still, I'd highly recommend you look at moving your resources to the resource manager. Step two is to ensure that we have at least one compatible Azure storage account, and we do. Step three is to ensure that we have at least one compatible Azure virtual network, and again, we set these up in the previous lesson. I can go ahead and click OK. Step five is to prepare our replication settings. The first thing we need to do is associate our Hyper-V server with the replication policy. You'll notice here, I already have a policy in place, but I'd like to go ahead and create a new one. I'm going to go ahead, click on Create and Associate. You're going to enter in your policy name. I'm going to call mine RepPolicy, not very creative. Again, you'll want to name these appropriately. The next thing we can choose from is our copy frequency. We have 30 seconds, five minutes, and 15 minutes. You can go ahead and pick 15 minutes if you wish, but that time will be deprecated, and all of your 15 minute replications will then be changed to five minutes. For our purposes I'm going to go ahead and use 15 minutes. Next, you can go ahead and select a recovery point retention time in hours. In this case we're going to do it for two hours, as well as the app consistent snapshot frequency. Again, this is in hours. 
I'm going to leave it at one. You can start the replication immediately, or you can set it to start a little bit later. That's all you need to do. Click OK. That policy will now be created. Once the policy has been created, it'll need to be associated with your Hyper-V site; that's what's happening now. This will take a few moments. Once your replication policy has been associated, you can go ahead and click OK. And that is it for preparing our on-premises infrastructure. It wasn't that hard, and it only took a few minutes. In the next lesson, we're going to go ahead and enable replication.

- Creating a Recovery Services vault for Azure Backup
- Protecting virtual machines, files and folders, databases, workloads, and file shares
- Restoring virtual machines, files and folders, databases, workloads, and file shares
- Azure Site Recovery scenarios
- Running failover and failback tests
- Replicating an Azure virtual machine
OPCFW_CODE
#include "Triangle.h"

#include <SprueEngine/Math/MathDef.h>

namespace SprueEngine
{

// Returns the *squared* distance from point x0 to the segment x1-x2.
float point_segment_distance(const Vec3 &x0, const Vec3 &x1, const Vec3 &x2)
{
    Vec3 dx(x2 - x1);
    float m2 = dx.LengthSq();
    // find parameter value of closest point on segment
    float s12 = (float)((x2 - x0).Dot(dx) / m2);
    if (s12 < 0)
        s12 = 0;
    else if (s12 > 1)
        s12 = 1;
    // and find the squared distance
    return (x0 - (s12 * x1 + (1 - s12) * x2)).LengthSq();
}

BoundingBox Triangle::GetBounds() const
{
    const Vec3 min = Vec3::MinVector(a, Vec3::MinVector(b, c));
    const Vec3 max = Vec3::MaxVector(a, Vec3::MaxVector(b, c));
    return BoundingBox(min, max);
}

Vec3 Triangle::ClosestPoint(const Vec3 &p) const
{
    /** The code for the Triangle-float3 test is from Christer Ericson's
        Real-Time Collision Detection, pp. 141-142. */
    // Check if P is in vertex region outside A.
    Vec3 ab = b - a;
    Vec3 ac = c - a;
    Vec3 ap = p - a;
    float d1 = ab.Dot(ap);
    float d2 = ac.Dot(ap);
    if (d1 <= 0.f && d2 <= 0.f)
        return a; // Barycentric coordinates are (1,0,0).

    // Check if P is in vertex region outside B.
    Vec3 bp = p - b;
    float d3 = ab.Dot(bp);
    float d4 = ac.Dot(bp);
    if (d3 >= 0.f && d4 <= d3)
        return b; // Barycentric coordinates are (0,1,0).

    // Check if P is in edge region of AB, and if so, return the projection of P onto AB.
    float vc = d1*d4 - d3*d2;
    if (vc <= 0.f && d1 >= 0.f && d3 <= 0.f)
    {
        float v = d1 / (d1 - d3);
        return a + v * ab; // The barycentric coordinates are (1-v, v, 0).
    }

    // Check if P is in vertex region outside C.
    Vec3 cp = p - c;
    float d5 = ab.Dot(cp);
    float d6 = ac.Dot(cp);
    if (d6 >= 0.f && d5 <= d6)
        return c; // The barycentric coordinates are (0,0,1).

    // Check if P is in edge region of AC, and if so, return the projection of P onto AC.
    float vb = d5*d2 - d1*d6;
    if (vb <= 0.f && d2 >= 0.f && d6 <= 0.f)
    {
        float w = d2 / (d2 - d6);
        return a + w * ac; // The barycentric coordinates are (1-w, 0, w).
    }

    // Check if P is in edge region of BC, and if so, return the projection of P onto BC.
    float va = d3*d6 - d5*d4;
    if (va <= 0.f && d4 - d3 >= 0.f && d5 - d6 >= 0.f)
    {
        float w = (d4 - d3) / (d4 - d3 + d5 - d6);
        return b + w * (c - b); // The barycentric coordinates are (0, 1-w, w).
    }

    // P must be inside the face region. Compute the closest point through its barycentric coordinates (u,v,w).
    float denom = 1.f / (va + vb + vc);
    float v = vb * denom;
    float w = vc * denom;
    return a + ab * v + ac * w;
}

bool Triangle::IntersectRay(const Ray& ray, float* dist, Vec3* hitPos, Vec3* outBary) const
{
    Vec3 e1(b - a);
    Vec3 e2(c - a);
    Vec3 normal = e1.Cross(e2);
    if (normal.Normalized().Dot(ray.dir) > 0.0f)
        return false;

    Vec3 p(ray.dir.Cross(e2));
    float det = e1.Dot(p);
    if (det < EPSILON)
        return false;

    Vec3 t(ray.pos - a);
    float u = t.Dot(p);
    if (u < 0.0f || u > det)
        return false;

    Vec3 q(t.Cross(e1));
    float v = ray.dir.Dot(q);
    if (v < 0.0f || u + v > det)
        return false;

    float distance = e2.Dot(q) / det;
    if (distance >= 0.0f)
    {
        if (outBary)
        {
            *outBary = Vec3(1 - (u / det) - (v / det), u / det, v / det);
            if (hitPos)
                *hitPos = a * outBary->x + b * outBary->y + c * outBary->z;
        }
        if (dist)
            *dist = distance;
        return true;
    }
    return false;
}

bool Triangle::IntersectRayEitherSide(const Ray& ray, float* dist, Vec3* hitPos, Vec3* outBary) const
{
    Vec3 e1(b - a);
    Vec3 e2(c - a);
    Vec3 normal = e1.Cross(e2);
    if (normal.Normalized().Dot(ray.dir) > 0.0f)
    {
        Flip();
        e1 = (b - a);
        e2 = (c - a);
        //return false;
    }

    Vec3 p(ray.dir.Cross(e2));
    float det = e1.Dot(p);
    if (det < EPSILON)
        return false;

    Vec3 t(ray.pos - a);
    float u = t.Dot(p);
    if (u < 0.0f || u > det)
        return false;

    Vec3 q(t.Cross(e1));
    float v = ray.dir.Dot(q);
    if (v < 0.0f || u + v > det)
        return false;

    float distance = e2.Dot(q) / det;
    if (distance >= 0.0f)
    {
        if (outBary)
        {
            *outBary = Vec3(1 - (u / det) - (v / det), u / det, v / det);
            if (hitPos)
                *hitPos = a * outBary->x + b * outBary->y + c * outBary->z;
        }
        if (dist)
            *dist = distance;
        return true;
    }
    return false;
}

float Triangle::Distance(const Vec3& p) const
{
    return sqrtf(Distance2(p));
}

float Triangle::Distance2(const Vec3& p) const
{
    Vec3 x13(a - c);
    Vec3 x23(b - c);
    Vec3 x03(p - c);
    const float m13 = x13.LengthSq();
    const float m23 = x23.LengthSq();
    const float d = x13.Dot(x23);
    const float invdet = 1.0f / SprueMax(m13 * m23 - d * d, 1e-30f);
    const float a = x13.Dot(x03);
    const float b = x23.Dot(x03);
    // the barycentric coordinates themselves
    float w23 = invdet * (m23 * a - d * b);
    float w31 = invdet * (m13 * b - d * a);
    float w12 = 1 - w23 - w31;
    if (w23 >= 0 && w31 >= 0 && w12 >= 0)
    {
        // if we're inside the triangle
        return (p - (this->a * w23 + this->b * w31 + this->c * w12)).LengthSq();
    }
    else
    {
        // we have to clamp to one of the edges
        if (w23 > 0) // this rules out edge 2-3 for us
            return SprueMin(point_segment_distance(p, this->a, this->b), point_segment_distance(p, this->a, this->c));
        else if (w31 > 0) // this rules out edge 1-3
            return SprueMin(point_segment_distance(p, this->a, this->b), point_segment_distance(p, this->b, this->c));
        else // w12 must be > 0, ruling out edge 1-2
            return SprueMin(point_segment_distance(p, this->a, this->c), point_segment_distance(p, this->b, this->c));
    }
}

float Triangle::SignedDistance(const Vec3& p) const
{
    float dist = Distance(p);
    const Vec3 center = (a + b + c) / 3.0f;
    const float sign = (p - c).Normalized().Dot(GetNormal()) > 0.0f ? 1.0f : -1.0f;
    return dist * sign;
}

float Triangle::SignDistance(const Vec3& p, const float inputDist) const
{
    const Vec3 center = (a + b + c) / 3.0f;
    const float sign = (p - c).Normalized().Dot(GetNormal()) > 0.0f ? 1.0f : -1.0f;
    return inputDist * sign;
}

}
According to futurists such as Ray Kurzweil, around the year 2045 we will see an intelligence explosion in which artificial intelligence vastly exceeds human intelligence. This predicted time is known as the Singularity. Kurzweil asserts that science and technology are progressing at an accelerating rate, and he argues that in light of this it makes sense to expect the Singularity by the year 2045. The main argument given for this technological singularity is based on the rapid rate of progress in hardware. For quite a few years, a law called Moore's Law has held true: roughly every 18 months, the number of transistors that can be put on a chip doubles. Because of the advances in hardware described by Moore's Law, your hand-held computing device has more computing power than a refrigerator-sized computer had decades ago. However, there are reasons to be skeptical about using Moore's Law as a basis for concluding that a technological singularity is only a few decades away. One reason is that Moore's Law may stop working in future years, as we reach the limits of miniaturization. But a bigger reason is that a technological singularity would require not just our hardware but also our software to increase in power by a thousand-fold or more – and our software is not progressing at a rate remotely close to the very rapid rate described by Moore's Law. How fast is the rate of progress in software? It is relatively slow. We do not have anything like a situation where our software gets twice as good every 18 months. Software seems to progress at only a small fraction of the rate at which hardware progresses. Let's look at the current state of software and software development. Are we seeing many breathtaking breakthroughs that revolutionize the field? No, we aren't. What's surprising is how little things have changed in recent years.
The most popular languages used for software development – a list that includes C# and Ruby – were all developed in the 1990s and 1980s (although C# wasn't released until the year 2000). Not even 10 percent of programming is done in a language developed in the past 12 years. As for data manipulation languages, everyone is still using mainly SQL, a miniature language developed more than 20 years ago. The process of creating a program has not changed much in 25 years: type some code, run it through a compiler, fix any errors, build the program, and see if it does what you were trying to make it do. Does this give you any reason for thinking that software is a few decades away from all the monumental breakthroughs needed for anything like a technological singularity? It shouldn't. To get some idea of why hardware advances alone won't give us what we need for a technological singularity, let's imagine some software developers working in New York City. The developers are hired by a publishing firm that is tired of paying human editors to read manuscripts. The developers are told to develop software that will read in a word-processing file and determine whether the manuscript is a likely best seller. Suppose the publishing firm tells the developers that they will have access to practically unlimited computer speed, unlimited disk storage, and unlimited random-access memory. Would this hardware bonanza mean that the developers would be able to quickly accomplish their task? Not at all. In fact, the developers would hardly know where to begin. After doing the easiest part (something that rejects manuscripts having many grammatical or spelling errors), the developers would be stuck. They would know that the work ahead of them would be almost like constructing the Great Pyramid of Cheops. First they would have to construct one layer of functionality that would be hard to create.
Then they would have to build upon that layer another layer of functionality that would be even harder. Then there would be several other layers of functionality that would need to be built – each building upon the previous layer, and each far harder to create than the one before. That is how things work when you are accomplishing a difficult software task. All of this would take enormous amounts of time and manual labor. Having unlimited memory and super-fast hardware would help very little. As the developers reached the top of this pyramid, they would find themselves with insanely hard tasks to accomplish, even given all the previous layers that had been built. The following visual illustrates the difficulty of achieving a technological singularity in which machine intelligence exceeds human intelligence. For the singularity to happen, each pyramid must be built. But each pyramid is incredibly hard to build, mainly because of monumental software requirements. The bottom layer of each pyramid is hard to build; the second layer is much harder; and the third layer is incredibly hard to build. By the time you get up to the fourth and fifth layers, you have tasks that seem to require many decades or centuries to accomplish. Now some may say that things will get a lot faster, because computers will start generating software themselves. It is true that there are computer programs called code generators capable of creating other computer programs (I've written several such tools myself). But as a general rule, code generators aren't good for doing really hard programming. Code generators tend to be good only for fairly easy programming that requires lots of repetitive grunt work (something like making various input forms after reading a database schema). Really hard programming requires a level of insight, imagination, and creativity that is almost impossible for a computer program to produce.
If we ever produce artificial intelligence rivaling human intelligence, then there will be a huge blossoming of computers that create code for other computers. But it's hard to see how automatic code generation will help us very much in getting to such a level in the first place. There was a lot of interest several years back in using genetic algorithms to get computer programs to generate software. Not much has come from that approach. Genetic algorithms have not proved to be a very fruitful way to squeeze creativity out of a computer. Will quantum computers help this "slow software progress" problem? Absolutely not. Quantum computers would just be lightning-fast hardware. To achieve a technological singularity, there would still be an ocean of software work to be done, even if you have infinitely fast computers. Will we be able to overcome this difficulty by scanning the human brain, somehow figuring out how the software in our brain works, and using that to create software for our machines? That's the longest of long shots. No one really has any clear idea of how thoughts or knowledge or rules are represented in the brain, and there is no reason to think we will unravel that mystery in this century. To summarize, hardware is getting faster and faster, but software is still stumbling along pretty much as it has for the past few decades, without any dramatic change. To get to a singularity, both hardware and software need to make a kind of million-mile journey. Hardware may well be able to make such a journey within a few decades. But I see little hope that software will be able to make such a journey within this century. If we ever get to the point where most software is developed in a fundamentally different way – some way vastly more productive than typing code line by line – then we may be able to say that we're a few decades away from a technological singularity.
I wanted to finish the thread that I started in my last blog before I go back to more technical issues. I'm currently playing around with the preferences validator in JSR 168, and I think that will be interesting for my next entry. For now, let's complete this issue. It is generally accepted that business requirements should drive functional and some nonfunctional requirements. The business doesn't want to pay for functionality it doesn't need or want, and likewise it should get the most bang for its buck in terms of portability, scalability and all the other 'ilities that are needed. However, there are still many nonfunctional – or rather organizational or architectural – requirements that need to be determined, either at a project or an organizational level. For these issues I'm always a bit wary of the driver behind them and the value they may bring. For example, the use of Struts in a portal sometimes concerns me. (Before the Struts people start to feel picked on, let me explain.) There are solid business and financial drivers for using the Struts Portal Framework within WebSphere Portal. "Because it's cool" or "I want to learn it" is probably not one of them. During any project I have a very simple frame of reference: what is it going to take to allow me to deliver this application within the given time frame and within, or even under, budget? In many cases the team is new to WebSphere Portal and J2EE technologies, and reducing the complexity or layers within an application and taking advantage of what is available within the portal framework can help now and as the application evolves. I like the idea of creating standards across a project, or better yet, across your organization. The use of something like Struts can fit well within this ideal, but here again, there may be many exception cases. Is it going to cost you more in initial development, ongoing maintenance, and even future upgrade headaches to implement within these guidelines?
Portability might be another reason to take this route, but it is probably not realistic to think your code will port seamlessly across platforms just because it's based on Struts. This will be more of a problem if you want to take advantage of specific portal functionality within your code. I have the same concerns around some of the methodologies that a team sometimes wants to use within their project. Some teams get really excited about using the Rational Unified Process and all the tools that IBM Rational can provide to assist in this area, or maybe they want to go another way, such as using Extreme Programming. With anything of this nature, your first question should be: what is the experience of your team? Are they skilled in these areas, or are you willing to make the investment to get them to the level they need to be? Secondly, what are your immediate needs? If you have to deliver a portal very quickly, then many short-cuts will be made. How will this affect your methodology strategy? Fortunately, many organizations are on the right track and seriously evaluate all the options and the value they bring. This ensures that either the requirement is really warranted, or the organization is willing to make the effort to give the team the tools and training they need to do the job correctly. Don't think I'm against developers or teams learning new things; that's how we get better and continue to improve our processes and projects. However, learning at the expense of the project delivery date or capability should be weighed heavily. Trust me, build a few portal projects and the learning will come. : ) Development Requirements and Business Value
Who Should Read this Article? If you create Windows console programs and want to be able to print wide strings properly, this is something for you. More than actual proficiency in C++, it is important that you understand what Unicode is and what wide strings are. It's hard to emphasize enough the importance of making Unicode-aware applications. Novices in C/C++ should be taught from the beginning not to use printf, etc. It should be pointed out to them from the start that modern Windows systems internally work with 16-bit Unicode, aka wide strings. Therefore wprintf, etc. (or even better: the TCHAR paradigm) should be used instead. When new C++ projects are created in Visual Studio, they follow the TCHAR paradigm. It means that, instead of the above, _tprintf, etc. are used. They are typedefs that have different meanings depending on the character set chosen in the project settings. This paradigm was created so that the same code could be built for old (Windows 95, Windows 98) and new versions of Windows (NT, XP and newer). Since programming for these old Windows versions does not make sense any more, we could simply use the wide versions of the functions. Yet following the TCHAR paradigm still makes sense, because it can make the code more portable to operating systems that do not use wide strings, like Linux. All this works fine. The problem arises when you write a console application. The application can read wide command line arguments properly. I do not know whether input of wide strings via standard input works OK, because I never needed to use it. But I needed to output them, and it did not work. I tried CRT functions like wprintf and STL objects like wcout. Neither of them worked. I searched for a suitable solution and could not find one. I set up the cmd window to use the Lucida Console font (and you should do it too, otherwise any attempt to see Unicode characters in it is bound to fail!).
I realized that it is possible to print wide strings directly to the console using functions from conio.h (_tcprintf, etc.). Very nice! Yet... when someone is using a console application, she/he expects to be able to redirect its output. That does not work if output goes directly to the console. It must go to stdout. It seems Microsoft was not consistent in this. While the whole system works with wide strings, the console output does not, and in .NET, the default output code page is UTF-8! But it gave me the idea. I also noticed that text files encoded in UTF-8 can be properly printed to the console (using `type` for example), provided the console code page is set to UTF-8 using the command `chcp 65001`. Now I wanted to use UTF-8 from C++.

Using the Code

Setting and Resetting the Codepage

We must prepare the console for UTF-8. We first store the current console output codepage in a variable: UINT oldcp = GetConsoleOutputCP(); Then we change the console output codepage to UTF-8, which is the equivalent of `chcp 65001`: SetConsoleOutputCP(CP_UTF8); Before exiting the program, we must be polite and bring the console back to the state it was in before. We must call: SetConsoleOutputCP(oldcp);

When We Want to Print Out Wide Strings in the Program, We Will Do it Like this

Suppose we have a wide string containing Unicode characters, say: const wchar_t* s = L"èéøÞǽлљΣæča"; If you write that in Visual Studio, when you attempt to save the file you will be prompted to save it in some Unicode format. "Unicode - Codepage 1200" will be OK. We convert it to UTF-8. First we call WideCharToMultiByte with the 6th argument set to zero. That way, the function will tell us how many bytes it is going to need to store the converted string: int bufferSize = WideCharToMultiByte(CP_UTF8, 0, s, -1, NULL, 0, NULL, NULL); We allocate a buffer: char* m = new char[bufferSize]; The second call to WideCharToMultiByte does the actual conversion: WideCharToMultiByte(CP_UTF8, 0, s, -1, m, bufferSize, NULL, NULL); Print it to stdout. Notice the capital S.
It tells the wprintf function to expect a narrow string: wprintf(L"%S", m); Release the buffer: delete[] m; Now the output goes to stdout. If redirected to a file, the file will be encoded as UTF-8.

This Is It

It is not a big deal and cannot be compared to articles that required much more work. Yet I hope it can be useful, because it tries to solve a problem that is widely neglected. Last time I checked, I could not find a solution for this problem in Java either. In my example code, I packed everything I spoke about here into small wostream overrides. They are not perfect and I'm pretty sure they could be coded better. I would do it if I knew more about iostream programming. Yet they can be useful for those who want a solution out of the box that is easy to use. But it should be pointed out that they are not thread safe. There are more comments in the code. This article has been completely rewritten, mainly because the comments of Member 2901525 made me understand that the code was not polished enough to be offered without some more explanation. The article itself was very short, looked sketchy and earned some low marks. I forgot to mention that the Lucida Console font must be used in the cmd window. Member 2901525 noticed a weak point in the code and I changed it. Otherwise there are no significant changes in the code.
Do you want to download Twitter videos? Downloading videos from Twitter is easy. In this video, you'll learn how to download Twitter videos. If you want to download Twitter videos on iPhone, this video is for you. However, if you want to download Twitter videos on an Android phone, this video is not for you. If you are an Android user and want to download Twitter videos on your Android smartphone, let me know and I'll create another video demonstrating how to download videos from Twitter on Android. Downloading Twitter videos on a Mac or PC is easy: simply copy the tweet link and paste it into TwitterVideoDownloader.com to download videos from Twitter on your computer. Using the same method you can download Twitter videos on iPhone as well, but that way you won't be able to save the downloaded videos to your camera roll. In this video tutorial, I have also demonstrated a unique method for downloading Twitter videos and, more importantly, how to save Twitter videos to the camera app on iPhone. Apps and websites helpful for downloading videos from Twitter: Documents by Readdle; Twitter Video Downloader. If you have any questions regarding this tutorial or need any help downloading Twitter videos, let me know in the comments below and I'll be happy to help. 👉 Enter the Giveaway 🎁 👉 Join the TechReviewPro Facebook group 🤝 If you found this video, "How to Download Twitter Videos on iPhone Camera Roll without Jailbreak?", helpful, then please like, share, and subscribe to the TechReviewPro channel. Thanks for watching! Business Email: email@example.com Music Credit: (Licensed under Creative Commons CC 3.0.)
👉 TechReviewPro is a participant in the Amazon Services LLC Associates Program, an affiliate advertising program designed to provide a means for sites to earn advertising fees by advertising and linking to amazon.com. This video also contains other affiliate links and product recommendations. 👉 If you purchase something from our affiliate links, we get a small commission at no extra cost to you. But our recommendation is always based on the merit of the products and is not influenced by other factors. This supports us financially in purchasing better gadgets and gear, and is the reason why we are able to bring you more free information in the form of high-quality exclusive videos. Thank you! Disclosure: See other articles at: https://sangoivon.com/giai-tri/
I was able to recover based on the information in Grub rescue - error: unknown filesystem. –Peter Mortensen Jun 19 at 11:59

In such a case, edit /etc/grub.d/00_header and change the value of timeout in line 236 (this line is in the make_timeout() function) from set timeout=-1 to the value described above. This eliminates the need to create a CD/DVD and allows bootable image files to be stored only on the hard drive. Check the screenshots below. Once done, click OK and restart your system; your GRUB should work now.

When I initially installed Ubuntu, it loaded GRUB on C:, giving me the option at boot. Review the "Configuring GRUB 2" section above for specific entry and formatting guidance. So I went to my Win 7 machine and went looking for answers... Put the CD back in, changed the boot order to boot from the live CD.

grub.cfg is overwritten by certain GRUB 2 package updates, whenever a kernel is added or removed, or when the user runs update-grub.

Any OS can be booted in this manner from any USB or CD/DVD drive, circumventing BIOS restrictions. If not, select it; the options are "Reinstall Grub" and "unhide boot menu for 10 seconds". Locate the Ubuntu ISO file. Find the line that starts with GRUB_CMDLINE_LINUX_DEFAULT and add reboot=bios to the end. So I did that command, and ended up getting a GRUB menu that only showed my Windows boot. For me, running the commands below temporarily allowed me to boot the correct partition. This tool is aimed entirely at those new to Ubuntu who want to get past their booting issues and enjoy using Linux. The problem is that after a restart I realized that GRUB was installed on partition #2. Don't be discouraged!
It certainly needs to be the same general architecture (x86 versus x64). For what it's worth, I had a similar problem with this machine.

File Structure: GRUB 2 incorporates a totally revised directory and file hierarchy.

If it's not, it looks like your partitions are lined up correctly. Uploaded on 20.06.2011: This is another way to fix the "GRUB Error Rescue" problem. You can get a list with sudo blkid, like this:

/dev/sda1: LABEL="Windows XP" UUID="96A4390DA438F0FB" TYPE="ntfs"
/dev/sda3: LABEL="Ubuntu 11.04" UUID="b61fcae3-7744-45b4-95b9-7528d50a3652" TYPE="ext4"
/dev/sda5: LABEL="Se7en" UUID="A2DC9D71DC9D4109" TYPE="ntfs"
/dev/sda6: LABEL="Development" UUID="DEB455A1B4557CC9" TYPE="ntfs"
/dev/sda7: LABEL="EXTRA" UUID="D8A04109A040F014"

Using combinations of ls commands, locate the Ubuntu ISO image.

Dear Manivannan, I re-installed Ubuntu 12.04 LTS and it was successful.

I meant a live CD image - not a suggestion on a working system! –BinDelta

The default entry is highlighted and other selections may be made by the user until the timeout expires.
The timer continues until any key is pressed or the highlighted entry is selected by pressing ENTER.

Edit /etc/grub.d/10_linux and change line 188 to set linux_gfx_mode=keep. Once again, run update-grub after the change has been made.

Booting an ISO from a Menuentry: Ubuntu ISOs, as well as many utility ISOs and some other Linux operating systems, can be booted from a hard drive via a GRUB menuentry.

Please visit the Grub2/Upgrading community documentation for more information and instructions.

"I'm a noob Linux user and you helped me after an unsuccessful upgrade from Ubuntu 11.10 to Mint 12. Perfect tutorial, thanks again." "Amazing! You do beautiful work."

The serial command in GRUB 2 uses the same syntax as its GRUB Legacy counterpart (documented here). If I were to try to fix it, I'd boot off a LiveCD, chroot in, and attempt the GRUB installation much like it sounds like you've attempted. Anyhow, check your BIOS settings.

"Your article saved the day! I copied straight from Firefox into Terminal and noticed that it appears to need a space after the 'chroot'. Thanks again, Warwick." "Thanks - actually I missed step 3."

1. Select Try Ubuntu. 2. In the Live Desktop session, open a terminal. No changes are made to the GRUB 2 menu until the update-grub command is run as root.

"Great post, helped me out after installing Ubuntu alongside Windows 8."

The partition which has "Linux" under the System column is the drive on which your Ubuntu Linux is installed. "GRUB rescue explained in a sweet and simple way." "Now the boot-repair was successful." Can anyone please help me fix this?
I got that SuperGrub CD burned on the older machine (first day with Mandriva Spring - still discovering) while reinstalling Ubuntu 7.10 on the laptop (third try).

Just make sure that /dev/sda is your hard disk, which is the case most of the time. –palerdot Jul 11 '14 at 16:34

As I have /boot on another partition... God bless. :) –Binoy

"Yeah, thanks bro, it works fine for me." –Mustapha Labbo

I had my laptop's HDD separated into 3 partitions:
NTFS - Win XP install (primary)
NTFS - common space (extended)
ext4 - Ubuntu 10.10
using Unity.Entities;
using Unity.Mathematics;
using Unity.Physics;
using Unity.Transforms;

namespace DOTSNET.Examples.Pong
{
    public static class Utils
    {
        // Check collision between two colliders in world space.
        public static bool CheckCollision(PhysicsCollider a, RigidTransform aRT, PhysicsCollider b, RigidTransform bRT)
        {
            if (a.IsValid && b.IsValid)
            {
                Aabb boundsA = a.Value.Value.CalculateAabb(aRT);
                Aabb boundsB = b.Value.Value.CalculateAabb(bRT);
                return boundsA.Contains(boundsB) ||
                       boundsB.Contains(boundsA) ||
                       boundsA.Overlaps(boundsB) ||
                       boundsB.Overlaps(boundsA);
            }
            return false;
        }

        // For convenience: check collision between two Entities.
        public static bool CheckCollision(World world, Entity a, Entity b)
        {
            Translation aTranslation = world.EntityManager.GetComponentData<Translation>(a);
            Rotation aRotation = world.EntityManager.GetComponentData<Rotation>(a);
            RigidTransform aRT = new RigidTransform(aRotation.Value, aTranslation.Value);
            PhysicsCollider aCollider = world.EntityManager.GetComponentData<PhysicsCollider>(a);

            Translation bTranslation = world.EntityManager.GetComponentData<Translation>(b);
            Rotation bRotation = world.EntityManager.GetComponentData<Rotation>(b);
            RigidTransform bRT = new RigidTransform(bRotation.Value, bTranslation.Value);
            PhysicsCollider bCollider = world.EntityManager.GetComponentData<PhysicsCollider>(b);

            return CheckCollision(aCollider, aRT, bCollider, bRT);
        }
    }
}
While there are quite a few nice alternatives for doing this - I would mention RealVGA or the excellent OzVGA - here is another article discussing the subject. The reason is that it's always good to have a little insight into the things we use. To change the Pocket PC screen resolution, you only need a registry editor. If you are comfortable using little tools to do it, that's OK, but if you don't have those around, or want to do it yourself with "bare hands", there is an easy solution. Those of you familiar with screen resolution changes might know that this is the 192×192 "logical resolution", or for my device, 640×480 pixels in landscape mode. To change it, you need to open your registry editor and modify the keys LogicalPixelsX and LogicalPixelsY located under: You can modify the default values to whatever suits you best. That's right, you can enter a wide range of values. After the new values are entered, you need to soft reset the device. On my device, entering 384 for both LogicalPixelsX and LogicalPixelsY equals a resolution of 320×240 in landscape mode. Here is a screenshot with my registry editor, and then with the Pocket PC's new resolution. I've also tried to make the pixel resolution bigger. Putting 48 for both LogicalPixelsX and LogicalPixelsY resulted in a lot of screen space and tiny fonts/images: a logical resolution of 48 means 2560×1920. Wow, not even my desktop computer has such a huge resolution. How to calculate this? It's easy. The original "logical resolution" was 192×192, that is, 640×480 pixels. 384×384 is double the original (192×192), but everything gets bigger, so the actual screen resolution is halved: 640/2=320 by 480/2=240, where 2 = 384/192. For 48×48, we can follow the same idea: 48/192 = 1/4, so 640 / (1/4) = 640 * 4 = 2560, and 480 / (1/4) = 480 * 4 = 1920. You don't need to enter equal values for LogicalPixelsX and LogicalPixelsY.
If these values are not equal, the normal proportion between the horizontal and vertical axes will change. Here's a last example, with 512 for LogicalPixelsX and 128 for LogicalPixelsY: the screen resolution in landscape is 640 / (512/192) = 240 by 480 / (128/192) = 720. Remember to reset your device after changing the registry keys. It might be a good idea to power off the device by pressing the power button before pressing reset, so the registry is flushed (saved to storage). Be careful not to press the Power button together with reset, as on some devices this can do a hard reset (first press power off, THEN release it, THEN do the reset).
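The arithmetic above can be wrapped in a tiny helper. This is just a sketch of the article's formula; the 192 base and the 640×480 panel are the values for the author's device, so substitute your own.

```python
# Actual screen resolution implied by the LogicalPixelsX/LogicalPixelsY
# registry values, for a 640x480 panel whose default logical resolution
# is 192x192. Uses integer math: size / (logical/base) == size*base/logical.
def actual_resolution(logical_x, logical_y, base=192, width=640, height=480):
    return (width * base // logical_x, height * base // logical_y)

print(actual_resolution(384, 384))  # (320, 240)  - the article's first example
print(actual_resolution(48, 48))    # (2560, 1920)
print(actual_resolution(512, 128))  # (240, 720)  - the unequal-axes example
```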
Unless you've been living under a rock, you'll have noticed that this coming weekend we're organizing the Ubuntu Global Jam, a worldwide event where Ubuntu local community teams (LoCos) join in a get-together fest to have some fun while improving Ubuntu. As we're ramping up to a Long Term Support release, this is a particularly important UGJ and we need every hand on deck to ensure it not only meets but exceeds the standard of previous Ubuntu LTS releases. This is another article in the series of blog posts showcasing the events our community is organizing, brought to you by Andrej Znidarsic, from the Ubuntu Slovenian LoCo team.

Tell us a bit about your LoCo team

The Slovenian Ubuntu LoCo team was founded in 2005, and we try to spread Ubuntu mainly through translation work and through help and support for Slovenian Ubuntu users who don't have the means (either a language or a technical knowledge barrier) to solve problems themselves. Slovenian has been among the top translated languages for a while, which is quite impressive considering there are only 2 million native speakers and we don't have a big pool to draw translators from. We operate an IRC channel, website, forum, Facebook, Twitter and Google+ page. Offline, we meet at monthly Ubuntu hours and we do Global Jams 😃

What kind of event are you organizing for the upcoming Ubuntu Global Jam (UGJ)?

We are mostly going to focus on translations. This has traditionally been our strong point, as we exceeded 90% translation of Ubuntu about 2 years ago. Now we are focusing on translation quality and consistency. This time we want to put extra polish into the translation for the LTS. In addition to that, a couple of people will focus on creating videos explaining how to perform basic tasks in Ubuntu (installing Ubuntu, installing/removing software, Unity "tricks"...) and how to contribute to Ubuntu (how to start translating in Launchpad, how to report a bug, common translation mistakes in Slovenian).
We will also be test-driving Ubuntu 12.04 LTS and reporting the bugs we find along the way. More info can be found in our Ubuntu Global Jam announcement (in Slovenian only).

Is this the first UGJ event you're organizing?

Nope. We have already organized 3 Ubuntu Global Jams. The first one was online only and the last two have been organized offline. We are quite lucky to have Kiberpipa, which has kindly been providing us a great venue with a lot of space and internet access. So we mostly need to market the event, coordinate transport and grab some pizzas 😃

How do you think UGJ events help the Ubuntu community and Ubuntu?

The results of previous UGJs have typically meant about 4,000 to 5,000 translated messages for us, which is amazing for one day. Slovenian has been among the top translated languages for a while, which is quite impressive considering there are only 2 million native speakers. Good translation coverage helps to grow Ubuntu usage in Slovenia. We have also managed to report a couple of bugs, which improved overall quality. More importantly, on average about 15 people attend our Global Jam, so we meet and hang out with people we usually only see online. This vastly improves team cohesiveness. In addition, there are always some newcomers, which is fantastic for community growth. Also, it's fun 😃

Join the party by registering your event at the Ubuntu LoCo Portal!
Average current instead of peak current for diode current calculation in SMPS

In the switch-mode power supply circuit, I need to know why we use average current instead of peak current to estimate the maximum diode current. In addition, the writer uses the RMS value of the current for the switch power dissipation, but the average value for this diode. What is the difference between these operations? Here you can download the explanations: https://ufile.io/v1ha838p

The key is in the title: Worst-case Diode Dissipation.

I appreciate your help. Could you let me know why we don't use the RMS value instead of the DC value of the current?

Stop and think about it: how do you calculate the power? You have V and I -- what formula do you use to calculate the power? Maybe the book/course/resource/etc. has more hints as you read forward.

Thank you for your comment. I thought about it and read further, but nothing changed :) According to this page (https://www.analog.com/en/analog-dialogue/raqs/raq-issue-177.html), for a diode with time-varying voltage and current we should use the RMS value to calculate the power dissipation.

We can use the DC value instead of the RMS value because, in this case, the RMS value will be almost equal to the average value. For example, if I_out = 0.5A and ΔI = 0.4A, then during the T_OFF time when the diode conducts current, the RMS value is 0.513A and the average is 0.5A.

Thank you dear G36, but as I check the textbook, it gives a general rule for calculating the diode and switch power dissipation, and the writer didn't assume or calculate the RMS and average values in advance. I put the textbook in the updated post above.

You can never assume the load will be constant, nor the duty cycle or frequency, nor that the inductor current will stay in continuous conduction mode. But at max current, with the smallest R and biggest damping load, the Idc will rise to a rated level and the AC current will be relatively smaller, if it is still regulated for reasonably low ripple.
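The 0.513 A figure above can be checked numerically. For a DC level with a triangular ripple of peak-to-peak amplitude ΔI, the RMS is sqrt(I_dc² + ΔI²/12). A quick sketch with the values from the post (variable names are mine):

```python
import math

# Values from the post: 0.5 A average with 0.4 A (peak-to-peak) ripple
I_dc = 0.5
dI_pp = 0.4

# RMS of a DC level plus a triangular ripple
I_rms = math.sqrt(I_dc ** 2 + dI_pp ** 2 / 12)

print(f"I_avg = {I_dc:.3f} A, I_rms = {I_rms:.3f} A")  # 0.500 A vs 0.513 A
```

The ripple contributes only ΔI²/12 under the square root, which is why RMS and average stay so close for well-regulated ripple.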
But the average diode current could still be 2/3 of the RMS diode current and 1/2 the peak diode current, for example.

I appreciate your help and your explanation.

It's all pretty simple: we are always looking for the average power, which is the quantity related to our devices' temperature rise. So, we take the instantaneous power and average it over some time period.

$$ P=\frac{1}{T}\int_{T} v(t)i(t)\mathrm{d}t $$

Now this could be marked as a completed task; then it's up to you to find the voltage and current relations and do the dirty job. Engineering comes into play here: we all love approximations which, losing a little accuracy, return a clear picture of the problem. Let's take two classes of devices:

those that exhibit a rather constant voltage drop (diodes), and model them just as a constant voltage Vd

those better described by a voltage drop proportional to the current, and model them as a resistance Rd

In the first case

$$ P=\frac{1}{T}\int_{T} V_\mathrm{D}\,i(t)\mathrm{d}t=V_\mathrm{D}\;\underbrace{\frac{1}{T}\int_{T} i(t) \mathrm{d}t}_{I_\mathrm{AVG}}=V_\mathrm{D}I_\mathrm{AVG}$$

the average current is needed to calculate the average power. While in the second one

$$ P=\frac{1}{T}\int_{T} r_\mathrm{D}\,i^2(t)\mathrm{d}t=r_\mathrm{D}\;\underbrace{\frac{1}{T}\int_{T} i^2(t) \mathrm{d}t}_{I_\mathrm{RMS}^2}=r_\mathrm{D}I^2_\mathrm{RMS}$$

we happen to meet the definition of RMS current. So, in the case posted, the diode is modelled as a constant voltage drop and the average current is used, while the MOS is modelled by its rds(on) and the RMS current gives the dissipated power.

I appreciate it. Your point is very comprehensive.

The reason peaks are used for pulses is that the RMS can be computed knowing the duty factor. The RMS value for a pulse is derived from the peak and its duty factor. Then Vrms * Irms = Pavg. There is no "RMS power"; it's just a time-interval average of the instantaneous product \$V(t)*I(t)\$.
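The two closed forms above can be checked by numerically averaging the instantaneous power over one period. A sketch in pure Python; the triangular current waveform and the component values are illustrative assumptions, not from the thread:

```python
# Numerical check of P = V_D * I_AVG (diode) and P = r_D * I_RMS^2 (switch).
# Waveform assumption: DC level I_dc with triangular peak-to-peak ripple dI_pp.
N = 100_000
V_D, r_D = 0.7, 0.05        # constant-drop diode model; resistive switch model
I_dc, dI_pp = 0.5, 0.4      # amps

# One period of the triangular current, sampled N times
i = [I_dc + dI_pp * (abs(2 * k / N - 1) - 0.5) for k in range(N)]

P_diode = sum(V_D * ik for ik in i) / N         # (1/T) * integral of V_D * i dt
P_switch = sum(r_D * ik ** 2 for ik in i) / N   # (1/T) * integral of r_D * i^2 dt

I_avg = sum(i) / N
I_rms = (sum(ik ** 2 for ik in i) / N) ** 0.5

print(f"diode:  {P_diode:.5f} W  vs  V_D*I_AVG    = {V_D * I_dc:.5f} W")
print(f"switch: {P_switch:.5f} W  vs  r_D*I_RMS^2 = {r_D * (I_dc**2 + dI_pp**2 / 12):.5f} W")
```

The brute-force averages match the closed forms: the ripple cancels out of the diode's power (linear in i), but adds ΔI²/12 to the switch's (quadratic in i).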
Power is measured by its average, and this is estimated from the peak with some duty factor: \$Pavg=V(t)*I(t)*D\$ for t during the ON time in one cycle.

Other info: the package and copper layout determine the junction temperature rise from this average power. Conservative designs will derate this so that the maximum junction temperature does not reach 125 °C but rather 85 °C, for longer life.

Although there is no context or reference to this pasted text, I can't agree entirely with its conclusions as generalizations. The worst-case current is rarely at steady state and usually occurs during startup with a load, even if there is current limiting per cycle. The reason is that more energy is stored in the LC components than under the worst-case load, so during startup a soft start is critical to reliability and thermal stress. Thus the maximum power and temperature rise in all situations must be considered for every part.

Thank you very much, Tony. Could you let me know why we don't use the RMS value instead of the DC value of the current?

DC current is used when it is constant for some interval; then factors like duty cycle extend the period to a cycle. Average current does not work when power is $P_d=I^2R$, but when the real-time v(t)·i(t) product is computed over some interval, it can be averaged using D. For the diode they used the worst-case average in a CCM model. That would not be accurate if it were DCM.

I appreciate your comment; I've updated the post. The writer uses RMS current for calculating the power dissipation of the switch.

Show a link to your text.

I appreciate your help. I placed the link/textbook at the bottom of the main topic.

I don't know how familiar you are with ESR (C), DCR (L), RdsOn and DSOs, but you might learn nothing or a lot from my simulation. Note the efficiency you can see from the ratio of Pout (max) / Pin (-). There are much better bucks than this simulation, but here it is interactive and everything is adjustable with a mouse or menu. https://tinyurl.com/y6jnqwv5
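The thermal point above can be made concrete: junction temperature is ambient plus average power times the junction-to-ambient thermal resistance. A rough sketch; all three numbers are illustrative assumptions, not from the thread:

```python
# Back-of-the-envelope junction temperature from average power.
theta_ja = 60.0   # junction-to-ambient thermal resistance, K/W (package + copper dependent)
T_amb = 40.0      # worst-case ambient, deg C
P_avg = 0.5       # average dissipation, W (e.g. from Vd * Iavg, or V * I * D for a pulse)

T_junction = T_amb + P_avg * theta_ja
print(f"Tj = {T_junction:.0f} degC")  # 70 degC, below an 85 degC derating target
```

This is why average power, not peak power, is what the package has to handle: the thermal time constant of the die and copper smooths the per-cycle pulses.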
M: Wolfram Alpha and hubristic user interfaces - blasdel http://unqualified-reservations.blogspot.com/2009/07/wolfram-alpha-and-hubristic-user.html

R: DanielStraight Brilliant. Really. This article expresses perfectly everything that is wrong with Wolfram Alpha (which still can't tell me how much an elephant weighs, no matter how many ways I ask it). There are so many stories of users doing ridiculous things because of such hubristic user interfaces. Hang out with non-hackers long enough and you will clearly see how "they create an incomplete model of the giant electronic brain in their own, non-giant, non-electronic brains." Also, the phrase "non-solution to a non-problem" is the best thing I've read all week.

R: GavinB [http://www82.wolframalpha.com/input/?i=weight+of+an+indian+e...](http://www82.wolframalpha.com/input/?i=weight+of+an+indian+elephant) "weight of a" always seems to trigger the weight query. The problem is that it doesn't have general data for "elephant" and doesn't know to link it to "indian elephant." It would be a lot easier if you could just browse the data. They could really have just used a good online retailer "drill-down" style interface a la Newegg to make this easier.

R: jimbokun Seems to indicate the need for a search interface, which was hinted at by the article. Maybe the tool selection could happen after the initial query? The results from the default tool guessed by WA, then choices of other possible matches. The key word "weight" could suggest the weight query, then the key word "elephant" could match to give choices like Indian elephant, African elephant, etc., and allow the user to click which one she had in mind. Maybe some keyword matches for elephant from other databases listed after that. The problem, though, is they don't have the amount of data Google can use to help predict what a user most likely wants from a given query.

R: chibea Absolutely correct. I had great expectations when I first heard of Wolfram Alpha.
The things Mathematica is able to do, backed by a giant structured data store, seemed mind-blowing. As the author said, the visualization tools and the (still small) dataset _are_ impressive. I thought: we have all this data, all these visualization options, and now we can create, aggregate and munch all the facts of the world with an ad-hoc query language. But this simply does not work. There is hardly any functionality to really process data (aside from the predefined ways). And if you come to think about it: one cannot think of a way that more complex transformations could be expressed with WA's natural language interface.

For me a better interface would look like this: the basic interface is some kind of full-featured expression language to navigate the data hierarchy (perhaps similar to SQL). You could then build some GUI to interactively navigate/parametrize data/transformations to build more complex expressions. To make the system more easily accessible you could then - and only then - add the natural language recognition system on top of this, which tries to guess some expressions from your input string and gives you some suggestions to start from.

R: iamwil As an aside, I don't think I agree with the sentiment of "because it's hard we shouldn't even try". Sometimes you get dead-end fields. Sometimes, other useful or interesting things spring out of dead ends. That said, WA's interface leaves much to be desired. With Google (and its ilk), I can enter almost anything in, and get results. If they're not exactly what I want, I can refine it, little by little. It's like a gradient search in a sense. With WA, I can't enter almost anything in and get some sort of results that I can continuously refine. Instead, I get dead ends where it has no idea what I'm talking about, and I don't know what else to do to help it understand.
In fact, it reminds me of playing the old text-based adventure games, where you have to guess what you can do in the "dingy old cabin with a door to the north." You go crazy asking the computer to "pick up mirror" and it replies, "thoust cannot pickth up the mirror". You really have no idea what you can do, and no hint out of the myriad of possibilities. You end up playing a guessing game, instead of an adventure game. I think Google Squared has the right idea. You can enter something in, and it'll spit back some sort of results, usually with columns you don't want. Then you can proceed to refine those results, by adding columns you want and deleting ones you don't.

R: jerf 'As an aside, I don't think I agree with the sentiment of "because it's hard we shouldn't even try".' I prefer to phrase it as "If thousands of smart people before you have tried something, you need to know _why_ they failed and have some reasonable reason to believe that your approach is better, or you will just be wasting your time." I use this most often in the context of someone popping up and declaring that they wish to create a "totally visual" language. I don't want to stop that one-in-a-million guy who might make it work, but just blithely letting someone waste their time isn't very nice either. (That's where the whole "encouragement at all costs" ideology falls down; encouragement is not free and the costs are paid by the _encouragee_, not the encourager; think before you encourage somebody.) Usually I just see someone spout the same ideas that have been tried tens or hundreds of times before; the excited person should take the time to examine those efforts before continuing on, because the easy stuff has been tried and quite a bit of the hard stuff has been too. This goes for many things.

R: hvs Good article (if a bit long). Wolfram Alpha has exactly the kind of interface that you would expect from a company founded by a person who wrote a book called "A New Kind of Science."
Stephen Wolfram is a very intelligent person with a world-crushing, fire-breathing ego. This is unfortunate, because a more modest man could create a great tool like Alpha but then assume that he didn't know all of the ways that people would use it, and therefore not require interfacing with it through a broken natural language engine. Wolfram, on the other hand (like many bright people), assumes that only he knows the answers and doesn't bother to think that others might be able to do more with the tool than he could ever think of.

R: encoderer "This is unfortunate, because a more modest man could create a great tool like Alpha but then assume that he didn't know all of the ways that people would use it, and therefore not require interfacing with it through a broken natural language engine. Wolfram, on the other hand (like many bright people) assumes that only he knows the answers and doesn't bother to think that others might be able to do more with the tool than he could ever think of." I promise I'm not trolling, but that sounds a lot like Steve Jobs.

R: sp332 Steve Jobs only succeeded because he could really pitch his ideas. He could make people want his product ("reality distortion field"). Wolfram doesn't have that kind of charisma.

R: jimbokun Steve Jobs also was not responsible for the Newton, it should be noted, but the direct-control iPhone interface came out on his watch. The Mac offered far more direct control than the IBM PC. So I would suggest Jobs also has better instincts for interface design than Wolfram.

R: briguy44 Summary of the looooooooong article: the author feels that Wolfram should offer an alternative direct interface that does not require AI to interpret the meaning/goal of your natural language text query. Kind of like what Graffiti did for the Palm.
R: blasdel Hey, it's _only_ 4000 words on a single subject -- his usual style is to write a series of ten 40,000-word posts, using self-invented neologisms, about how the media & bureaucrats really control our democracy, and advocating for its replacement with Jacobite neo-cameralism.

R: johnnybgoode Just to clarify, blasdel didn't make that up. That really is his usual style.

R: wglb Very nicely written article. Aside from the Alpha discussion, an additional useful point is the concept of too-intelligent interfaces. This is related to two other observations. One is that for high-throughput data entry (also programming) a nice GUI is really not the thing you want. You want to be able to navigate entirely with the keyboard. The other is WYSIWYG document processors. Serious documentation (such as that for a fighter aircraft, which when printed out weighs more than the aircraft itself, or for documentation required by the FDA for new drugs) is not really done with WYSIWYG editors, but with markup editors of different kinds. If your document is 300,000 pages, you want the pagination to be done in batch.

R: schizoidboy "Give it up for the standardization of the screw." A memorable quote. On a related note, there was an interesting C-SPAN BookTV program recently where the author talked about the revolutionary standardization of international freight shipping containers: <http://tinyurl.com/mwmrwq> (booktv.org)

R: jimbokun It was interesting reading the article thinking "yeah, like the Newton" before the author mentioned the Newton, and then "yeah, like the way Google routes to different applications based on the kind of query" before the author mentioned how Google routes to different applications based on the kind of query. Great minds, I guess. :)

R: scott_s If I could double up-vote this article, I would. Surprisingly astute, and I think I learned something about tools and interfaces.
R: caffeine I think Wolfram put up their crazy interface in order to avoid just giving away free online Mathematica. Also to allow Stephen's ego to further blossom. _If_ you could use WA as a Mathematica console BUT with access to great built-in (crawled!) data and visualization tools, _then_ it would be useful.

R: tommy_chheng Wordy article, but I agree with his points. WA is trying to solve a non-existent problem for this particular use case. A person _wants_ to see a label rather than a graph.

R: aswanson Spot on. If only this were written the week before we were deluged with Wolfram Alpha articles and the attendant hype, here and everywhere else. On to Chrome OS, then Mencius. We need another reality check on the latest planet-changer.

R: stcredzero _We need another reality check on the latest planet-changer._ It's the same reality check as always: the majority of people don't understand it (and its implications) well enough to really use it. The computer revolution hasn't happened yet. It's underway, and it's going to take time to overcome cultural inertia.

R: fizx Even when I'm trying to be really obvious: [http://www01.wolframalpha.com/input/?i=calories+in+a+peanut+...](http://www01.wolframalpha.com/input/?i=calories+in+a+peanut+butter+and+jelly+sandwich)
"""List boot environments cli"""

import click
import pyzfscmds.system.agnostic
import pyzfscmds.utility as zfs_utility

import zedenv.lib.be
import zedenv.lib.check

from typing import Optional, List

from zedenv.lib.logger import ZELogger


def format_boot_environment(be_list_line: list,
                            scripting: Optional[bool],
                            widths: List[int]) -> str:
    """
    Formats list into column separated string with tabs if scripting.
    """
    if scripting:
        return "\t".join(be_list_line)

    fmt_line = ["{{: <{width}}}".format(width=(w + 1)) for w in widths]
    return " ".join(fmt_line).format(*be_list_line)


def configure_boot_environment_list(be_root: str,
                                    columns: list,
                                    scripting: Optional[bool]) -> list:
    """
    Converts a list of boot environments with their properties to be printed
    to a list of column separated strings.
    """
    boot_environments = zedenv.lib.be.list_boot_environments(be_root, columns)

    # Add an 'active' column. The other columns were ZFS properties, and the
    # active column is not, which is why it is added separately.
    unformatted_boot_environments = []

    for env in boot_environments:
        if not zfs_utility.is_snapshot(env['name']):
            # Add name column
            boot_environment_entry = [zfs_utility.dataset_child_name(env['name'])]

            # Add active column
            active = ""
            if pyzfscmds.system.agnostic.mountpoint_dataset("/") == env['name']:
                active = "N"
            if zedenv.lib.be.bootfs_for_pool(
                    zedenv.lib.be.dataset_pool(env['name'])) == env['name']:
                active += "R"
            boot_environment_entry.append(active)

            # Add mountpoint column
            dataset_mountpoint = pyzfscmds.system.agnostic.dataset_mountpoint(env['name'])
            if dataset_mountpoint:
                boot_environment_entry.append(dataset_mountpoint)
            else:
                boot_environment_entry.append("-")

            # Add origin column
            if 'origin' in env:
                origin_list = env['origin'].split("@")
                origin_ds_child = origin_list[0].rsplit('/', 1)[-1]
                if zfs_utility.is_snapshot(env['origin']):
                    origin = f'{origin_ds_child}@{origin_list[1]}'
                else:
                    origin = env['origin']
                boot_environment_entry.append(origin)

            # Add creation column
            if 'creation' in env:
                boot_environment_entry.append(env['creation'])

            unformatted_boot_environments.append(boot_environment_entry)

    columns.insert(1, 'active')
    columns.insert(2, 'mountpoint')

    # Set minimum column width to name of column plus one
    widths = [len(c) + 1 for c in columns]

    # Check for largest column entry and use as width.
    for ube in unformatted_boot_environments:
        for i, w in enumerate(ube):
            if len(w) > widths[i]:
                widths[i] = len(w)

    # Add titles
    formatted_boot_environments = []

    if not scripting:
        titles = [t.title() for t in columns]
        formatted_boot_environments.append(
            format_boot_environment(titles, scripting, widths))

    # Add entries
    formatted_boot_environments.extend(
        [format_boot_environment(b, scripting, widths)
         for b in unformatted_boot_environments])

    return formatted_boot_environments


def zedenv_list(verbose: Optional[bool],
                # alldatasets: Optional[bool],
                spaceused: Optional[bool],
                scripting: Optional[bool],
                # snapshots: Optional[bool],
                origin: Optional[bool],
                be_root: str):
    """
    Main list command. Separate for testing.
    """
    ZELogger.verbose_log({
        "level": "INFO", "message": "Listing Boot Environments:\n"
    }, verbose)

    columns = ["name"]

    # TODO: Complete
    # if spaceused:
    #     columns.extend(["used", "usedds", "usedbysnapshots", "usedrefreserv", "refer"])
    # if all_datasets: ...
    # if snapshots: ...

    if origin:
        columns.append("origin")

    columns.append("creation")

    boot_environments = configure_boot_environment_list(be_root, columns, scripting)

    for list_output in boot_environments:
        ZELogger.log({"level": "INFO", "message": list_output})


@click.command(name="list",
               help="List all boot environments.")
@click.option('--verbose', '-v',
              is_flag=True,
              help="Print verbose output.")
# @click.option('--alldatasets', '-a',
#               is_flag=True,
#               help="Display all datasets.")
@click.option('--spaceused', '-D',
              is_flag=True,
              help="Display the full space usage for each boot environment.")
@click.option('--scripting', '-H',
              is_flag=True,
              help="Scripting output.")
# @click.option('--snapshots', '-s',
#               is_flag=True,
#               help="Display snapshots.")
@click.option('--origin', '-O',
              is_flag=True,
              help="Display origin.")
def cli(verbose: Optional[bool],
        # alldatasets: Optional[bool],
        spaceused: Optional[bool],
        scripting: Optional[bool],
        # snapshots: Optional[bool],
        origin: Optional[bool]):
    try:
        zedenv.lib.check.startup_check()
    except RuntimeError as err:
        ZELogger.log({"level": "EXCEPTION", "message": err}, exit_on_error=True)

    zedenv_list(verbose,
                # alldatasets,
                spaceused,
                scripting,
                # snapshots,
                origin,
                zedenv.lib.be.root())
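As a toy illustration of the column-layout logic above, here is a dependency-free sketch (the rows and column names are made up, not real zedenv output):

```python
def fmt(row, scripting, widths):
    """Tab-separate when scripting, else left-pad each cell to its column width."""
    if scripting:
        return "\t".join(row)
    fmt_line = ["{{: <{width}}}".format(width=(w + 1)) for w in widths]
    return " ".join(fmt_line).format(*row)

columns = ["name", "active", "mountpoint", "creation"]
rows = [["default", "NR", "/", "2024-01-01"],
        ["backup-be", "", "-", "2024-01-02"]]

# Minimum width is the column title plus one; widen to the largest entry.
widths = [len(c) + 1 for c in columns]
for row in rows:
    for i, cell in enumerate(row):
        widths[i] = max(widths[i], len(cell))

print(fmt([c.title() for c in columns], False, widths))
for row in rows:
    print(fmt(row, False, widths))
```

The `-H` scripting flag simply swaps the padded columns for tab separators, which is easier for tools like `awk` and `cut` to parse.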
You can choose to install either MariaDB or MySQL, outlined in the following two sections.

# pacman -S mariadb

If you run the Btrfs filesystem, you should consider disabling copy-on-write for the database directory for performance reasons:

# chattr +C /var/lib/mysql/

# mysql_install_db --user=mysql --basedir=/usr --datadir=/var/lib/mysql

Start MariaDB, and make it start after every boot:

# systemctl enable --now mariadb

Complete the recommended security measures. At the beginning, press ENTER for the current root database password, set a new root password, and press ENTER to answer yes to all further prompts.

Although MariaDB is strongly recommended, you can alternatively install MySQL from the Arch Linux User Repository (AUR). Understand that AUR packages are not officially supported, may be updated less frequently, and because they are not necessarily submitted by a vetted Trusted User, their PKGBUILD and related files should be reviewed for any suspect code. That said, as of early 2019, the current AUR maintainer for mysql is "Muflone". Although not a vetted Trusted User who can publish to the official repositories, he has been a valuable contributor to Arch since 2011, maintains about 250 AUR packages (many of them popular) and has never done anything suspect.

To install MySQL, compile and install the AUR package mysql. MariaDB and MySQL have very similar post-install steps.

# mysqld --initialize --user=mysql --basedir=/usr --datadir=/var/lib/mysql

Start MySQL, and make it start after every boot:

# systemctl enable --now mysqld

Complete the recommended security measures. An automatically generated temporary root database password was shown by the previous command. Set a new root password. Respond with y to all further yes/no prompts, and select 2 for the "STRONG" password validation policy.

Note that you cannot have MariaDB and MySQL installed on the same system, as MariaDB is made to be a drop-in replacement and has files of the same name.
Also, when compiling with less than 4GB total RAM (physical RAM + swap), you may encounter a "memory exhausted" error while compiling.

To connect to MariaDB or MySQL as the root database user, run the following:

$ mysql -u root -p

MariaDB [(none)]> quit

You may want to consider configuring a firewall. By default, MariaDB will listen on port 3306, not only on localhost, but also on your public IP address. By default, MariaDB will only approve incoming connections from localhost, but external attempts will still reach MariaDB and get an error: Host... is not allowed to connect to this MariaDB server. Although MariaDB is considered quite secure, it's more secure to have a firewall not even deliver external packets to the MariaDB server unless absolutely necessary. Even if direct remote access is desired, blocking the traffic with a firewall and using a VPN would be more secure.

By default, pacman will upgrade MariaDB when new versions are released to the official Arch repositories, when you upgrade your entire Arch system by running the following:

# pacman -Syu

It is recommended to configure pacman to not automatically install upgrades to MariaDB. When an upgrade is released and you upgrade your entire Arch system, pacman will let you know a new version is available. Edit /etc/pacman.conf, and add the following:

IgnorePkg = mariadb*

It's a good idea to back up your database before upgrading. When pacman shows you there is a MariaDB upgrade, force-upgrade the packages:

# pacman -S mariadb mariadb-clients mariadb-libs

If you're running the AUR MySQL package, pacman never automatically compiles and installs new versions from the AUR, so the above steps are unnecessary, but the ones below are still required. After an upgrade, the package's .install script will alert you to perform the following steps, but blocking the automatic upgrade ensures you won't miss them.
Restart MariaDB, to load the new version:

# systemctl restart mariadb

Check and update your tables to conform with the new version:

# mysql_upgrade -u root -p
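The upgrade procedure above, collected into one sequence for convenience (run as root; the dump filename is illustrative):

```shell
# Back up all databases before upgrading
mysqldump -u root -p --all-databases > /root/all-databases-backup.sql

# Force-upgrade the pinned packages once pacman reports a new version
pacman -S mariadb mariadb-clients mariadb-libs

# Load the new version, then update the tables to match it
systemctl restart mariadb
mysql_upgrade -u root -p
```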
Android Dev: Avoiding Internal Getters/Setters?

I was reading this section in the Android Dev Guide (here) and I was wondering: what is a "virtual method call", and what does it mean when it says "locally" using a getter/setter? I'm trying to figure out whether they're saying to avoid using methods EVER (for instance, a method from an instanced object), or just inside a class you're already working in to get a variable. To sum it up: if I'm in a different class and I want to know the value of a variable in another class, will it be more expensive to do otherclass.getX() than to do otherclass.x? Or is the performance the same whether I use a method or access a public variable directly, as long as it's not within the current class?

Using getters and setters is more expensive because the VM will first look up the method in a virtual method table and then make the call. For faster access on Android, directly accessing member variables reduces that overhead.

So it's just up to me to decide whether I want the convenience of a method or the performance gain of accessing a public member?

In that article, they are referring to internally accessing private members, and doing so via the field directly rather than calling getX() inside the same class. It is still recommended (and common) to make members private and provide public accessor methods for external use. HTH

What the article is basically saying is to avoid the getter/setter pattern when you can get away with it. In Java, all methods that aren't marked with the private or final modifiers are virtual, so they are saying that if your code isn't an interface to be implemented by other classes, just access the fields directly. Most likely the reason they point this out is that, traditionally, the Java recommendation has been to always use the getter/setter pattern so that your variables can be kept private. However, in Android, you can take a pretty severe performance hit if you add this additional layer of abstraction.
So, in summary: if you're creating an API that other classes will implement, then maybe it's worth it to take the performance hit of getters/setters. But in your own classes, which all interact with each other and don't enforce a contract, just access the variables directly. External classes accessing your class will also see the same performance gain by accessing the variable directly, but at some point you need to do a performance-to-maintainability assessment to see whether you are comfortable making those variables public, or whether it's worth it to take the hit and use getter/setter methods.

There are MANY, MANY good reasons to always use getters (and often use setters) in Java, and it's still a great practice to adhere to even when writing code for 'Droid. While naively there is a higher cost on Dalvik for virtual methods (i.e. getters/setters) than for instance field access, you can avoid this by using ProGuard to inline these calls at build time! In this way you adhere to best practices when coding while avoiding any performance hit.
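A minimal Java sketch of the trade-off being discussed (class and member names are illustrative, not from the Android docs; the cost comments reflect the pre-JIT Dalvik behavior described above):

```java
// Direct field access vs. getter/setter on a simple data holder.
class Sprite {
    public int x;                 // direct field access: a single field-get instruction

    private int y;

    public int getY() {           // virtual method: vtable lookup + invoke overhead
        return y;
    }

    public void setY(int y) {
        this.y = y;
    }
}

public class FieldAccessDemo {
    public static void main(String[] args) {
        Sprite s = new Sprite();
        s.x = 10;                 // cheap on any VM
        s.setY(20);               // costs a virtual call unless inlined (e.g. by ProGuard)
        System.out.println(s.x + s.getY());  // prints 30
    }
}
```

With a trivial getter like getY(), a build-time optimizer can typically inline the call, so both styles end up costing the same in optimized release builds while the source keeps its encapsulation.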
2024 % 4 == 0

Another revolution around the sun! This was a pretty fun and interesting year. I got to work on some interesting projects, and learned a lot. I am going to try and use my GitHub activity to recap.

- I helped a friend modernize their Laravel codebase. Dockerized it for easier development, and added a CD pipeline. (Probably going to be released by the end of this year)
- I joined Hindsight Journal, a creative non-fiction club at CU, as their "webmaster", and we moved away from Squarespace to our own custom static site generator.
- I did some YunoHost stuff with listmonk and audiobookshelf
- I found out that the instructor for my astrophysics class was behind @ThreeBodyBot. For my final project, I ported the codebase to run in the browser itself. Ended up getting an A 😎
- Won HackCU, my first hackathon in a few years. We built a timeboxing app similar to Motion / Reclaim.AI. Cleaned up the codebase and published it to the App Store as TimeSlicerX, making it my first published app.
- Got into mountain biking!

Summer was more relaxing. I mainly worked on some maintenance patches for my projects, and did some more freelancing stuff.

- Learned Tkinter for a client's project. Working with PyInstaller to create signed executables for both macOS and Windows was not fun. Also, the stock Tk look on Windows is terrible.
- Continued working on a research project using computer vision to analyse a lateral flow assay. Tried porting it to use OpenCV.js, but it wasn't reliable enough. I might look into directly working with OpenCV/Vision Framework for an iOS app.
- Won a couple more hackathons. I might summarize my hackathon experience in a different blog post.
- I gave up being the "webmaster" for Hindsight, and decided to become the club's business manager. We moved to Wix.
- Had fun re-learning all the reverse engineering stuff for my Systems class.
- Tried Advent of Code. Will be back :~)
- Created an alternative to Simplify.Jobs and worked on autofilling resumes without needing a browser extension (the current solution does require that you disable a few security flags for this to work). One solution might be to wrap our website as an Electron application.
- Started working on swift-gopher, a Swift library with both client and server implementations of the Gopher protocol.
- Ended up using swift-gopher to build iGopherBrowser, a modern Gopher client for iOS / macOS. This is my first publicly available macOS app.

After the end of the fall semester I ended up getting my wisdom tooth removed. Took me out for 10 days. I also did a ton of other stuff, but I am not sure how much I want to be sharing on my blog here. Maybe as I write more I will get more comfortable with sharing more information.

So, what are my plans for 2024? Learn. Build. Ship.

- Continue homebrewing
- Learn assembly
- Get better at designing stuff
- Improve my handwriting
- Do a deeper dive into the math of Machine/Deep Learning, before I get back into it
Deployment doesn't use RELAYHOST for delivery

Hello. I'm using a helm chart for the deployment:

repoURL: https://bokysan.github.io/docker-postfix/
chart: mail
revision: v4.1.0

Among the environment variables used, I have set RELAYHOST as per the documentation, to point at the relay to use: [smtphm.sympatico.ca]:587

When the pod starts, I can see that the variable is seen and interpreted correctly, from some of the starting logs:

NOTE Forwarding all emails to [smtphm.sympatico.ca]:587 using username<EMAIL_ADDRESS>and password (redacted).

But later in the logs, I see that it tries to send mails to the MX record of the target domain (and timing out, but that's not the point; it should not even try):

2024-02-25T16:29:46.087863-05:00 INFO postfix/relay/smtp[879]: connect to servinfo-ca.mail.protection.outlook.com[<IP_ADDRESS>]:25: Connection timed out

I don't see any outgoing emails trying to use the relay as expected. The image I'm using is boky/postfix:v4.1.0 (coming from the helm chart). Any parameter I've missed?

What does your main.cf say? Could this be related to #168?

relayhost isn't defined in main.cf

I tried a different approach. Instead of setting the RELAYHOST environment variable, I have set relayhost in the config.postfix section to see if that changes anything. But I'm seeing something in the logs that may interfere. After I see that my relayhost is being applied in the logs:

‣ INFO Applying custom postfix setting: relayhost=[smtphm.sympatico.ca]:587

I'm seeing that postfix is being updated:

‣ NOTE Executing any found custom scripts... running bash /docker-init.db/pcre-add.sh

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
Get:1 http://deb.debian.org/debian bookworm InRelease [151 kB]
Get:2 http://deb.debian.org/debian bookworm-updates InRelease [55.4 kB]
Get:3 http://deb.debian.org/debian-security bookworm-security InRelease [48.0 kB]
Get:4 http://deb.debian.org/debian bookworm/main amd64 Packages [8786 kB]
Get:5 http://deb.debian.org/debian bookworm-updates/main amd64 Packages [12.7 kB]
Get:6 http://deb.debian.org/debian-security bookworm-security/main amd64 Packages [143 kB]
Fetched 9197 kB in 3s (3498 kB/s)
Reading package lists...
Building dependency tree...
Reading state information...
12 packages can be upgraded. Run 'apt list --upgradable' to see them.
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
Reading package lists...
Building dependency tree...
Reading state information...
The following additional packages will be installed:
  postfix postfix-lmdb
Suggested packages:
  procmail postfix-mysql postfix-pgsql postfix-ldap postfix-sqlite resolvconf
  postfix-cdb mail-reader postfix-mta-sts-resolver ufw postfix-doc
The following NEW packages will be installed:
  postfix-pcre
The following packages will be upgraded:
  postfix postfix-lmdb
2 upgraded, 1 newly installed, 0 to remove and 10 not upgraded.
Need to get 2183 kB of archives.
After this operation, 403 kB of additional disk space will be used.
Get:1 http://deb.debian.org/debian bookworm/main amd64 postfix-lmdb amd64 3.7.10-0+deb12u1 [337 kB]
Get:2 http://deb.debian.org/debian bookworm/main amd64 postfix amd64 3.7.10-0+deb12u1 [1508 kB]
Get:3 http://deb.debian.org/debian bookworm/main amd64 postfix-pcre amd64 3.7.10-0+deb12u1 [338 kB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 2183 kB in 0s (4628 kB/s)
(Reading database ... 10928 files and directories currently installed.)
Preparing to unpack .../postfix-lmdb_3.7.10-0+deb12u1_amd64.deb ...
Removing lmdb map entry from /etc/postfix/dynamicmaps.cf
Unpacking postfix-lmdb (3.7.10-0+deb12u1) over (3.7.9-0+deb12u1) ...
Preparing to unpack .../postfix_3.7.10-0+deb12u1_amd64.deb ...
debconf: unable to initialize frontend: Dialog
debconf: (TERM is not set, so the dialog frontend is not usable.)
debconf: falling back to frontend: Readline
debconf: unable to initialize frontend: Readline
debconf: (This frontend requires a controlling tty.)
debconf: falling back to frontend: Teletype
Postfix Configuration
---------------------
Please select the mail server configuration type that best meets your needs.
 No configuration: Should be chosen to leave the current configuration unchanged.
 Internet site: Mail is sent and received directly using SMTP.
 Internet with smarthost: Mail is received directly using SMTP or by running a utility such as fetchmail. Outgoing mail is sent using a smarthost.
 Satellite system: All mail is sent to another machine, called a 'smarthost', for delivery.
 Local only: The only delivered mail is the mail for local users. There is no network.
  1. No configuration   2. Internet Site   3. Internet with smarthost   4. Satellite system   5. Local only
General mail configuration type:
Use of uninitialized value $_[1] in join or string at /usr/share/perl5/Debconf/DbDriver/Stack.pm line 111.
The 'mail name' is the domain name used to 'qualify' _ALL_ mail addresses without a domain name. This includes mail to and from <root>: please do not make your machine send out mail from<EMAIL_ADDRESS>unless<EMAIL_ADDRESS>has told you to. This name will also be used by other programs. It should be the single, fully qualified domain name (FQDN).

..... and there are more lines ....

Maybe this is where the issue comes from?

Confirmed, this is related to the script addition: the custom script that installs postfix-pcre started (some time ago) pulling in an upgrade of postfix itself, and that upgrade rewrote main.cf after the fact without the proper configuration.
To avoid the issue without having to maintain a docker image myself, I changed the script so that an upcoming postfix upgrade would not cause a problem. I set:

export DEBIAN_FRONTEND=noninteractive

(to avoid interactive questions on installs) and then started my script by upgrading postfix with:

apt install -y --no-install-recommends postfix

(to avoid main.cf being rewritten by the apt install of postfix). THEN I can add modules, such as:

apt install postfix-pcre
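Put together, a hardened version of the custom init script might look like this (a sketch; the path and file name /docker-init.db/pcre-add.sh come from the logs above, and apt-get is substituted for apt, since scripting with apt is exactly what triggers the "apt does not have a stable CLI interface" warning seen in the logs):

```shell
#!/usr/bin/env bash
# /docker-init.db/pcre-add.sh -- sketch of a custom script for boky/postfix.
# Assumption: this runs inside the container, as root, before postfix starts.
set -euo pipefail

# No tty in the pod, so never let debconf ask interactive questions.
export DEBIAN_FRONTEND=noninteractive

apt-get update

# Upgrade postfix first, on our own terms, so that installing postfix-pcre
# below does not drag in a postfix upgrade that rewrites /etc/postfix/main.cf
# behind our back.
apt-get install -y --no-install-recommends postfix

# Now it is safe to add the extra map type.
apt-get install -y --no-install-recommends postfix-pcre
```

The ordering is the whole point: once postfix is already at the newest available version, the later install of postfix-pcre has no reason to touch the postfix package or its configuration.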
Recent community posts

I think many of the complaints people have with long games are about the grind. Without meaningless repetition, and if new things keep showing up, then I think 40 hours is a good amount of gameplay for the genre.

It is a typing infinite runner where you type commands to run from an unstable vampire. You tried befriending them, but now they are here to unleash their wrath. Can I submit it here, or should I work on a different game? I made it yesterday. I was working on lore but couldn't decide on any particular story that hasn't already been done before. I also wanted to participate in Ludum Dare. I saw their theme and everything just clicked.

The game is a typing game where the vampire you befriended has now turned unstable (for lore reasons) and is now running after you. Can I submit it here?

My 4KB Ludum Dare game is an infinite runner with typing mechanics. Type: Unstable Vampire is my entry to Ludum Dare 49. I am participating in the compo despite the harsher criticism it might bring, as the option in the forms always suggests. But so far I have only seen constructive suggestions on the site, hence I confidently present to you this 4KB typing game. With the theme being "unstable" I wanted to go the route of horror and running from someone unstable, but I also wanted it to be lighthearted and fun; hence the backstory you see here came into being. Ludum Dare gives a great opportunity to learn time management and to learn from others' work, and this game is a result of that. I have probably spent hours checking out the posts everyone is making there.

I'm further interested in introducing humour elements to the game; a background on who the player character is and why they decided to befriend a vampire could be an epic story. Of course, the description itself serves as supplementary material to the game. Consider the presentation an ode to retro arcade game descriptions.
One can't go wrong with more weapons and incremental difficulty in the game, as long as the number of commands to type doesn't increase. Thank you for checking out this log and the game; in case you haven't yet, did I tell you it's just 4 kilobytes! You can find the game at https://dobryncat.itch.io/type

This is the first time I am participating in #devtober. I always wanted to work on a project within a jam like this. I'd like to keep working on this game thanks to the motivation I get from seeing everyone else's work here. I am specifically looking for feedback on what could be added to the project to keep it interesting while also adding levels of achievement as you keep moving forward and your score increases. I was thinking about something simple like a "bling!" noise, since a wall of text won't be read at that speed; however, a bling noise might just become an annoyance and not feel like an achievement at all. Thanks for checking this out. All feedback is welcome.

You befriended a vampire, but now the vampire has turned unstable. Escape for your life by typing run, jump, and more commands into the terminal as the situation arises. But the vampire isn't alone; friends are coming. If only you had a bow and an arrow! This game works perfectly for a fast reflex and typing showdown. Challenge your friends and peers; even non-gamers might enjoy it as a challenge. And by the way, did I mention it's just 4KB? Try out Type: Unstable Vampire at https://dobryncat.itch.io/type today.

I got this error when submitting to a jam: "We seem to be having server errors, please try again later." The problem is I also got this error in April, I think; I ignored it and was just sad that I couldn't submit my project. Please look into it, or suggest how to fix this. I hope this is the right place to post this.
OCamlMakefile is a generic Makefile that greatly facilitates the process of compiling complex OCaml projects. For a basic OCaml program or library that doesn't use any library besides the standard library, just copy OCamlMakefile to the current directory and create the following Makefile:

RESULT = myprogram
SOURCES = \
  mymodule1.mli mymodule1.ml \
  myparser.mli myparser.mly mylexer.mll mymodule2.ml \
  mymainprogram.ml
OCAMLMAKEFILE = OCamlMakefile
include $(OCAMLMAKEFILE)

This is already a fairly complex program: it has five compilation units and uses ocamlyacc and ocamllex. Only the source files must be given, plus the .mli files that are produced by ocamlyacc (myparser.mli in the example). The included OCamlMakefile provides a variety of targets. For details please refer to the documentation of OCamlMakefile, but here are the main ones:

nc            make a native code executable
bc            make a bytecode executable
ncl           make a native code library
bcl           make a bytecode library
libinstall    install a library with ocamlfind
libuninstall  uninstall a library with ocamlfind
top           make a custom toplevel from all your modules
clean         remove everything that matches one of the files that could have been automatically created by OCamlMakefile

The recommended tool for installing OCaml libraries is Findlib (the ocamlfind command), since it knows where packages are installed, loads their dependencies, and knows which file should be used in a given situation. If you do not use Findlib, loading a regular runtime library can be done by setting the LIBS and INCDIRS variables.
LIBS is the list of the names of the library files (xxx.cma or xxx.cmxa) without the .cma or .cmxa extension:

LIBS = str unix

If you use non-standard libraries that are not installed in the same directory as the standard library, the INCDIRS variable must contain the list of these directories:

INCDIRS = /path/to/somelibdirectory/

Usually this requires some preliminary configuration, traditionally performed with a configure script, since the path can vary from one installation to another. An exception is when using standard directories which are not included in the search path by default, such as /path/to/stdlib/camlp4. In this case, this should be enough and portable:

INCDIRS = +camlp4

OK, but we prefer libraries that are installed with ocamlfind. To use them with OCamlMakefile, the PACKS variable must be set:

PACKS = netstring num

Many libraries are part of a standard OCaml installation but are not part of the standard library, such as unix, str, and bigarray. These libraries are automatically considered as Findlib packages. Any package which is required by a given package (e.g. netstring requires unix and pcre) is automatically loaded.

How about Camlp4 syntax extensions? Some packages may define syntax extensions, which are bytecode units that are loaded by the preprocessor. With OCamlMakefile, the preprocessor to be used can be defined in the first line of each file, so it could be something like:

(*pp camlp4o -I /path/to/pa_infix pa_infix.cmo *)

Well, this form is not very convenient, so we will use the same preprocessor for each file and store its value in the PP variable of the Makefile:

PP = camlp4o -I /path/to/pa_infix pa_infix.cmo
export PP

So each OCaml file will start with:

(*pp $PP *)

This way of defining the preprocessor is still not satisfying: we would like to take advantage of ocamlfind to load the appropriate syntax extension files. For this, we will use the camlp4find script.
Every package which we use will be listed as usual in the PACKS variable, and camlp4find will call ocamlfind to determine which syntax extensions to load:

PACKS = unix micmatch_pcre \
        pa_tryfinally pa_lettry pa_forin pa_forstep pa_repeat pa_arg
PP = camlp4find $(PACKS)
export PP

Full example using ocamllex and the unix and micmatch_pcre libraries. The Makefile would be:

RESULT = myprogram
SOURCES = mymodule1.mll mymodule2.mli mymodule2.ml mymainmodule.ml
PACKS = unix micmatch_pcre
PP = camlp4find $(PACKS)
export PP
OCAMLMAKEFILE = OCamlMakefile
include $(OCAMLMAKEFILE)

And each .ml or .mli file starts with:

(*pp $PP *)
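For completeness, here is what one of the hand-written modules from the example might look like (a sketch; the module body is invented for illustration, and only the first-line (*pp $PP *) comment is prescribed by the scheme above):

```ocaml
(*pp $PP *)
(* mymainmodule.ml -- hypothetical entry point of the example project.
   The first-line comment above tells OCamlMakefile to pipe this file
   through the preprocessor command stored in the PP variable; to plain
   ocaml it is just an ordinary comment, so the file still compiles
   even without any preprocessor configured. *)

let greeting = "hello"

let () = print_endline greeting
```

Because the directive is an ordinary OCaml comment, files written this way remain valid OCaml whether or not a syntax extension is actually in play, which is what makes the convention portable across build setups.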