I'm running a test of Horizon View (5.2 build-987719) on a Dell R720 with an Nvidia GRID K2 card. The Nvidia driver is version 304.76. I've set up a pool on this node to test out the hardware accelerated graphics, but am encountering an odd issue. I'm toying with the molecular modeling package Schrodinger Maestro, loading and rotating 3D protein models. In a fresh session, the fps is initially very poor, to the point that the software would not be usable by our users (< 10 fps). On the ESXi host, 'nvidia-smi' shows low to no GPU utilization, in the 0-5% range. Using the PCoIP log viewer, I see low-ish PCoIP server utilization (< 20%) and relatively low Image Tx bandwidth utilization (~5000 Kb/sec). At some point (what triggers this I'm not entirely sure; it has happened a couple of times after I opened the task manager, though this seems an odd cause, and it has happened on its own after 30 seconds or so), the GPU kicks in: 'nvidia-smi' shows 25-30% utilization, PCoIP server utilization jumps to ~90%, and Image Tx bandwidth bumps to ~20000 Kb/sec. The protein rotation smooths out, and everything appears to work as it should from that point on. The VMs run with 6 CPUs and 32 GB RAM. I've run through the PCoIP optimization guide (and disabled 'build to lossless' as well as tuned audio bandwidth), and also ran through the Windows 7 View optimization guide (and made use of the recommended optimization script). I'm seeing this issue with only one View desktop running on the host.

I have the exact same setup as you; any chance you could share your testing application so I can see if I get similar results?

It sounds like the app is generating an extremely high FPS, likely in the 1000's. There are some enhancements we are making for high FPS apps in our driver that should help in the future. If this is it... try adding this registry key. In the following location: HKEY_LOCAL_MACHINE\SOFTWARE\VMware, Inc.\VMware SVGA DevTap, create the DWORD value MaxAppFrameRate (the name used later in this thread) and set it to match your PCoIP FPS. If, for example, you have the default of 30 fps, set it to 30. (A .reg sketch of this setting appears after the thread.)

I don't think Schrodinger offers a trial version. In trying to find something else to test with, I discovered a couple of applications that simply crash on this setup when trying to render certain things (PyMOL, and Cambridge Structural Database Mercury crashing when trying to render wire/tube representations of molecules); possibly a separate issue, but maybe not. I traced the OpenGL calls hoping something would stand out, but I lack the expertise to spot an issue. I'm going to give wponder's suggestions a go and see if it helps.

Doesn't look like setting MaxAppFrameRate solved the issue; symptoms as before. I'd like to try this with vDGA as well, though I've not had luck getting that to work just yet.

Strange.. can you send a screenshot of the reg setting? I can take a look just to double check it.

Sure, screenshot attached:

Try setting it to "0" instead; that disables any throttling.

I found that setting the pool to automatic makes everything behave like you describe; it takes around 30 seconds for the GPU to kick in. However, when I changed the pool 3D setting to Hardware, then it goes instantly!

Thanks for the tip, though this pool was already configured to use only hardware rendering. No luck with MaxAppFrameRate = 0. I'm starting to suspect the application may be doing something strange; it may also be that SVGA is simply not capable of handling workloads like this.
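For reference, a minimal .reg sketch of the setting discussed in the thread above. The value name MaxAppFrameRate is taken from the later posts, and 0x1e (30 decimal) matches the default PCoIP frame rate mentioned; treat this as an illustration for a test pool, not official guidance.

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\VMware, Inc.\VMware SVGA DevTap]
; 0x1e = 30 decimal, i.e. the default PCoIP frame rate.
; Setting the value to 0 disables throttling, as suggested in the thread.
"MaxAppFrameRate"=dword:0000001e
```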
OPCFW_CODE
I have to admit that the genesis for this post was a conversation I had with some of my business-minded friends during the first half of the Super Bowl. One of my friends, who runs his own venture-backed company, asked the group whether we thought the classic Silicon Valley, venture-funded tech startup scene was the best place to be for the next 20 years (I'm paraphrasing – he actually did a much better job of posing the question). Earlier in the week, I read a post on Fred Wilson's blog that got me thinking along these lines as well – that post was about the miserable margins for some of our web 2.0 darlings. Why is it that the tech industry has been floundering (in terms of building really big businesses with strong margins) since the last bubble? And what does that mean for people who want to work in technology?

The Old Model

Capital + People + Idea + Technology = Technology Product (chip, software application, communications system, etc.)

When I first got into the technology industry, it felt like there were a lot more companies who used technology inputs to produce technology outputs. What does this mean? Well, there were a lot of companies who used technology as an input to produce new technology outputs. Communications systems companies built on existing chip IP to make new boxes / switches / gear that was better than the stuff before. Enterprise software companies did the same. The goal always appeared (to me at least) to be to move up the margin stack – take the stuff that was available or soon to be available, get some smart people and some money, and build a product where you could capture some margin. For this whole model to work, I think you have to have products that attack large markets AND generate real margins. Why do margins matter? Well, margins allow you to build big businesses (you have money to invest in R&D, you can get off the fundraising treadmill, and people can start looking at your business and comparing it to public competitors). Also, given the returns that investors needed to see, the amount of dilution company founders took, and the time to liquidity, this model worked really well for businesses that both get big from a revenue standpoint and have really great margins.

The New(ish) Model

Capital + People + Idea = Technology Service (consumer web service, e-commerce concept, etc.)

I think one of the biggest things I've seen that's different is that we build different stuff. Whether it was chips, enterprise software systems, internet infrastructure components, radios, or antennas, the stuff that Silicon Valley made was stuff where it was pretty clear how you turned it into businesses that had the potential to become real breakout, standalone companies. Those were items where the end customer was known, reasonably well understood, and (most importantly) willing to pay for the end product that was created. Sure, there was technical risk, market risk, execution risk, and all of that – but if you could navigate those risks, there was a pretty good chance that you could build a business that was a threat to go public or get acquired at a nice multiple. Things today are a lot different. There are a lot more companies today where technology is an input and the actual output is a tech-enabled service. Look at Digg, Twitter, Facebook, Salesforce or just about any of your favorite web 2.0 companies. For the most part, they take widely-available technology inputs and use those as the foundation for technology-enabled services.
I don't want to imply that those companies aren't innovating, pushing the envelope, or otherwise creating new stuff that's changing the landscape of the Internet. But their output is a service, and services have very different margin structures and business trajectories than product companies. And that means a lot when you're looking at how to finance and value these opportunities as businesses. We have had some past successes in creating web-based service companies that really work at scale. Amazon. eBay. Netflix. Zappos. And a few others. But if service-as-the-product is the new norm, we're going to have to get much better at that way of doing business if we have sincere hopes of building another generation of great entrepreneurial web-based service companies. I think the biggest challenge facing the technomoneyplex (the combination of technology innovation and investment that powers Silicon Valley) is what to do next. Do we stick to the high-margin product model and find a new target (maybe clean tech? maybe digital media? maybe some yet-to-be-identified market) where those rules still apply? Or do we get good at building, financing, and nurturing service businesses? It still feels to me like we're trying to jam a square peg (the old model) into companies that are round holes (the new model). And this isn't just a consumer web problem. Look at the economics of software-as-a-service businesses vs. traditional packaged software. This is an issue that cuts across a lot of technology markets. I think there is a TON of value to be created by figuring out how to build, nurture, and grow web-based service companies. But the question is whether people want to and / or can do it. It's hard to both love the old model (profitable as it was) and embrace the new unknown at the same time. Thoughts, comments, disagreements, etc. are all welcome.
OPCFW_CODE
[18:52] <anotheral> is this the canonical channel for cloud-init?
[18:53] <utlemming> anotheral: yup
[18:53] <anotheral> i was just wondering if there's any documentation of the order of operations that happens in a cloud-config yaml?
[18:53] <anotheral> i have some stuff that needs to happen before some other stuff :)
[18:54] <utlemming> anotheral: /etc/cloud.cfg controls what happens when
[18:54] <anotheral> beautiful
[18:54] <anotheral> ta
[18:54] <utlemming> anotheral: if you want to change that in user-data, you can do that too
[18:54] <utlemming> anotheral: i.e. put #cloud-config into user-data
[18:54] <anotheral> so the order of items in that cloud.cfg file is the order of execution?
[18:54] <anotheral> not the order in my yaml
[18:55] <utlemming> anotheral: yes
[18:56] <anotheral> ah, looks like write_files isn't even in 12.04 cloud-init
[18:56] <utlemming> anotheral: if you want to change that on boot, then define cloud_init_modules or cloud_config_modules in your #cloud-config user-data
[18:56] <anotheral> oh interesting - that's good to know!
[18:56] <anotheral> thanks folk! LOOOOVE your product
[20:52] <harlowja> yw!
[22:48] <anotheral> if you don't mind, i do have one more question
[22:49] <anotheral> i'm trying to create some files, but ubuntu 12.04 doesn't have the write_file capability
[22:49] <anotheral> is there a decent workaround anyone is aware of?
[23:26] <harlowja> anotheral send in a bash script that writes out the files
[23:27] <harlowja> using bash syntax
[23:27] <anotheral> ah right
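For reference, a minimal sketch of the workaround harlowja suggests: on releases whose cloud-init lacks the write_files module (such as 12.04), the user-data can simply be a plain shell script that writes the files itself. The path and file contents below are made-up placeholders.

```bash
#!/bin/bash
# Hypothetical user-data: a shell script passed instead of #cloud-config,
# standing in for the write_files module that 12.04's cloud-init lacks.
mkdir -p /etc/myapp
cat > /etc/myapp/config.ini <<'EOF'
[main]
setting = value
EOF
```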
UBUNTU_IRC
AOSD and work

One of the responses to my post on the Stills Message Board was about AOSD and work, and the difficulty some people have with communicating with their employers about the disease, and in fact with some health industry workers. I thought this was interesting because it's similar to my experience, so I thought I would write a bit about it. First off, you need to be able to explain why Still's affects your ability to work. At first I just tried to describe the pain, but in my case the pain is really not that extreme and others do manage to work with that level of pain. I am a pretty motivated guy normally, so that got me thinking a bit more about why I struggle to work when I am in a flare and why, even when things have stabilized, I don't have the stamina I used to have. This is what I came up with:
- Although I suffer from Joint, Muscle and Tendon pain, these in themselves I can put up with and generally work through, provided my arms and hands are not too bad (as I spend a lot of time typing) and I take a break every hour.
- Every couple of days I will get extreme pain, often very localised, for example in my Jaw, Knee, Elbow, Finger etc. Normal painkillers have no effect. It's very difficult to find work to do when in this sort of pain. I have found that only things that are very easy, or very absorbing (but not complex), work because they are distracting. In my job I try to watch recordings of technical conferences during these periods.
- The main things that affect my ability to work are the fatigue, difficulty concentrating and mental fog that seriously affect my mental acuity. These things don't stop me dead (for example I am still able to read, have a discussion on the phone, type a couple of emails), but they limit the amount of time I can spend working, and the intensity and the quality of my work. I tend to find the error rate in my typing increases tenfold, and that I struggle to find the right words or make difficult decisions. These symptoms rule out most of my work activities.
- An associated problem is the depression: the combination of the pain, the mental effects, the lack of sleep, the fevers, the itching etc. doesn't make me that positive, and the slightest issues seem to make me anxious and frustrated. When I feel like this I have to avoid certain situations, such as document reviews where the conclusions impact on people's firmly held beliefs, as these discussions quickly degrade into arguments.
- I have noticed that hard work tends to increase the likelihood of a flare and affects the duration of a flare, so my Doctor tends to encourage me to rest
- I have pretty reliable evidence now that whenever I get an infection it triggers a flare, and this tends to wipe me out for about a week
However I have found that home working combined with a carefully designed work mix makes this all a lot more bearable, for the following reasons:
- It's much easier to take a real break when I get tired, because there are plenty of other things to do at home
- I am less likely to get drawn into working for too long, or to get sucked into solving problems that are not my responsibility
- I am able to work an extended day, lowering the intensity of my activity
- I don't have to worry about my ability to drive safely
- I have more flexibility in what I do at particular times, for example if my arms are hurting I can sit in a comfy chair and do some reading, or go for a walk or a swim
I am trying to develop a mix of work that is as flexible as my condition is variable; my ideal work mix goes something like this:
- Work that does not have short term deliverables
- Work that does not involve me being available at specific times during the day
- A mix of work in the following proportions: 2 hours of research, 2 hours working on long term deliverables, 3.5 hours on medium term project deliverables.
- Even on bad days I can normally manage the 2 hours of research, on better days I try to work on the medium term project deliverables and on the best days I also work on the long term deliverables.
- My current experience is that I have gradually increased my work from 3-4 hours average to 5-6 hours average. Whenever I have got beyond 6 I have had some sort of relapse for a whole variety of reasons, which may not be work related, for example an infection or a dosage change.
- I have also noticed (not surprisingly) that the worse I feel the more research I do and the less I work on deliverables.
- On the plus side I have found that my grasp of concepts and my intuition are still very strong (probably stronger than they were before), as is my ability to review and help bring shape and structure to work.
Finally, how to help people understand:
- I have kept extensive records, which have helped me understand much better! In particular I found keeping records of things like fatigue and mental acuity particularly useful
- It's key to think not in terms of symptoms in their own right but how these symptoms affect your ability to do things
- It's key to think about how you feel, not just about the things that the doctors can measure
- I have found it very difficult sometimes to understand why I struggle to get beyond 6 hours average work. What seems to happen is that hours just drift by in a mental fog. So I might sit at my computer for more than 6 hours, but more than 6 hours of work does not get done. I often find, for example, that I have sat for half an hour reading a report and cannot remember what I have read and have to go back and start again (when this happens I count that as half an hour's work, not an hour).
- There is very little available to help Still's patients, but one of the most useful things I have found is the wealth of material to help sufferers of Fibromyalgia. This is particularly helpful because this disease has similar pain, fatigue and mental effects to AOSD, so its impact on work is similar. Search the web and you will find loads of advice on disability, coping with fatigue etc. You can start here:
OPCFW_CODE
**NOTE: For new OAuth 2.0, please read this post**

OAuth is a de facto standard for authorization. The protocol enables applications to share their private resources with other applications such as web, desktop, or mobile applications without having to share usernames or passwords. You can learn more about the protocol, which has several versions, from the following links:

In this post, I wrote a simple metro style app that consumes LinkedIn information. (I know that metro is now a bad word, but "windows 8 UI style app" sounds too verbose.) LinkedIn uses strict OAuth 1.0a and requires HTTP header-based authorization. The best resources that should help are the "LinkedIn OAuth: Zero to Hero" slideshare by Taylor Singletary and a simple OAuth C# working sample.

Before we start coding, let's talk briefly about what our application (i.e., the LinkedIn consumer) has to do to be able to consume LinkedIn information with user authorization.

1. Our application has to have a LinkedIn consumer key and consumer secret key. We can get these by registering our app through the LinkedIn Developer Network.
2. Our application starts talking with LinkedIn by asking LinkedIn for a request token (and request token secret key). In this step, our application has to give LinkedIn the consumer key and other OAuth information like the nonce, signature method, version, and so on. The application also needs to provide a signature which is signed with the consumer secret key.
3. Besides the request token and the request token secret key, LinkedIn will also give us an authorize link which allows the user to (authenticate himself/herself if necessary and) authorize our application.
4. As our application is not a web application (as specified when we registered our application), our application has to provide a way for the user to enter the PIN code that LinkedIn gives the user in the authorization process.
5. Now our application should have the request token, the request token secret key, and the PIN code (oauth_verifier). Our application will use the above information to request an access token and an access token secret key from LinkedIn. The application also needs to provide a signature which is signed with the consumer secret and request token secret keys.
6. Once our application has the access token and the access token secret key, our application can make LinkedIn API calls and get information from LinkedIn.

If you don't quite understand the process above, don't worry. Looking at the code should give you a better understanding of how the process works. First, let's create a blank metro style app. I just created a one page app with several TextBox, TextBlock, and Button controls. I also include a WebView control to let the user authenticate and authorize our application from within our application. In the code-behind page, our application will do the work by assembling messages and communicating with LinkedIn. Below is the sample code for requesting a request token from LinkedIn.
private async void getRequestToken_Click_1(object sender, RoutedEventArgs e)
{
    string nonce = oAuthUtil.GetNonce();
    string timeStamp = oAuthUtil.GetTimeStamp();

    string sigBaseStringParams = "oauth_consumer_key=" + consumerKey.Text;
    sigBaseStringParams += "&" + "oauth_nonce=" + nonce;
    sigBaseStringParams += "&" + "oauth_signature_method=" + "HMAC-SHA1";
    sigBaseStringParams += "&" + "oauth_timestamp=" + timeStamp;
    sigBaseStringParams += "&" + "oauth_version=1.0";

    string sigBaseString = "POST&";
    sigBaseString += Uri.EscapeDataString(_linkedInRequestTokenUrl) + "&" + Uri.EscapeDataString(sigBaseStringParams);

    string signature = oAuthUtil.GetSignature(sigBaseString, consumerSecretKey.Text);

    var responseText = await oAuthUtil.PostData(_linkedInRequestTokenUrl,
        sigBaseStringParams + "&oauth_signature=" + Uri.EscapeDataString(signature));

    string oauth_token = null;
    string oauth_token_secret = null;
    string oauth_authorize_url = null;

    string[] keyValPairs = responseText.Split('&');
    for (int i = 0; i < keyValPairs.Length; i++)
    {
        // The array indexing was lost in the original post's formatting; this switch
        // reconstructs the intent: pick each value out of the response by its key name.
        string[] splits = keyValPairs[i].Split('=');
        switch (splits[0])
        {
            case "oauth_token":
                oauth_token = splits[1];
                break;
            case "oauth_token_secret":
                oauth_token_secret = splits[1];
                break;
            case "xoauth_request_auth_url":
                oauth_authorize_url = splits[1];
                break;
        }
    }

    requestToken.Text = oauth_token;
    requestTokenSecretKey.Text = oauth_token_secret;
    oAuthAuthorizeLink.Content = Uri.UnescapeDataString(oauth_authorize_url + "?oauth_token=" + oauth_token);
}

Now, let's see how our application works. First, we provide our consumer and consumer secret keys and click the "Get Request Token" button. Next, LinkedIn should give us a request token and a request token secret key as well as an authorize URL link. After the link is populated, we can click the link and use the WebView control to show the LinkedIn authentication and authorization pages. After our application has been granted access, LinkedIn will give an oauth verifier PIN which we can just put in the oauth_verifier textbox. In a real application, you should open the link automatically and provide some instructions, so users will know what to do. Now we have everything that is required to request an access token and access token secret key from LinkedIn. Once we have the access token and access token secret key, we can use them to access the LinkedIn API and get information. Hope this sample application will help anyone who is going to write a metro style application that needs to talk with LinkedIn. If you want to talk with other services, you can also look at the Web authentication broker sample which has Twitter, Facebook, Flickr, and Google services samples. I did reuse a lot of OAuth code from the above sample as well. You can download the full source code here. I also have a similar example for Twitter here.
OPCFW_CODE
How do I automatically re-load a file that gets modified by a bang (!) command in Vim? I have set up a line of vimscript in my .vimrc file to pretty print JavaScript files:

nnoremap <leader>p :!js-beautify -r -j %<cr>

I want to just automatically reload the file instead of being prompted when vim comes back from the shell, is that possible? Thanks.

:! can be used a) to execute arbitrary shell commands or b) as a filter. You are using it to run js-beautify against the file associated with the current buffer, with the following consequences: you are forced to exit Vim, and the file is modified outside of Vim so you get a prompt asking you if you want to reload or not. Hence the many seemingly pointless <CR>. What you actually want is to run js-beautify as a filter against the current buffer, which doesn't require you to exit Vim or press <CR>:

nnoremap <leader>p :%!js-beautify -f - -j<cr>

The special range % represents the whole buffer; it's the range on which we want to apply our filter. -f - passes the content of the buffer via stdin. That's it: no <CR>, no prompt, no mess. As a bonus, here is a custom command from my config (I didn't want a mapping for that):

command! -buffer -range=% Format execute <line1> . "," . <line2> . "!js-beautify -f - -j -B -s " . &shiftwidth

edit: You can use context marks to return the cursor to its initial position:

nnoremap <leader>p m`:%!js-beautify -f - -j<CR>``

Rockin'! This works well for me. Know of any way to keep the cursor in the same "neighborhood" as it was before, instead of jumping to the top of the buffer?

Much better thought out than my own take. I will attribute my own lack of such insightfulness to insufficient coffee.

@romainl Nice one! Hadn't even thought about using marks. Do you happen to know if there's an answer to the much more general question in the title: "How do I automatically re-load a file that gets modified by a bang (!) command in Vim?" Some commands can't be invoked as filters.

"Some commands can't be invoked as filters." Then don't use them as filters. As the help for :! notes:

Vim redraws the screen after the command is finished, because it may have printed any text. This requires a hit-enter prompt, so that you can read any messages. To avoid this use: :silent !{cmd} The screen is not redrawn then, thus you have to use CTRL-L or ":redraw!" if the command did display something.

So if you wanted to use the :redraw! approach, for example, you could do this using something like

nnoremap <leader>p :silent !js-beautify -r -j %<cr>:e!<cr>:redraw!<cr>

This doesn't seem to work, I still get the "Vim has changed on disk. Do you want to reload?" prompt.
STACK_EXCHANGE
from Output import *


class AcceptanceChecker(object):
    """This is an implementation of acc_b from the paper."""

    QUIESCENCE = Output(None)

    def __init__(self, boundary, input_distance, output_distance, standard):
        super(AcceptanceChecker, self).__init__()
        self.boundary = boundary
        self.input_distance = input_distance
        self.output_distance = output_distance
        self.standard = standard
        self.printed_trivial_passing_warning = False

    def check_output(self, history):
        # Get the standard traces that are close (w.r.t. inputs) to the full history.
        close_standard_traces = self.input_distance.get_relevant_standard_traces(
            self.standard, history, self.boundary)
        if len(close_standard_traces) == 0 and not self.printed_trivial_passing_warning:
            # If inputs deviate too much, the test passes independently of the output.
            # This is not desired, hence print a warning.
            print('Warning: input deviates by more than kappa_i! (' + str(history) + ')')
            self.printed_trivial_passing_warning = True
        for standard_trace in close_standard_traces:
            # Try to find an output that satisfies the second condition of the
            # definition of robust cleanness.
            satisfying_output = self.output_distance.find_close_output_for_standard_trace(
                standard_trace, close_standard_traces, history, self.input_distance)
            if satisfying_output is None:
                # When we do not find such an output for standard_trace, return the
                # trace (it can be used as a counterexample).
                return standard_trace
        # We found a satisfying output for every standard trace, i.e. return None to
        # indicate that no standard trace violates the property.
        return None

    def check_quiescence(self, history):
        # This method exists to allow special handling of quiescence in subclasses;
        # here, we only forward the call to `check_output`.
        return self.check_output(history)
STACK_EDU
Sunday, December 02, 2012 3:13 PM

Is RemoteFX specifically designed to exclude redirection of USB HID devices like the mouse and kbd? Based on the articles I've read so far, I haven't seen anything that specifically discusses support for or against this. My use case is simple: I have one computer with a dual-headed graphics card, two mice and two keyboards attached. I'd like to have an RDP session open via the standard mstsc client in a window on the second monitor, so a second family member can easily sit down and check her email via the RDP session when another user is already using the main system. The second mouse and kbd should be dedicated to that session so that events are not shared with the console session. I've enabled the requisite GPOs per the article above, and I've set up the required registry key to expose these USB devices to mstsc. I am able to select those devices when configuring the client, however when I connect, no redirection actually occurs. No errors are displayed or logged to the event log. Over the past few years I've solved this problem in various cumbersome ways, all of which involve running some flavor of linux in a VM set up to capture the mouse and kbd. That's computationally expensive and it seems this should be so simple with RemoteFX.... what am I missing?

Monday, December 03, 2012 4:20 AM (Moderator)

Windows MultiPoint Server is designed to do what you are attempting. Specifically, hook up a second (or third, fourth, etc.) keyboard/mouse/monitor and have it run as a separate session. Is the host that you are using Remote Desktop to connect to running Windows 7 Enterprise/Ultimate SP1 with RDP 8 enabled, or Windows 8 Enterprise or Server 2012 with RDSH installed? If yes, are you able to successfully redirect other USB devices (besides keyboard/mouse) in the session? When you connect, do you get the little computer icon in the connection bar?

Monday, December 03, 2012 2:15 PM

Thanks for your reply. Both MultiPoint Server and Userful Server (its competitor) are designed for larger classroom or multi-classroom deployments with 6-10 terminals per hw node, and require specialized hardware. The MultiPoint SDK operates at the application, not session, level. As the second user will pretty much exclusively use the web, I could try to write a custom browser using the SDK, but in the spirit of keeping it simple... I am running Server 2012 as both the host and client, but did not install the RDSH role as that appears designed for more complex multi-node scenarios and requires a domain controller. There is no computer icon in the connection bar, but there is a "Connection Info" button that tells me the "quality of the connection is excellent". Can RDSH run without a PDC, and should I install it?

Monday, December 03, 2012 3:47 PM (Moderator)

MP server does not require specialized hardware. At its most basic level you can take a desktop PC with a low-cost multi-monitor card, plug in monitors, and plug in keyboards and mice. Once the stations are mapped they function similar to individual computers. For example, one person could be editing an Excel sheet in their session while the other person is browsing the web with IE in theirs. Under the covers MP server is an RDSH server with more flexibility for types of potential endpoints. The RDSH Role Service is required for USB redirection to work. I do not know if what you are attempting to do will work, but if it will, RDSH will need to be installed.
Generally you are correct, a domain controller is required, and 2012 RDS has been designed to make multi-server scenarios easier to manage. In your case I think your needs are very basic, so you could potentially get by with no DC. To try it you would install the RD Session Host and RD Licensing Role Services, with no RDS deployment. After restarting you need to configure the licensing mode and specify the licensing server name (itself) via local group policy (gpedit.msc). If things work out you can use RD Licensing Manager to activate your RD Licensing server and install purchased RDS CAL(s). If things do not work out you would uninstall RDSH and RDL without buying any RDS CALs.

- Marked As Answer by Clarence Zhang (Moderator), Monday, December 10, 2012 9:43 AM

Friday, December 07, 2012 8:22 AM (Moderator)

I would like to confirm what is the current situation? If there is anything that I can do for you, please do not hesitate to let me know, and I will be happy to help. TechNet Subscriber Support. Please remember to click "Mark as Answer" on the post that helps you, and to click "Unmark as Answer" if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread.

Saturday, December 15, 2012 8:32 PM

Thanks Clarence. I did try installing the RDSH server and RDL server. Upon connecting with mstsc, I can now hear the USB devices disconnect from the host, but they are not then "connected" to the "remote" desktop session either; at least I cannot use them in either session until I sign out of the remote desktop session. Based on this I am thinking this is simply not going to work. I am going to try and install an MP server, but I have to confess at this point it is not clear whether that is a different OS altogether or something I can add on to my Server 2012 instance.

Monday, December 17, 2012 3:42 PM

Ok, so I did install Windows MP Server 2012 Standard Edition, and was instantly excited to see upon bootup that each monitor presented a screen instructing the user to press a key on the keyboard that should be associated with that display. That excitement quickly turned to confusion and frustration, however, when I discovered that only the keyboard I had attached to a serial port was recognized, and the remaining USB HID devices (2 mice and 1 additional keyboard) seemed to be completely unrecognized. Upon logging in, as neither mouse was recognized, I was forced to navigate entirely using the keyboard attached to the serial port (kudos to MS in its attention to detail in ensuring universal keyboard navigability). Stumbling along with the keyboard over the course of the next hour I managed to determine that 1.) the USB hardware was all recognized and running properly according to Device Manager, and 2.) nowhere on this system could I locate the "MultiPoint Manager" utility needed to further configure or diagnose the problem. I looked at Server Manager 'roles', Administrative Tools, Computer Management, Control Panel, C:\Windows\System32, searched for an MMC snap-in via mmc.exe and finally ran several searches of the filesystem for anything relating to "MultiPoint". I could find nothing in the system help (accessed via F1) or in any of the online MSDN documentation on how to locate this utility. So I must be missing something fundamental? This was an out-of-box installation of the product that I downloaded yesterday morning via my MSDN subscription. How does one configure the MP Service under MP Server 2012 Standard Edition?
Thanks, I'd really like to see this thing work, and am willing to continue troubleshooting, but at this point I've hit a brick wall.

Friday, December 21, 2012 7:11 PM

I'm interested in seeing what you work out for this. I have Server 2012 installed and am curious as to the limits of RemoteFX over RDP sessions - specifically for in-house gaming sessions. The concept of multi-boxing off a single host is nothing new ... multipoint control is nothing new... But the improvements in RemoteFX and graphical processing may be primed to allow the combination of the two concepts to work for a gaming enthusiast such as myself.

Friday, December 21, 2012 8:41 PM

Gaming support is the primary reason I'm looking into this as well. Fwiw I've done this several ways since 2007, initially using event redirection to a nested X session in an all-Linux configuration. With the introduction of USB device capture under VirtualBox and VMware it became possible to do this with Windows, albeit at the cost of running two OS instances. That's wasteful and eliminates the ability to use the native video drivers under both OS instances, two problems MP server should address. I am skeptical that RDP can support the framerates needed for a smooth gaming experience, and plan to do this in a side-by-side configuration if I can get this running at all, given the issues I've run into thus far.
OPCFW_CODE
(s)hell oriented (p)ipe(line) tool for ci/cd

Implementing coded pipelines with Jenkins, you have to deal with Groovy, a Jenkinsfile and a so-called DSL which lets you interface with Jenkins and its plugins. However, the whole setup is usually designed to run somewhere remote, fetching a revision of your code and running the pipeline on it. Creating or extending such a pipeline locally, running on your current code, is not comfortable: that setup forces you to split things into Groovy and Bash scripts so that they can be run locally, which increases complexity even more. Also, you are not flexible in terms of environments: you cannot run the same pipeline in Travis CI (and similar tools).

Spline is a way to get out of this: you can run the whole pipeline via the command line on your machine. You can also run matrix builds and filter for the tasks you are interested in. The pipeline for the spline tool itself, which supports many Python versions, is defined in a single file of roughly 170 lines of YAML. Integration into a Jenkinsfile and/or Travis CI isn't that hard anymore (a sketch of a Travis CI integration follows at the end of this section).

Installation can simply be done with (optionally add --upgrade to update an installed version):

pip install spline

You require a pipeline definition file (YAML). As an example, feel free to do the following:

sudo pip install spline
git clone https://github.com/Nachtfeuer/pipeline.git
cd pipeline
pipeline --definition=pipeline.yaml --matrix-tags=py36

When the file in your project is named pipeline.yaml you can also leave out the --definition parameter. If you leave out the matrix tag filter then spline will run all Python versions as defined in the matrix (see the badges too).

- automatic schema validation for the yaml file
- matrix based pipeline
- pipeline stages (named groups)
- shell script execution: inline and file
- environment variables merged across each level: matrix, pipeline, stage, and tasks
- support for model data (a dictionary of anything you need)
- cleanup hook
- filtered execution via tags (matrix and/or tasks)
- supporting Jinja templating in scripts (also nested inside the model)
- support for Docker containers and Docker images
- support for the Packer tool
- execution time on each level: pipeline, stage, tasks and shell (event logging)
- usable from a Jenkinsfile as well as from a .travis.yml (or other pipelines)
- dry run and debug support
- support for Python scripts
- support for task variables
- support for conditional tasks
- enabled for code reuse: !include statement

For further details about what you can do, please read the documentation.

You have two options:
- spline: (s)hell oriented (p)ipe(line)
- Nachtfeuer: A demon (finally) fighting for the good side in a great fantasy (https://www.amazon.de/dp/B00946NO6I).

Download: spline-1.12-py2.py3-none-any.whl (64.0 kB, wheel, py2.py3).
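As a rough illustration of the Travis CI integration mentioned above, a .travis.yml along these lines could install spline and run the project's own pipeline definition. The Python version and file name simply mirror the example commands above; treat this as a sketch, not the project's official configuration.

```yaml
language: python
python:
  - "3.6"
install:
  - pip install spline
script:
  # Run the pipeline definition shipped in the repository, restricted to the py36 matrix entry.
  - pipeline --definition=pipeline.yaml --matrix-tags=py36
```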
OPCFW_CODE
package ec

import (
	"encoding/hex"
	"fmt"
	"math/big"
)

// p is a prime number of secp256k1.
// http://www.secg.org/sec2-v2.pdf
var p, _ = new(big.Int).SetString("FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F", 16)

// Point is a coordinate of the elliptic curve.
type Point struct {
	X *big.Int
	Y *big.Int
}

// Infinite returns whether it is at infinity or not.
func (point *Point) Infinite() bool {
	return point.X == nil || point.Y == nil
}

// Clone returns a copy of the Point.
func (point *Point) Clone() *Point {
	if point.Infinite() {
		return nil
	}
	clone := &Point{}
	clone.X = new(big.Int).SetBytes(point.X.Bytes())
	clone.Y = new(big.Int).SetBytes(point.Y.Bytes())
	return clone
}

// Compressed returns the compressed encoding of the Point.
func (point *Point) Compressed() []byte {
	if point.Infinite() {
		return nil
	}
	size := len(p.Bytes())
	bs := new(big.Int).Mod(point.X, p).Bytes()
	for len(bs) != size {
		bs = append([]byte{0x00}, bs...)
	}
	if point.Y.Bit(0) == 0 {
		bs = append([]byte{0x02}, bs...)
	} else {
		bs = append([]byte{0x03}, bs...)
	}
	return bs
}

// Decode returns a Point from the bytes.
func Decode(bs []byte) (*Point, error) {
	size := len(p.Bytes())
	if len(bs) == 1+2*size {
		if bs[0] != 0x04 {
			return nil, fmt.Errorf("invalid format : %x", bs)
		}
		point := &Point{}
		point.X = new(big.Int).SetBytes(bs[1 : size+1])
		point.Y = new(big.Int).SetBytes(bs[size+1:])
		return point, nil
	}
	if len(bs) != 1+size {
		return nil, fmt.Errorf("invalid length : %x", bs)
	}
	if bs[0] != 0x02 && bs[0] != 0x03 {
		return nil, fmt.Errorf("invalid format : %x", bs)
	}
	point := &Point{}
	point.X = new(big.Int).SetBytes(bs[1:])
	// y = (x^3 + 7)^((p + 1) / 4) mod p
	point.Y = new(big.Int).Exp(
		new(big.Int).Add(new(big.Int).Exp(point.X, big.NewInt(3), p), big.NewInt(7)),
		new(big.Int).Div(new(big.Int).Add(p, big.NewInt(1)), big.NewInt(4)),
		p)
	if (bs[0] != 0x02 && point.Y.Bit(0) == 0) || (bs[0] != 0x03 && point.Y.Bit(0) == 1) {
		point.Y.Sub(p, point.Y)
	}
	return point, nil
}

// DecodeString returns a Point from the hex string.
func DecodeString(hexstring string) (*Point, error) {
	bs, err := hex.DecodeString(hexstring)
	if err != nil {
		return nil, err
	}
	return Decode(bs)
}

// G is the base point of secp256k1.
var G, _ = DecodeString("0279BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798")

// n is the order of G.
var n, _ = new(big.Int).SetString("FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141", 16)

// Add returns the addition of Points.
func Add(P, Q *Point) *Point {
	if P.Infinite() {
		return Q.Clone()
	}
	if Q.Infinite() {
		return P.Clone()
	}
	// P + (-P) is the point at infinity.
	if P.X.Cmp(Q.X) == 0 && P.Y.Cmp(Q.Y) != 0 {
		return &Point{}
	}
	var s *big.Int
	if P.X.Cmp(Q.X) == 0 && P.Y.Cmp(Q.Y) == 0 {
		// s = (3xP^2) * (2yP)^(p - 2) mod p
		s = new(big.Int).Mod(
			new(big.Int).Mul(
				new(big.Int).Mul(new(big.Int).Mul(big.NewInt(3), P.X), P.X),
				new(big.Int).Exp(
					new(big.Int).Mul(big.NewInt(2), P.Y),
					new(big.Int).Sub(p, big.NewInt(2)), p)),
			p)
	} else {
		// s = (yP - yQ) * (xP - xQ)^(p - 2) mod p
		s = new(big.Int).Mod(
			new(big.Int).Mul(
				new(big.Int).Sub(P.Y, Q.Y),
				new(big.Int).Exp(
					new(big.Int).Sub(P.X, Q.X),
					new(big.Int).Sub(p, big.NewInt(2)), p)),
			p)
	}
	R := &Point{}
	// xR = s*s - (xP + xQ) mod p
	R.X = new(big.Int).Mod(new(big.Int).Sub(new(big.Int).Mul(s, s), new(big.Int).Add(P.X, Q.X)), p)
	// yR = s*(xP - xR) - yP mod p
	R.Y = new(big.Int).Mod(new(big.Int).Sub(new(big.Int).Mul(s, new(big.Int).Sub(P.X, R.X)), P.Y), p)
	return R
}

// Mul returns the scalar multiple x*P using double-and-add.
func Mul(x *big.Int, P *Point) *Point {
	R := &Point{}
	for i := 0; i < x.BitLen(); i++ {
		if x.Bit(i) == 1 {
			R = Add(R, P)
		}
		P = Add(P, P)
	}
	return R
}
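A small usage sketch for the package above: it derives a compressed public key from a toy scalar using the G, Mul and Compressed defined in the code. The import path is hypothetical, and a real key would use a random scalar below the group order n rather than a fixed constant.

```go
package main

import (
	"encoding/hex"
	"fmt"
	"math/big"

	// Assumption: the ec package above lives at this hypothetical import path.
	"example.com/ec"
)

func main() {
	// A toy private key; in practice this would be a random scalar below n.
	k := big.NewInt(123456789)

	// Scalar-multiply the base point G to obtain the corresponding public key point.
	pub := ec.Mul(k, ec.G)

	// Print the 33-byte compressed encoding (0x02/0x03 prefix + X coordinate).
	fmt.Println(hex.EncodeToString(pub.Compressed()))
}
```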
STACK_EDU
CTA has made the Warmest App Prototype, which allows you to view collected data about cultural heritage sites in three locations: It is a tool for monitoring changes in monuments. We would like to show all changes occurring in monuments on a timeline (photos, thermographic images). The user is able to choose elements from the 3D scene to investigate. Based on the 3D model, the system masks everything except the selected element. The process runs on all photos. At the end, the system returns masked photos for the selected element, ready to compare.

Click the "START" button on the Welcome Page to go to the map screen and select the desired location, then click on any blue marker displayed on the map to see details about a particular location. If you would like to see all data you should click the "SHOW COLLECTED DATA" button to view the timeline of collected data at this site. The next page shows a 3D model and summary of the selected location. Objects that have viewable data are colored red when you hover your mouse over them. To view collected data for a particular object, hover your mouse over the object. When the object's color changes to red, click the left mouse button to go to the timeline. The next page shows a timeline of the selected object. The timeline is divided into three categories: To select a date range, move the slider on the first timeline (with the name "Timeline"). Currently, the size of the slider corresponds to a one-year time span. The light-purple parts of the main timeline indicate that there is a data snapshot available at that point in time. When the slider overlaps a light-purple part, a marker associated with a data snapshot will appear on the timeline corresponding to its type. Click on the marker to see a preview of the snapshot, which will appear at the bottom of the timeline panel. If you want to see a larger picture, or view the whole collection of data (if the selected data snapshot contains more than one item), click on the magnifying glass in the upper left corner of the preview.

The app allows you to quickly compare changes occurring in monuments; it's an online tool, it doesn't require installation, and it can be easily integrated with other tools. Yes, it's possible. Of course it depends on the tool which you want to integrate. For example, it is possible to integrate databases etc. Other systems can easily import other annotations. The app allows users to quickly compare changes occurring in monuments. It is a web application, which everyone can use regardless of their operating system. The Warmest App Prototype is attuned to aid cultural heritage practitioners, conservators, scholars, people who analyze changes in monuments, people involved in historic preservation, and historians.

First of all, we would like to thank you for your availability to collaborate in a study designed by researchers from the WARMEST project. Your participation will be essential for the success of this work. Specifically, we would like you to answer honestly a series of questions about your experience with your volunteer tourism program. The estimated duration of the survey is about 10 minutes. Remember that there are no correct answers and that your sincerity is essential to guarantee the rigor of the investigation. The information you voluntarily provide and the data collected is completely anonymous and will be analyzed in an aggregated manner to carry out statistical analysis for scientific research purposes. Kind regards and thanks again! This survey is anonymous.
The record of your survey responses does not contain any identifying information about you, unless a specific survey question explicitly asked for it. If you used an identifying token to access this survey, please rest assured that this token will not be stored together with your responses. It is managed in a separate database and will only be updated to indicate whether you did (or did not) complete this survey. There is no way of matching identification tokens with survey responses.
OPCFW_CODE
Random access memory is also called main memory because it is the primary memory that the CPU uses when processing information. The electronic circuits used to construct this main internal RAM can be classified as dynamic RAM (DRAM), synchronous dynamic RAM (SDRAM), or static RAM (SRAM). DRAM, SDRAM, and SRAM all involve different ways of using transistors and capacitors to store data.

In DRAM or SDRAM, the circuit for each bit consists of a transistor, which acts as a switch, and a capacitor, a device that can store a charge. To store the binary value 1 in a bit, DRAM places an electric charge on the capacitor. To store the binary value 0, DRAM removes all electric charge from the capacitor. The transistor is used to switch the charge onto the capacitor. When it is turned on, the transistor acts like a closed switch that allows electric current to flow into the capacitor and build up a charge. The transistor is then turned off, meaning that it acts like an open switch, leaving the charge on the capacitor. To store a 0, the charge is drained from the capacitor while the transistor is on, and then the transistor is turned off, leaving the capacitor uncharged. To read a value in a DRAM bit location, a detector circuit determines whether a charge is present or absent on the relevant capacitor.

DRAM is called dynamic because it is continually refreshed. The memory chips themselves cannot hold values over long periods of time. Because capacitors are imperfect, the charge slowly leaks out of them, which results in loss of the stored data. Thus, a DRAM memory system contains additional circuitry that periodically reads and rewrites each data value. This replaces the charge on the capacitors, a process known as refreshing memory.

The major difference between SDRAM and DRAM arises from the way in which refresh circuitry is created. DRAM contains separate, independent circuitry to refresh memory. The refresh circuitry in SDRAM is synchronized to use the same hardware clock as the CPU. The hardware clock sends a constant stream of pulses through the CPU's circuitry. Synchronizing the refresh circuitry with the hardware clock results in less duplication of electronics and better access coordination between the CPU and the refresh circuits.

In SRAM, the circuit for a bit consists of multiple transistors that hold the stored value without the need for refresh. The chief advantage of SRAM lies in its speed. A computer can access data in SRAM more quickly than it can access data in DRAM or SDRAM. However, the SRAM circuitry draws more power and generates more heat than DRAM or SDRAM. The circuitry for a SRAM bit is also larger, which means that a SRAM memory chip holds fewer bits than a DRAM chip of the same size. Therefore, SRAM is used when access speed is more important than large memory capacity or low power consumption.

The time it takes the CPU to transfer data to or from memory is particularly important because it determines the overall performance of the computer. The time required to read or write one bit is known as the memory access time. Current DRAM and SDRAM access times are between 30 and 80 nanoseconds (billionths of a second). SRAM access times are typically four times faster than DRAM.

The internal RAM on a computer is divided into locations, each of which has a unique numerical address associated with it. In some computers a memory address refers directly to a single byte in memory, while in others, an address specifies a group of four bytes called a word.
Computers also exist in which a word consists of two or eight bytes, or in which a byte consists of six or ten bits. When a computer performs an arithmetic operation, such as addition or multiplication, the numbers used in the operation can be found in memory. The instruction code that tells the computer which operation to perform also specifies which memory address or addresses to access. An address is sent from the CPU to the main memory (RAM) over a set of wires called an address bus. Control circuits in the memory use the address to select the bits at the specified location in RAM and send a copy of the data back to the CPU over another set of wires called a data bus. Inside the CPU, the data passes through circuits called the data path to the circuits that perform the arithmetic operation. The exact details depend on the model of the CPU. For example, some CPUs use an intermediate step in which the data is first loaded into a high-speed memory device within the CPU called a register.
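To make the byte-versus-word addressing point above concrete, here is a small illustrative sketch. The 4-byte word size is just the example figure used in the text; as noted, other machines use different word sizes.

```python
# Illustrative only: relating byte addresses to word addresses for a machine
# whose word size is 4 bytes (an assumption taken from the example in the text).
WORD_SIZE = 4

def word_address(byte_address: int) -> int:
    """Word that contains the given byte (integer division by the word size)."""
    return byte_address // WORD_SIZE

def byte_range(word_addr: int) -> range:
    """Byte addresses covered by one word."""
    start = word_addr * WORD_SIZE
    return range(start, start + WORD_SIZE)

print(word_address(13))       # byte 13 lives in word 3
print(list(byte_range(3)))    # word 3 spans bytes 12..15
```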
OPCFW_CODE
Challenging CSS Topics for Assignments In web development, certain CSS topics pose challenges for students tackling assignments. These include mastering advanced layout techniques like flexbox and grid layouts, implementing smooth CSS animations and transitions, and ensuring cross-browser compatibility for consistent rendering across different browsers. These topics often require a deeper understanding of CSS concepts and practical application to excel in assignments. - Advanced Layout Techniques: This includes complex layout designs, such as multi-column layouts, flexbox, grid layouts, and positioning. Solving assignments related to intricate layouts might require a deeper understanding of CSS. - Responsive Web Design: Crafting responsive designs that adapt to various screen sizes and devices can be challenging. Assignments involving media queries and fluid layouts may require special attention. - CSS Animations and Transitions: Implementing smooth animations and transitions using CSS can be tricky. Handling keyframes, timing functions, and animation properties might be an area where students seek assistance. - CSS Preprocessors: While widely used, preprocessors like Sass, Less, or Stylus can add complexity to assignments. Knowing how to utilize variables, mixins, and nesting effectively might be beneficial. - Cross-Browser Compatibility: Ensuring CSS works consistently across different browsers and their versions can be a daunting task. Solving assignments that involve troubleshooting browser-specific issues could be challenging. - CSS Performance Optimization: Optimizing CSS for better load times and rendering efficiency is essential in modern web development. Handling large stylesheets and reducing redundancy might be challenging areas. - Customizing Form Styles: Styling HTML forms with CSS while maintaining accessibility and user-friendliness can be complex. - CSS Architecture: Creating scalable and maintainable CSS codebases with approaches like BEM (Block Element Modifier) or SMACSS (Scalable and Modular Architecture for CSS) might require specialized knowledge. - CSS Blend Modes and Filters: Utilizing advanced visual effects like blend modes and image filters can be less common and might require specific expertise. - Print Stylesheets: Crafting CSS styles specifically for printed documents can be challenging due to different considerations than screen-based designs. Services Offered by Our CSS Assignment Help Experts We specialize in providing technical assistance and guidance to students and professionals seeking support with their CSS-related assignments or projects. Our team consists of experienced CSS experts and web developers who possess in-depth knowledge of CSS principles, methodologies, and contemporary web development techniques. Here are the key functions and offerings of our CSS assignment help service: - CSS Assignment Solutions: Our primary objective is to deliver accurate and well-structured CSS solutions for various assignments or projects. From intricate layout techniques, and responsive design to CSS animations, we ensure the delivery of high-quality code tailored to your specific requirements. - Concept Clarification and CSS Explanations: We understand that CSS can be challenging, and you may have inquiries or uncertainties regarding specific concepts. Our experts excel at clarifying doubts and providing comprehensive explanations of CSS principles to bolster your understanding of the subject. 
- CSS Code Optimization: Apart from delivering solutions, we focus on optimizing your CSS code for enhanced performance and maintainability. This includes reducing redundancy, applying best practices, and adopting efficient coding approaches. - Cross-Browser CSS Compatibility: Ensuring your CSS code works uniformly across multiple browsers is crucial. We proficiently test and verify the compatibility of our solutions across different browser environments. - Responsive CSS Design: With the growing significance of responsive web design, we excel in crafting CSS styles that seamlessly adapt to various screen sizes and devices, providing optimal user experiences. - Integration of CSS Frameworks: If your assignments require the use of CSS frameworks such as Bootstrap or Foundation, we adeptly integrate these frameworks into the solutions, leveraging their capabilities effectively. - CSS Debugging and Troubleshooting: In case you encounter issues with your CSS code or face browser-specific problems, our experts provide debugging and troubleshooting services to identify and rectify the issues promptly. - Timely Delivery of CSS Solutions: We understand the importance of adhering to deadlines. Our CSS assignment help service prioritizes delivering solutions within the stipulated timeframe, ensuring punctuality and efficiency.
OPCFW_CODE
This was the first class I've taken in a long time, the very first online course I've ever taken, and my first statistics.com class. I found it quite simple to navigate and was pleasantly surprised at how involved the instructor was in the online discussion. Terrific class!

This chapter concentrates on workflow. For project planning and management, we'll use the DiagrammeR package. For project reporting we'll focus on R Markdown and knitr, which are bundled with RStudio (but can be installed independently if needed).

Thanks for the feedback and for mentioning the "motivated post" you found. I'm going to give the person the benefit of the doubt, especially as it's evident that he wrote his own R code, and it's in the larger context of "Modern Portfolio Theory."

implementations of the Haversine formula! This shows the importance of careful package selection, as there are often several packages that do the same job, as we see in the next section.

This note briefly explains R Markdown for the uninitiated. R Markdown is a form of Markdown. Markdown is a plain-text document format that has become a standard for software documentation; it is the default format for displaying text on GitHub. R Markdown allows the user to embed R code in a Markdown document (a minimal sketch appears at the end of this section).

So I do think his program is wrong. I'm intending to use this myself, so I thought I'd solicit your view!

This is an extremely well prepared course. The instructor not only pointed out the right book for statisticians but also provided very helpful lecture notes that helped a great deal.

The interaction with the lecturer was good, the book is great, the online guide material on the software is very helpful, and the lecturer put plenty of effort into a synthesis of the book's contents every single week. This was by far the best course I took at statistics.com.

Project planning and 'visioning' can be a creative process not always well suited to the linear logic of computing, despite recent advances in project management software, some of which are outlined in the bullet points below.

...of project you are undertaking. The typology below demonstrates the links between project type and project management requirements.

Varsity Tutors connects learners with experts. Instructors are independent contractors who tailor their services to each client, using their own style.

This course has been an incredible help to me. It will lead to a clearer understanding of the data, and to better reports and articles.

Once your document has compiled it should appear on your screen in the file format requested. If an html file has been generated (as is the default), RStudio provides a feature that allows you to put it up on the web quickly. This is done using the rpubs website, a store of a huge number of dynamic documents (which could be a good source of inspiration for your publications).

Here you are trying to explore datasets to find out something interesting / answer some questions. The emphasis is on speed of manipulating your data to generate interesting results. Formality is less important in this kind of project.
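Picking up the R Markdown note above, a minimal sketch of such a document might look like the following; the title, output format and chunk contents are made-up placeholders.

````markdown
---
title: "Example report"
output: html_document
---

Ordinary Markdown text, followed by an embedded R chunk that knitr executes
when the document is compiled:

```{r}
x <- rnorm(100)
summary(x)
```
````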
OPCFW_CODE
The secret to social bookmarking sites list success is this: the more votes or bookmarks your website receives inside a certain bookmarking site, the more prominent it becomes within the community, and it helps you to drive more traffic towards your website. Social bookmarking has flourished as a powerful technique for internet marketing as a result of its incredibly rapid development and the rising popularity of blogging. Aside from the straightforward retrieval of content, individuals are drawn to social bookmarking's user-centred setup because it serves as an online community that brings together people with similar interests and viewpoints. Because so many people are now participating in these groups, submitting social bookmarks can be your key to effective internet marketing campaigns.

Table of Contents
- Open as many accounts as you can for social bookmarking. To enhance Page Rank and create one-way links to your website, do this.
- Concentrate your networking efforts on a few well-known websites to ensure that your content is voted on and bookmarked as early as possible. The objective is to draw attention to yourself so that people would visit your website (we know, this is hard for some of us).
- Business owners will eventually wish to include links on their websites or blogs.

Social Bookmarking Sites List September

| SL No | Bookmarking Sites List | Month |
| --- | --- | --- |
| 1 | https://travelthe.travelmithu.com/ | Sep - 22 |
| 2 | https://webdot.samaysawara.com/ | Sep - 22 |
| 3 | https://top.seobookmarks.co.in/ | Sep - 22 |
| 4 | https://dofollow.seobookmarks.co.in/ | Sep - 22 |
| 5 | https://link.seobookmarks.co.in/ | Sep - 22 |
| 6 | https://a.socialbookmarkingwebsites.co.in/ | Sep - 22 |
| 7 | https://best.socialbookmarkingwebsites.co.in/ | Sep - 22 |
| 8 | https://traffic.socialbookmarkingwebsites.co.in/ | Sep - 22 |
| 9 | https://seonet.seosocialnews.net/ | Sep - 22 |
| 10 | https://travelnet.travelmithu.com/ | Sep - 22 |
| 11 | https://submit.socialbookmarkingwebsites.co.in/ | Sep - 22 |
| 12 | https://top.socialbookmarkingwebsites.co.in/ | Sep - 22 |
| 13 | https://boost.newstore.com.co/ | Sep - 22 |
| 14 | https://high.newstore.com.co/ | Sep - 22 |
| 15 | https://top.newstore.com.co/ | Sep - 22 |
| 16 | https://link.newstore.com.co/ | Sep - 22 |
| 17 | https://dofollow.newstore.com.co/ | Sep - 22 |
| 18 | https://bookmarking.co.in/ | Sep - 22 |
| 19 | https://dmozbookmarks.in/ | Sep - 22 |
| 20 | https://thesocialbookmarking.co.in/ | Sep - 22 |
OPCFW_CODE
Since the component class has access to the form control structure and the data model, you can push data model values into the form controls as well as pull values that have been changed by the user. Pick a topic: Dogs Tutorials Cars. The value of myVar will be either dogs, tuts, or cars. We can now pass the config and group into our dynamically created component. Write the following code inside an app.

In this example, we will not be using the built-in MatTableDataSource because it's designed for filtering, sorting and pagination of a client-side data array. Note that for the full name and bio we preloaded some default data. However, the user will only see the errors once they move to the next input.

To get notified of upcoming posts on Angular Material and other Angular topics, I invite you to subscribe to our newsletter. Other Angular Material posts: Video Lessons Available on YouTube. Have a look at the Angular University YouTube channel; we publish about 25% to a third of our video tutorials there, and new videos are published all the time.

We could also have done the call to the data source from inside a subscribe handler, but in this case, we have implemented that call using the pipeable version of the RxJs do operator, called tap. This data object was retrieved from the backend at router navigation time using a router Data Resolver (see an example here). The following image shows how our final Angular 5 example will look. However, I would still like to be able to use this in either template-driven or model-driven forms and allow the placeholders, validations etc.

And because this is the final version, let's then display the complete template with all its features: pagination, sorting and also server-side filtering. Breaking down the Search Box implementation: as we can see, the only new part in this final template version is the mat-input-container, containing the Material input box where the user types the search query. This method is going to be called in response to multiple user actions (pagination, sorting, filtering) to load a given data page. I have tried that approach but it seems to get a little circular because the inner input element already has the MatInput directive applied.

Let's use this example as a starting point, and start adding: a loading indicator, pagination, sorting, and filtering. We want to ensure that only this class can emit values for the lessons data. The most important thing is to avoid interrupting and annoying the user. Also, we have created a formControl attribute called name inside an app. However, built-in validators won't always match the exact use case of your application, so sometimes you will want to create a custom validator for your Angular application. For example, in this case, matCellDef and matHeaderCellDef are being applied to plain divs with no styling, so this is why this table does not have a Material design yet.

Now, we will create a form inside app. We now have a complete solution for how to implement an Angular Material Data Table with server-side pagination, sorting and filtering. This observable will emit a new value every time that the user clicks on the paginator navigation buttons or the page size dropdown.
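To make the data-source idea described above concrete, here is a minimal sketch of a custom CDK DataSource that keeps the emission of lesson pages private to the class via a BehaviorSubject. The Lesson interface and the shape of lessonsService are assumptions for illustration only; the actual service and method names in the full tutorial may differ.

import { CollectionViewer, DataSource } from '@angular/cdk/collections';
import { BehaviorSubject, Observable, of } from 'rxjs';
import { catchError, finalize } from 'rxjs/operators';

// Hypothetical model and service shape, purely for illustration.
export interface Lesson { id: number; description: string; }
interface LessonsService {
  findLessons(courseId: number, filter: string, sort: string, page: number, size: number): Observable<Lesson[]>;
}

export class LessonsDataSource implements DataSource<Lesson> {
  // Only this class can emit new lesson pages or toggle the loading flag.
  private lessonsSubject = new BehaviorSubject<Lesson[]>([]);
  private loadingSubject = new BehaviorSubject<boolean>(false);

  loading$ = this.loadingSubject.asObservable();

  constructor(private lessonsService: LessonsService) {}

  connect(collectionViewer: CollectionViewer): Observable<Lesson[]> {
    // The data table subscribes here and re-renders whenever a new page is emitted.
    return this.lessonsSubject.asObservable();
  }

  disconnect(collectionViewer: CollectionViewer): void {
    this.lessonsSubject.complete();
    this.loadingSubject.complete();
  }

  // Called in response to pagination, sorting and filtering user actions.
  loadLessons(courseId: number, filter = '', sortDirection = 'asc', pageIndex = 0, pageSize = 3): void {
    this.loadingSubject.next(true);
    this.lessonsService.findLessons(courseId, filter, sortDirection, pageIndex, pageSize)
      .pipe(
        catchError(() => of([])),                       // keep the table alive on backend errors
        finalize(() => this.loadingSubject.next(false)) // always clear the loading indicator
      )
      .subscribe(lessons => this.lessonsSubject.next(lessons));
  }
}

A component would then call loadLessons() from its paginator and sort event handlers, while the template binds to loading$ for the progress indicator.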
For example, this is how we can detect if a given data row was clicked. When a row is clicked, we will call the onRowClicked component method, which will then log the row data to the console. If we now click on the first row of our data table, here is what the result will look like on the console: as we can see, the data for the first row is being printed to the console, as expected! Now, we can check the result.

Conclusion: excellent. Interacting with a given table data row: we can even use the element identified by the matRowDef directive to interact with a given data row.

Another look at that code: Tab 1, Tab 2. Now when you refresh the page, Angular UI-Router will set the State, and based on a matching State, the active Tab will display with an accent highlight at the bottom of the Tab. Name: Email: Message: Send. And a sprinkle of the md-button directive for good luck.

We could also expand on our implementation to allow the validation to be configured, for example. In this Angular 5 forms tutorial we will be using Reactive Forms.

Working With Forms: forms are almost always present in any website or application. We then add these dynamically created controls to the form group, ready for consumption by our dynamic fields. FormControl: it tracks the value and validity status of an Angular form control.

The disconnect method implementation: let's now break down the implementation of the disconnect method. This method is called once by the data table at component destruction time. The context is bound when we pass in the two Input bindings that our directive needs: the configuration object for that field, and the form group for the form.

Adding Sortable Material Headers: in order to add sortable headers to our Data Table, we will need to annotate it with the matSort directive. We added the following code in a new username. I hope that this post helps with getting started with the Angular Material Dialog and that you enjoyed it!

Angular Forms Fundamentals: Template-Driven Forms vs Angular Reactive Forms. When it comes to form-building, Angular offers two technologies: reactive forms and template-driven forms. However, by also validating via the frontend, you can improve the user experience and perceived response time.

MaterialModule has therefore been deprecated in favor of defining a project-specific custom material module where you import and export only the needed components. To do this, we can utilise a property inside our NgModule configuration: entryComponents. I have tried general approaches recommended for Angular but they do not take account of the various requirements of Angular Material. It disables any default browser validation.

In order for the component to be usable as a dialog body, we need to declare it as an entryComponent as well; otherwise, we will get the following error while opening the dialog: Error: No component factory found for CourseDialogComponent.

The main difference is that its data gets serialized as an array, as opposed to being serialized as an object in the case of FormGroup. For instance, we could add ngOnChanges to keep the dynamic component in sync with the config and group passed down to DynamicFieldDirective. To fetch this result, we need to subscribe to the afterClosed function. Moreover, we need to place the dialog components inside the entryComponents array because we are not going to use routing nor the app selector to call these components.
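Putting the dialog pieces above together, a minimal sketch might look like the following. The component and module names are placeholders rather than the post's exact code; the important parts are the entryComponents registration and the afterClosed() subscription.

import { Component, NgModule } from '@angular/core';
import { MatDialog, MatDialogModule } from '@angular/material';

// Hypothetical dialog body component; the real CourseDialogComponent may look different.
@Component({
  selector: 'course-dialog',
  template: `<h2 mat-dialog-title>Edit Course</h2>`
})
export class CourseDialogComponent {}

@NgModule({
  imports: [MatDialogModule],
  declarations: [CourseDialogComponent],
  // Required because the dialog body is instantiated dynamically,
  // not via routing or a template selector.
  entryComponents: [CourseDialogComponent]
})
export class CoursesModule {}

export class CoursesComponent {
  constructor(private dialog: MatDialog) {}

  editCourse(): void {
    const dialogRef = this.dialog.open(CourseDialogComponent, {
      width: '400px',
      data: { description: 'Angular Material Course' }
    });

    // afterClosed() emits once, with whatever value the dialog passed to close().
    dialogRef.afterClosed().subscribe(result => {
      console.log('Dialog result:', result);
    });
  }
}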
It looked something like this (the accent has been changed to yellow here). The team have mentioned that Tabs have been heavily refactored in the upcoming version. You can choose between writing your own validator functions and using some of the Angular built-in validators.

We are going to cover many of the most common use cases that revolve around the Angular Material Data Table component, such as: server-side pagination, sorting, and filtering.

Create a Contact Form: bring back the contact-form. For example, in this design, the Data Source is not aware of the data table or of the moment at which the Data Table will require the data. We will get to the data source in a moment; right now let's continue exploring the rest of the template. One of the most important components in Angular Material is the input component.
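Coming back to the earlier point about writing your own validator functions: a custom validator is simply a function that returns either a ValidationErrors object or null. The whitespace rule below is a made-up example, not necessarily the tutorial's actual username validator.

import { AbstractControl, FormControl, ValidationErrors, ValidatorFn, Validators } from '@angular/forms';

// Hypothetical custom validator: rejects values containing spaces.
export function noWhitespaceValidator(): ValidatorFn {
  return (control: AbstractControl): ValidationErrors | null => {
    const value = (control.value || '') as string;
    return value.includes(' ') ? { whitespace: true } : null;
  };
}

// Combining the custom validator with built-in ones on a single FormControl:
const username = new FormControl('', [
  Validators.required,
  Validators.minLength(3),
  noWhitespaceValidator()
]);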
OPCFW_CODE
<?php

namespace Phapi\Di\Validator;

use Phapi\Contract\Di\Container;
use Phapi\Contract\Di\Validator;
use Psr\Log\LoggerInterface;
use Psr\Log\NullLogger;

/**
 * Class Log
 *
 * @category Phapi
 * @package Phapi\Di\Validator
 * @author Peter Ahinko <peter@ahinko.se>
 * @license MIT (http://opensource.org/licenses/MIT)
 * @link https://github.com/phapi/log
 */
class Log implements Validator
{
    /**
     * Dependency Injector Container
     *
     * @var Container
     */
    protected $container;

    public function __construct(Container $container)
    {
        $this->container = $container;
    }

    /**
     * Validates the configured logger. If no logger is configured or if the configured
     * logger isn't PSR-3 compliant an instance of NullLogger will be used instead.
     *
     * The PSR-3 package includes a NullLogger that does not do anything with
     * the input but it also prevents the application from failing.
     *
     * This simplifies development since we don't have to check if there
     * actually is a valid logger to use. We can just ask the logger (even
     * if it's a NullLogger) and we will get a response.
     *
     * @param $logger
     * @return callable
     */
    public function validate($logger)
    {
        $original = $logger;

        if (is_callable($logger)) {
            $logger = $logger($this->container);
        }

        // Check if logger is an instance of the PSR-3 logger interface
        if ($logger instanceof LoggerInterface) {
            return $original;
        }

        // A PSR-3 compatible log writer hasn't been configured so we don't know if it is
        // compatible with Phapi. Therefore we create an instance of the NullLogger instead
        return function ($app) {
            return new NullLogger();
        };
    }
}
STACK_EDU
#!/Users/subby/.virtualenvs/olympus/bin/python
import warnings
import os, sys

os.environ["LANG"] = "C.UTF-8"

with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    from prettytable import PrettyTable
    from .storage import storage
    from .database import db
    from .adapters import Adapter
    from .adapters.utils import get_adapter_by_framework
    from .utils import generate_random_model_name, convert_dt_to_epoch
    from .models import load_all_models
    from .api import app
    import click, os, datetime

supported_frameworks = ['keras']
custom_adapters = []


def run_sanity_checks():
    storage.create_storage_dir(override=False)


@click.group()
def cli():
    run_sanity_checks()
    print('\n\n')


@cli.command()
@click.option('--host', default='localhost')
@click.option('--port', default=7878)
@click.option('--debug/--no-debug', default=True)
def up(host='localhost', port=7878, debug=False, log=True):
    """ Start the API model server. """
    start_server(host, port, debug, log)


@cli.command()
def list():
    """ List all deployed models. """
    models = db.get_all_models()
    if not models:
        print('No models have been deployed yet.')
        return
    print('Your models:\n')
    table = PrettyTable(['Name', 'Version', 'Last Deployed', 'Activated?'])
    for model in models:
        table.add_row([model['name'], model['version'], model['last_deployed'], model['activated']])
    print(table)


@cli.command()
@click.option('--name', default=lambda: generate_random_model_name(), prompt=True)
@click.argument('path', type=click.Path(exists=True, resolve_path=True))
@click.option('--version', default=1, help='The version number of this model instance', prompt=True)
@click.option('--framework', default='keras', help='The framework used to train & save the model', type=click.Choice(supported_frameworks), prompt=True)
def deploy(name, path, version=1, framework='keras'):
    """ Deploy a model. """
    global custom_adapters

    # Check for valid framework
    if framework not in supported_frameworks:
        print('The specified framework must be one of: ' + ", ".join(supported_frameworks))
        return

    # Check if the model/version already exists
    if db.does_model_exist(name, version):
        # This model/version already exists. Alert user and abort
        print("""
        Oops! A model with the same name/version already exists.
        To upload a new version of this model, either:
        (1) specify the version using the '--version' option, or
        (2) delete this model version and retry.
        """, name, version)
        sys.exit(-2)

    adapter = get_adapter_by_framework(framework, custom_adapters=custom_adapters)
    is_validation_ok, validation_extra = adapter.validate_model_files(path)
    if not is_validation_ok:
        print('The given model path could not be successfully validated for the %s framework:' % framework)
        print(validation_extra)
        sys.exit(-1)
    model = validation_extra[1]

    # Validation is OK, so copy the necessary model's files at this path to Olympus's internal model storage.
    adapter.copy_model_files_to_internal_storage(name, version)

    # Save the model to the database
    adapter.save_model_to_db(name=name, version=version, activated=True, last_deployed=convert_dt_to_epoch(datetime.datetime.now()))

    print('\n\nYour "%s" model (version: %s) has been successfully deployed.'
          % (name, version))
    print('You can now access it at the following endpoint:')
    print('\n\n\t\t/models/' + name + '/v' + str(version) + '/predict\n\n')
    sys.exit(0)


@cli.command()
@click.argument('name')
@click.option('--version', default=1)
@click.confirmation_option(help="Are you sure you want to expose this model via the API?")
def activate(name, version=1):
    """ Expose a model via the API model server. """
    if not db.does_model_exist(name, version):
        print('The specified model/version doesn\'t exist!')
        return
    db.update_model(name, version, {'activated': True})
    print('Successfully activated the model\'s API.\nPlease restart Olympus for these changes to take effect.')


@cli.command()
@click.argument('name')
@click.option('--version', default=1)
@click.confirmation_option(help="Are you sure you want to hide this model from the API?")
def deactivate(name, version=1):
    """ Remove a model from the API model server. """
    if not db.does_model_exist(name, version):
        print('The specified model/version doesn\'t exist!')
        return
    db.update_model(name, version, {'activated': False})
    print('Successfully deactivated the model\'s API.\nPlease restart Olympus for these changes to take effect.')


@cli.command()
@click.argument('name')
@click.option('--version', default=1)
@click.confirmation_option(help="Are you sure you want to delete this model version's instance?")
def delete(name, version=1):
    """ Delete a specific model version. """
    if not db.does_model_exist(name, version):
        print('The specified model/version doesn\'t exist!')
        return
    # delete the model from the db
    db.delete_model_from_db(name, version)
    # delete the model from the file storage
    storage.delete_model_storage(name, version)
    print('The model (v%d) and its files were successfully deleted.' % version)


# Olympus library extension methods
# TODO: Implement way to use the olympus library to deploy a model instance directly from code!
def add_adapter(adapter):
    global supported_frameworks, custom_adapters
    if adapter.name not in supported_frameworks:
        supported_frameworks.append(adapter.name)
        custom_adapters.append(adapter)


def start_server(host='localhost', port=7878, debug_server=False, log=True):
    if log:
        print('Loading models from disk...\t', end=' ')
    load_all_models(custom_adapters)
    if log:
        print('OK.\n')
        print('\nStarting Olympus server at %s:%d\n' % (host, port))
    app.run(host=host, port=port, debug=debug_server)


if __name__ == '__main__':
    cli()
STACK_EDU
Update Error: Navigation property Violation of PRIMARY KEY constraint

Violation of PRIMARY KEY constraint 'PK_dbo.SongCategories'. Cannot insert duplicate key in object 'dbo.SongCategories'. The duplicate key value is (2, 3). The statement has been terminated.

I am trying to let users edit the song and the navigation properties.

public class Song
{
    // AudioName, Artist etc ...
    public virtual ICollection<Category> Categories { get; set; }
}

public class Category
{
    ...
    public virtual ICollection<Song> Songs { get; set; }
}

EF creates a SongCategories table from this with the columns Song_Id and Category_Id.

Controller:

[HttpPost]
public ActionResult RequestEdit(EditSongDto editSongDto)
{
    var categories = _categoryService.GetCategories().Where(x => editSongDto.SelectedCategoryIds.Any(z => z == x.Id));
    _songService.Edit(_songService.GetSong(editSongDto.Song.Id), editSongDto.AudioName, editSongDto.ArtistName, categories);
}

Service:

public void Edit(Song song, string audioName, string artistName, IEnumerable<Category> categories)
{
    song.AudioName = audioName;
    song.ArtistName = artistName;
    song.Categories = categories.ToList();
    _repository.Edit(song);
}

Repository:

public virtual void Edit(T entity)
{
    _context.Entry(entity).State = EntityState.Modified;
    Save();
}

This happens when the song already has a category and the same one gets passed in again. How can I update the navigation properties properly? I have looked at other threads but still do not know the answer.

Have you tried?

song.Categories.Clear();
song.Categories = categories.ToList();

@robinet Can't believe it was that simple. Thanks a lot. If you want points I will accept an answer. I presumed I didn't need to clear because the assignment would overwrite it.

I guess it has something to do with proxy tracking still. I've been having some headaches with EF lately :D. I'll add an answer if you want to mark it as such, no worries anyway, I've come here for help like you :)

EF is tricky with many-to-many relationships. I believe it doesn't have a chance to lazy load song.Categories before you overwrite it with categories.ToList(), so it's unaware of existing Song-Category relationships. By calling song.Categories.Clear(), you force lazy loading and explicitly state that existing Song-Category relationships are to be deleted for that particular song. Next you assign the new relationships by overwriting the Categories collection and EF "fixes up" everything to generate the correct INSERT/DELETE statements.

public void Edit(Song song, string audioName, string artistName, IEnumerable<Category> categories)
{
    song.AudioName = audioName;
    song.ArtistName = artistName;
    song.Categories.Clear();
    song.Categories = categories.ToList();
    _repository.Edit(song);
}

Maybe a better way is to iterate over song.Categories, removing what is not contained in the new category list and then adding from the category list what is not in song.Categories, but I haven't tested it myself.
STACK_EXCHANGE
Combine 2 input fields from a list of inputs in PHP

There is a list of names and investments in two separate text fields:

name 1 investment 1
name 2 investment 2
name 3 investment 3
name 4 investment 4
name 5 investment 5
......

I'm having database fields as user1, user2, user3... So when a user submits the form I want to save it as "name 1, investment 1…." in each field. E.g.:

John, 1200
Peter, 200

I can simply concatenate each input by assigning them to different variables, but it will make my code very long because there are around 40 input elements. Can someone suggest an effective way to achieve this? I'm using CodeIgniter.

Where is "name 1 investment 1"? Are they names of HTML input fields on your page?

Yep, each name input is named as name1, name2, name3... and the investment inputs are named as investment1, investment2, investment3...

If I understand this correctly, your front-end looks like this:

<form action="action.php" method=POST>
    <input type="text" name="name1"> <input type="text" name="investment1"> <BR />
    <input type="text" name="name2"> <input type="text" name="investment2"> <BR />
    ...
</form>

And you want a simple way to read all the fields. If so, I suggest this:

$i = 1;
while (isset($_POST['name'.$i]) AND isset($_POST['investment'.$i])) {
    ${'user'.$i} = $_POST['name'.$i] . ', ' . $_POST['investment'.$i];
    $i = $i + 1;
}

You will then end up with a bunch of variables called $user1, $user2, $user3... which you can then use in a standard INSERT statement. However, your database probably needs a serious redesign.

UPDATE: The design is not normalized, and you are trying to store two different types of data in one field. It is hard to give you a good example of a better database design, because I don't know exactly what you are trying to do. If I had to hazard a guess, it would probably look something like this:

CREATE TABLE investments (
    investment_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    investor VARCHAR(255),
    amount FLOAT
);

You would typically do one INSERT for each line on your page:

INSERT INTO investments (investor, amount) VALUES ('Peter', 200.00);

If you somehow group your investments, say by page, or by product, then the design would look more like this:

CREATE TABLE pages (
    page_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY
);

CREATE TABLE investments (
    investment_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    page_id INT REFERENCES pages(page_id),
    investor VARCHAR(255),
    amount FLOAT
);

This means that you create a new page for each set of investments. An even better design would be something like this:

CREATE TABLE pages (
    page_id INT AUTO_INCREMENT PRIMARY KEY,
    title VARCHAR(255)
);

CREATE TABLE investors (
    investor_id INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(255)
);

CREATE TABLE investments (
    page_id INT REFERENCES pages(page_id),
    investor_id INT REFERENCES investors(investor_id),
    amount FLOAT,
    PRIMARY KEY (page_id, investor_id)
);

Of course, these changes make your queries more complex, but they also make your database more flexible and efficient. You don't have to get it right the first time, but see if you can find some other good databases out there, and see what they look like. Here are some links that might help:

- Database Normalization And Design Techniques
- MySQL Query examples
- How to Use MySQL Foreign Keys for Quicker Database Development

I agree when you say that the database probably needs a serious redesign. I'm a bit new to PHP and SQL; if possible could you please give a small example of what an effective db design would look like?

Added to my answer. Hope it helps.

Thanks a lot...
Got an idea! :) If you name your inputs like so: <input name="name[]" type="text">, you should get an array in PHP which you can loop through:

$result = array();
foreach ($_POST['name'] as $i => $name) {
    if (isset($_POST['investment'][$i])) {
        $result[] = $name . ', ' . $_POST['investment'][$i];
    }
}
STACK_EXCHANGE
IPF-Algorithm to create adjustment survey weights

ipfweight varlist [if exp], generate(newvar) values(numlist) maxiter(#) [startwgt(varname) tolerance(#) upthreshold(#) lothreshold(#) misrep]

ipfweight is based on the iterative proportional fitting algorithm (also known as raking) first proposed by Deming and Stephan (1940). Like Nick Winter's survwgt rake it performs a stepwise adjustment of survey sampling weights to achieve known population margins (e.g. sex, education, age etc.) but offers some additional features. The adjustment process is repeated until the difference between the weighted margins of the variables listed in varlist and the known population margins specified in values() is smaller than a tolerance value specified in tolerance() or the maximum number of iterations specified in maxiter() is reached.

generate(newvar) creates a new variable containing the final weighting factors. It is required.

values(numlist) contains the known population margins. The order of the specified population margins in numlist has to correspond to the values of each variable in varlist.

maxiter(#) defines the maximum number of iterations. # has to be larger than 1.

startwgt(varname) uses the values of varname as starting weights. For example, a variable containing design weights that transform a sample of households into a sample of individuals can be used here. If startwgt() is not specified, each case gets a starting weight of 1.

tolerance(#) specifies the maximum deviation between the weighted margins of the variables listed in varlist and the known population margins specified in values() that is tolerated. If tolerance() is not specified, the iterative process is repeated # times as specified in maxiter(#).

upthreshold(#) specifies an upper threshold for the final weighting factors. If a weighting factor exceeds this threshold, it is trimmed to # before the iterative process is continued. An upper threshold of about 5 is suggested (DeBell et al. 2009: 31).

lothreshold(#) specifies a lower threshold for the final weighting factors. If a weighting factor falls below this threshold, it is trimmed to # before the iterative process is continued.

misrep replaces missing values in varlist with a weighting factor of 1 before the iteration process is continued. If misrep is not specified, weighting factors for all cases with at least one missing value in varlist cannot be computed. However, a more promising solution is to multiply impute missing values before using ipfweight.

Examples

. ipfweight sex educ, gen(wgt) val(48.3 51.7 43.7 30.7 25.6) maxit(10)

. ipfweight sex educ region, gen(wgt) val(48.3 51.7 43.7 30.7 25.6 78.0 22.0) maxit(25) st(designwgt) tol(.1) up(5) lo(.2) mis

References

DeBell, Matthew/Jon A. Krosnick/Arthur Lupia/Caroline Roberts. 2009. User's Guide to the Advance Release of the 2008-2009 ANES Panel Study. Palo Alto, CA and Ann Arbor, MI: Stanford University and University of Michigan.

Deming, W. Edwards/Frederick F. Stephan. 1940. On a Least Squares Adjustment of a Sampled Frequency Table When the Expected Marginal Totals Are Known, in: The Annals of Mathematical Statistics 11 (4): 427-444.

Author

Michael Bergmann, University of Mannheim, email@example.com

Also see

Manual: [R] weight
On-line: help for weight; survwgt
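As an illustrative aside (not part of ipfweight itself), the core of one raking pass over a single adjustment variable can be sketched in a few lines. The sketch below is written in TypeScript with made-up variable shapes, purely to show the idea of scaling weights so that weighted margins hit the known population targets.

// One raking pass for a single adjustment variable (e.g. sex).
// values[i]  = category of case i (e.g. 1 = male, 2 = female)
// weights[i] = current weight of case i
// targets    = known population margins in percent, per category
function rakeOneVariable(values: number[], weights: number[], targets: Map<number, number>): number[] {
  const total = weights.reduce((sum, w) => sum + w, 0);
  const result = weights.slice();
  for (const [category, targetPct] of targets) {
    // Current weighted share of this category.
    const current = values.reduce((sum, v, i) => (v === category ? sum + weights[i] : sum), 0);
    // Scale every case in this category so the weighted margin matches the target.
    const factor = (targetPct / 100 * total) / current;
    values.forEach((v, i) => { if (v === category) result[i] = weights[i] * factor; });
  }
  return result;
}

// ipfweight repeats passes like this over sex, education, region, ... until the
// deviation from the targets falls below tolerance() or maxiter() is reached.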
OPCFW_CODE
Buy Waveme – Music Platform WordPress Theme, best used for wordpress/entertainment/music-and-bands and audio, audio theme, icecast, music platform, radio, shoutcast, youtube.

Waveme, a WordPress music theme, has many features that allow you to create professional music websites. It targets audio publishers and DJs. An audio social network for music fans.

You can create any kind of loop content using the block editor page builder and loop plugin. You can loop through pages, users, and categories, and then sort it using custom templates or many filters.

You can use the Play Block plugin to create singles, playlists or album music on any post. Play Block makes it simple to make audio, music, or radio stations.

You can use a front-end upload page to allow users to upload stations. Front-end submissions. User library. You can upload media streams to the front-end and take waveform data from users. Create playlists or albums.

Both light and dark themes available with fully customisable features. Dark and light themes in primary colors. Drag and Drop to modify menus, and other options to personalize the website.

- Improvements to loop/playblock plugin ajax. Moved to REST API speed for more efficiency.
- Change embedded iframe text colour in dark mode
- Fix admin Search
- Improved player for removing mute from ios.
- Reconnect live after pause/play
- Add ajax to the notification link
- Upload form: Increase duration
- Improve menu styling on mobile devices
- Add range filter on loop block
- Ajax Pull Notification and Add New Dot Icon to the New Notification
- Upload upload file to upload form
- Post list to user profile page
- Add custom sql support to the loop block plugin
- Filter with improved loop block
- Enhance user profiles
- Add multi-class filter on loop block
- One-click playback, improve video quality on iOS
- Play video fullscreen with iosNative iOS
- Update artwork for Azuracast
- Actions of Merge David
- Increase EDD
- Improve loop query
- Option to disallow uploading of online streams
- Add list to embed
- Small screens: Improve video quality
- Improve duration input
- Import soundcloud and heartthis data
- Add support to import youtube title/description/duration/artwork
- AzuraCast server support for play and playback updates
- You can incorporate filters and actions taken from David.
- Support EDD with additional support
- Tag albums
- Download link: Add login modal
- Shortcodes to improve wp_register_form, wp_login_form, and wp_lostpassword_form
- Fix login mode
- Share mode with embedded functionality
- Use transitions in your content
- Add find_in_set in loop block meta comparison
- Playlist/Album duration can be increased
- Add the "Featured" option to a site
- Option to add ad interval
- Copyright added to frontend upload form
- Form for improvement suggestions
- Fixes on the login page
- David security fix.
- Get Woocommerce support
- Menu items can be made without ajax
- No player added to spec page
OPCFW_CODE
Recently in A First Slice of KeyLimePie I introduced a bit of Mass Effect fan fiction as a simple example of a KeyLimePie conversation. In this post I'm going to break down the actual script of the conversation, and then compare it to largely equivalent scripts in ChoiceScript and Ren'Py, two of KeyLimePie's nearest neighbors.

There are two key places to start when discussing KeyLimePie conversations in comparison to the other formats:

- Choice lists/menus are constrained to 10 directions: the 8 compass directions, a center direction (which I consider the "nevermind" button), and the "next" pseudo-direction (roughly equivalent to a jump/goto).
- KeyLimePie's conversations don't use a single prescribed scripting language; the conversation is a data model. As a data model, it can be (and usefully is) expressed in any of a handful of markup languages. The two systems in the comparison each have their own procedural scripting language, with a few similarities between them. Current KeyLimePie formats include JSON and YAML, with YAML the preferred format for writing conversations (it shares indentation-based formatting with both ChoiceScript and Ren'Py). By current convention, there is a tiny bit of embedded Python that KeyLimePie allows, sharing that with Ren'Py's scripting language, but the Python could be replaced with any embeddable language.

The Shepard-Blastos YAML script is actually the first version of the script, but it consequently has some typos that were corrected in later versions. The next major format change for the script was the rewrite of it as my testbed for Celtx import, resulting in the Celtx-formatted Shepard-Blastos script. (I'll be writing my next few conversations directly in the Celtx format.)

Today I wrote an actual exporter from the KeyLimePie data model to ChoiceScript and Ren'Py, so that I could directly point to a comparison of the three formats. (I found it more interesting to write a somewhat generally useful exporter than to manually rewrite, particularly because I knew it would be a quick "day hack".) I have had to do a tiny bit of massaging of the exports, of course, but probably 98% or so of the process is automated. ChoiceScript needs the most massaging, simply because of the embedded Python, which Ren'Py supports directly.

- Ren'Py version of Shepard-Blastos
- ChoiceScript export currently uses a file per named node: opening.txt, blastos.txt, investigate.txt, and join.txt.

I could see some future version of the KeyLimePie data model specification as something of an intermediate format for cooperation between the engines. Certainly the export tool I built works pretty well for the current demo. If I ever get around to building the "Visual KeyLimePie" editor that I proposed in an earlier blog post, I could imagine that would be potentially quite useful to both ChoiceScript/Ren'Py.

Some of the noteworthy differences between the formats:

- Neither ChoiceScript nor Ren'Py support KeyLimePie's pie menus, so directions are added to choice labels, and are obviously harder to play with when the directions are useful/important clues.
- Neither ChoiceScript nor Ren'Py seem to support the concept of an "unavailable" choice. (In the Silverlight KeyLimePie engine, when there is no available node (based on pre-conditions) in a given direction the choice will be disabled/grayed out, using the label of the first unavailable node in that direction.) In ChoiceScript the choice can be removed from the list with a surrounding "if" for the precondition.
Ren'Py doesn't even support surrounding an "if" statement around a choice in a menu.

- Ren'Py has an available "jump stack" (call and return) that allows for conversation memory. This is something that is planned for KeyLimePie (it's in the "spec in my head"), but not yet implemented in any tool or engine, because it's primarily a useful state machine tool for interaction between conversations.
- ChoiceScript, for obvious reasons I assume, doesn't have direct support for conversation styling.
- "Fall through" works subtly (and potentially dangerously, if one were relying solely on automatic exports) differently in all three systems.

I'm sure there are other things that I'm forgetting, but all of the more obvious aesthetic differences should be obvious if you peruse the documents linked above.

This has been an interesting experiment today. I really liked working with Ren'Py, which was new to me when I started, and would love to see, and may eventually build, an extension to support KeyLimePie-style conversations. Probably the big lesson at the end of the day is that all three projects are probably much more similar than different. It was also further proof of the flexibility of my "data model" approach.
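For readers who haven't clicked through to the linked scripts, here is a purely illustrative sketch of the kind of structure the data model describes, written as a TypeScript object for compactness. The field names are my own shorthand; the real KeyLimePie schema (see the Shepard-Blastos YAML/Celtx scripts linked above) may differ.

// Hypothetical shape of a pie-menu conversation node, for illustration only.
interface ConversationNode {
  id: string;
  text: string;   // what the NPC says at this node
  choices: Partial<Record<
    'n' | 'ne' | 'e' | 'se' | 's' | 'sw' | 'w' | 'nw' | 'center' | 'next',
    { label: string; goto: string; precondition?: string }
  >>;
}

const blastosOpening: ConversationNode = {
  id: 'opening',
  text: 'Commander Shepard. I was not expecting you.',
  choices: {
    n:      { label: 'Investigate', goto: 'investigate' },
    e:      { label: 'Join me',     goto: 'join', precondition: 'paragon > 3' },
    center: { label: 'Nevermind',   goto: 'opening' }
  }
};

An exporter only has to walk nodes like this one, turning each direction entry into a ChoiceScript *choice or a Ren'Py menu item.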
OPCFW_CODE
A brand new game for Windows, Linux, Mac, HTML5, 3DS Homebrew and more

Scrabble Solving Methodology
4th May, 2019

The general method is as such.

1. The AI randomly picks a spot on the board, and scans to check that a word will fit in the space provided. It looks for a space that will fit about 3 or 4 letters and will connect to at least one other tile on the board.
2. It gathers up "letters in the way" to add to an anagram shuffle.
3. It searches for anagrams with the full rack of letters, plus any letters "in the way". It repeats this task, using fewer and fewer random letters from the rack, until it finds a nice anagram or two that might fit into the space.
4. It places the letters down onto the "Playing" array, making sure the "letters in the way" fit correctly, and then the default word checker checks and scores. Thus far, I haven't scanned for adjacent tiles, so the word checker is the part that handles that check.
5. Assuming it's a valid placement, the score is used to determine the "best word so far", and the placement is stored into the "best word placement" array.
6. We repeat this over and over, once per frame, over a maximum of 300 frames.

In the video I posted last night, you should be able to see the lower left four values, one for each AI, which say how many frames it's taken for that player to pick a word. If the counter reaches >300, then it gives up trying and swaps a few letters. ..I haven't yet added a "This player gave up and swapped some letters" notification!!

This isn't "the best" Scrabble solver method, since it doesn't yet account for premium tiles, but I should be able to add those fairly easily, I think. But .. it works, and that's alright by me!

When I first created Stringy Things (May 2003!) the addition of the blank tile made everything much harder than it should have, as far as quick word-checking was concerned. Add to that the fact that BlitzBasic was "a bit pants" at string comparisons, and everything chugged to a halt when attempting such things. Back then, I did attempt to work with binary trees and the like to help speed things up, but in the end, I found it was much better to simply scan the entire list, each and every time, simply because of those blanks. The blanks really do break all manner of binary trees.

In addition, I changed the entire wordlist from being words to being numbers. BlitzBasic could compare two sets of numbers infinitely quicker than it could two sets of words. I considered adding that to this framework, too, but couldn't think of a simple way to store those numbers without the file growing exponentially. In Blitz it was raw data, but many browsers have security issues with accessing raw data, so the alternative would be plonking thousands of great big numerical values into a giant script. It's one issue after another.

Suffice to say, this method does at least work, and it seems to be handling it rather well. .. I think!! I should probably have another look at tree stuff, though. It HAS been a long time since I last did that!
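To make step 3 of the method above a little more concrete, here is a rough TypeScript sketch of the "shrink the rack until an anagram fits" loop. The lookupAnagrams and wordFits helpers are placeholders standing in for the game's own anagram table and placement check, and picking the longest word stands in for picking the highest-scoring one; this is not the actual game code.

function pickWordForSpace(
  rack: string[],
  lettersInTheWay: string[],
  spaceLength: number,
  lookupAnagrams: (letters: string[]) => string[],   // placeholder: anagram dictionary lookup
  wordFits: (word: string) => boolean                // placeholder: does the word fit the chosen spot?
): string | null {
  let candidates = [...rack];
  while (candidates.length > 0) {
    // Shuffle the remaining rack letters together with the letters already on the board.
    const anagrams = lookupAnagrams([...candidates, ...lettersInTheWay]);
    const usable = anagrams.filter(w => w.length <= spaceLength && wordFits(w));
    if (usable.length > 0) {
      // The real game keeps the best score so far; here, just take the longest match.
      return usable.sort((a, b) => b.length - a.length)[0];
    }
    // No luck: drop a random rack letter and try again with fewer letters.
    candidates.splice(Math.floor(Math.random() * candidates.length), 1);
  }
  return null; // nothing fits; the AI would move on to another board spot (or swap letters).
}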
OPCFW_CODE
I am using Ubuntu 9.04. I have installed the following package versions:

unixodbc and unixodbc-dev: 2.2.11-16build3
tdsodbc: 0.82-4
libsybdb5: 0.82-4
freetds-common and freetds-dev: 0.82-4

I have configured /etc/unixodbc.ini like this:

[FreeTDS]
Description = TDS driver (Sybase/MS SQL)
Driver = /usr/lib/odbc/libtdsodbc.so
Setup = /usr/lib/odbc/libtdsS.so
CPTimeout =
CPReuse =
UsageCount = 2

I have configured /etc/freetds/freetds.conf like this:

[global]
tds version = 8.0
client charset = UTF-8

I have grabbed pyodbc revision http://github.com/mkleehammer/pyodbc and installed it using "python setup.py install".

I have a Windows machine with Microsoft SQL Server 2000 installed on my local network, up and listening on the local IP address 10.32.42.69. I have an empty database created with the name "Common". I have the user "sa" with password "secret" with full privileges.

I am using the following Python code to set up the connection:

import pyodbc
odbcstring = "SERVER=10.32.42.69;UID=sa;PWD=secret;DATABASE=Common;DRIVER=FreeTDS"
con = pyodbc.connect(odbcstring)
cur = con.cursor()
cur.execute('''
CREATE TABLE testing (
    id INTEGER NOT NULL IDENTITY(1,1),
    name NVARCHAR(200) NULL,
    PRIMARY KEY (id)
)
''')
con.commit()

Everything WORKS up to this point. I have used SQL Server's Enterprise Manager on the server and the new table is there. Now I want to insert some data into the table:

cur = con.cursor()
cur.execute('INSERT INTO testing (name) VALUES (?)', (u'something',))

That fails!! Here's the error I get:

pyodbc.Error: ('HY004', '[HY004] [FreeTDS][SQL Server]Invalid data type (0) (SQLBindParameter)')

Since my client is configured to use UTF-8 I thought I could solve this by encoding the data to UTF-8. That works, but then I get back strange data:

cur = con.cursor()
cur.execute('DELETE FROM testing')
cur.execute('INSERT INTO testing (name) VALUES (?)', (u'somé string'.encode('utf-8'),))
con.commit()
# fetching data back
cur = con.cursor()
cur.execute('SELECT name FROM testing')
data = cur.fetchone()
print type(data), data

That gives no error, but the data returned is not the same data sent! I get:

<type 'unicode'> somé string

That is, pyodbc won't accept a unicode object directly, but it returns unicode objects back to me! And the encoding is being mixed up!

Now for the question: I want code to insert unicode data in an NVARCHAR and/or NTEXT field. When I query it back, I want the same data I inserted. That can be done by configuring the system differently, or by using a wrapper function able to convert the data correctly to/from unicode when inserting or retrieving.

That's not asking much, is it?
OPCFW_CODE
View Full Version : Creating a unique RF link

01-28-2010, 12:46 AM
I was wondering if anyone has had any experience with using the BS2 and either the Parallax Tx Rx RF modules, or any one of the XBee modules, to create a unique RF link. Say for instance I want to build a garage door opener with a remote, but I don't want that remote to open every door I manufacture, just its own paired or specific door. I don't think this would be very difficult at all, but on the receiving end I don't want to have a Stamp or any kind of microprocessor, just some stand-alone chips, i.e. counters, shift registers, etc. This is just for demonstration purposes, so ideally I don't need more than three different IDs since it is very easy to create more with larger decoders etc.

01-28-2010, 12:59 AM
If you use a pair of XBee modules, there's already a unique link. Every XBee module has a permanent unique address, and the module that's initiating the link has to provide the address of the module at the other end of the link. The receive end has a status line (command or data mode) that switches to data mode when a link is established. You could use that as a trigger.

With the Tx/Rx modules, you'd have to have some kind of simple microcontroller at both ends since the modules have no address and no error checking. The microcontrollers would have to talk to each other to establish "pairing".

01-28-2010, 01:13 AM
In fact XBee modules have three or four separate ways of keeping separate addresses.

First of all, each XBee module has a "PAN ID", a number between 0 and 0xFFFF that sets its Personal Area Network ID. The transmitting and receiving modules must be set to the same PAN ID or they will not communicate. That's over 65,000 different addresses, so that alone could do what you needed. Incidentally, if you use 0xFFFF for the module's PAN ID it will override this and broadcast to units regardless of their PAN ID (but the other addressing methods still "count").

Secondly, each XBee module has a source address and a destination address. Only modules whose source address matches the destination address of a sending module will receive the information. There are approximately a gajillion different addresses here, so you won't run out. And again, a destination of 0xFFFF will broadcast to all modules.

Thirdly, each XBee is set to a specific channel (0x0B through 0x1A for regular XBees, and 0x0C to 0x17 for XBee Pros), so you could accomplish something with that as well, though it's not the normal way of addressing.

Finally, each module can be assigned a "Node Identifier" - a regular text descriptive string (like "remote 1" or "base station", for example) that can be used with a destination node command for addressing.

01-28-2010, 02:13 AM
So, from what I understand I will not need a microprocessor on the receiving end? I have a microprocessor on the transmitting side, and I am assuming that there is already a processor on the modules to let me set the "PAN ID" for the receiver, so I will not need a second processor. Does this come with software for me to plug into my PC and set a specific "PAN ID"?

01-28-2010, 02:21 AM
You need hardware to connect an XBee to a USB port on a PC. Parallax sells an adapter board for this purpose. Once you have it connected, you can use any terminal program like HyperTerm or Parallax's Propeller Terminal program to configure the XBee. Alternatively, you could write a simple program for the Stamp that would configure the XBee. Look at the XBee documentation for details.
01-28-2010, 02:28 AM
The new Parallax USB board is certainly going to be the easiest hardware to use for programming the XBees. Digi does have a piece of free software called XCTU that you can download from their site to set your XBee settings and to update the firmware. In my experience, XBees aren't shipped with the latest firmware, so having a way to update it is important. The good news is that besides the XBee itself, all you need is the $25 Parallax board and the free software.

You can change the settings from a regular terminal program, as Mike points out. I don't believe that you can update firmware that way, though.

You are correct that there is a processor on the XBee where the address settings are made. Whether or not you need a processor on the receiving end depends on the rest of your hardware.
OPCFW_CODE
Map courtesy of Mike D

Friday I ran Uz again for the first time in several months. I've decided to run a minicampaign in which the PCs explore the undercity beneath Uz's acropolis. For this session, I had only one "point completed," and I let them know that upfront. Luckily, they didn't find any of the level's exits so it was a non-issue. Anyway, here are the good bits from the session:

- There were two expeditions: one really short one and then one longer one.
- The first expedition saw the party capture a large number of slaves, forcing me to reevaluate how I'll be giving xp for treasure in the future. One of the characters that had participated in the previous Uz sessions managed to level up.
- The short one ended when the PCs were trying to get their captives to the surface. They were accosted by a small party of men and decided to just climb up the rope rather than deal with them. One of the PCs threw a hireling down the shaft to buy them some time. They then heard a hiss, a strangely electrical noise, and then the shriek of the hireling.
- Nundar tried to silence the captives by breaking their jaws so that they couldn't defend themselves in court. This resulted in several of the slaves' deaths. These were thrown down the shaft.
- Many of the PCs spent the lucre they gained from the sale of the captives on carousing. Those that lost money did so due to gambling debts. Must've been a wild night at the Wine House of Barahm-Sin.
- The second expedition saw the party enter with quite a few more hirelings, including Antiochus the peltast and "Dave" the slinger.
- When they returned to the dungeon, they didn't find any of the captives' corpses, but they did find their old hireling - entirely skinned!
- After exploring around what seemed to be a strangely well preserved city street, they bumped into a nest of Maggot Men! This continues my habit of trying to kill people with monsters they told me about (or drew, in this case).
- One of the PCs tried to light the maggot men on fire but only succeeded in lighting up his boots. He was then killed when Nundar misaimed a rock and hit him in the back of the head. Alas, poor Balzac, we hardly knew you.
- The party then started exploring an area which Eshimur quickly realized was the Temple-Tomb of Uz of Uz himself! The whole thing was made of a strange, red rock unfamiliar to all present.
- They saw pictures of pickup trucks, martians, and glyphs telling them to CONSUME and OBEY.
- The last encounter came when they were investigating a lesser tomb that branched off the inner hall of the Temple-Tomb. This was filled with dead bodies, and the bodies they soon found were filled with corpse worms. They ended up fleeing from these rather than fighting them.

Anyway, I had fun and I hope the players did too. This session has made me rethink some (fairly minor) things about my presentation of Uz, and I think that's a good thing. Hopefully I'll be running another session next Friday.
OPCFW_CODE
import { modelLogger as logger } from '../configuration/LoggerConfig'

/**
 * Stroke symbol
 * @typedef {Object} Stroke
 * @property {String} type=stroke Symbol type, 'stroke' for stroke
 * @property {String} pointerType=undefined Pointer type
 * @property {Number} pointerId=undefined Pointer id
 * @property {Array<Number>} x=[] X coordinates
 * @property {Array<Number>} y=[] Y coordinates
 * @property {Array<Number>} t=[] Timestamps matching x,y coordinates
 * @property {Array<Number>} p=[] Pressure
 * @property {Array<Number>} l=[] Length from origin
 * @property {Number} width=0 (for rendering) Pen/brush width
 * @property {String} color=undefined (for rendering) Pen/brush color
 */

/**
 * pointerEvents symbol
 * @typedef {Object} pointerEvents
 * @property {String} type=pointerEvents Symbol type, 'pointerEvents' for pointerEvents
 * @property {Boolean} processGestures=False indicates if the gestures have to be processed
 * @property {Array<Stroke>} events=[] the events to process
 */

function computeDistance (x, y, xArray, yArray, lastIndexPoint) {
  const distance = Math.sqrt(Math.pow((y - yArray[lastIndexPoint - 1]), 2) + Math.pow((x - xArray[lastIndexPoint - 1]), 2))
  return isNaN(distance) ? 0 : distance
}

function computeLength (x, y, xArray, yArray, lArray, lastIndexPoint) {
  const length = lArray[lastIndexPoint - 1] + computeDistance(x, y, xArray, yArray, lastIndexPoint)
  return isNaN(length) ? 0 : length
}

function computePressure (x, y, xArray, yArray, lArray, lastIndexPoint) {
  let ratio = 1.0
  const distance = computeDistance(x, y, xArray, yArray, lastIndexPoint)
  const length = computeLength(x, y, xArray, yArray, lArray, lastIndexPoint)
  if (length === 0) {
    ratio = 0.5
  } else if (distance === length) {
    ratio = 1.0
  } else if (distance < 10) {
    ratio = 0.2 + Math.pow(0.1 * distance, 0.4)
  } else if (distance > length - 10) {
    ratio = 0.2 + Math.pow(0.1 * (length - distance), 0.4)
  }
  const pressure = ratio * Math.max(0.1, 1.0 - (0.1 * Math.sqrt(distance)))
  return isNaN(parseFloat(pressure)) ? 0.5 : pressure
}

function filterPointByAcquisitionDelta (x, y, xArray, yArray, width) {
  const delta = (2 + (width / 4))
  let ret = false
  if (xArray.length === 0 || yArray.length === 0 || Math.abs(xArray[xArray.length - 1] - x) >= delta || Math.abs(yArray[yArray.length - 1] - y) >= delta) {
    ret = true
  }
  return ret
}

/**
 * Create a new stroke
 * @param {Object} properties Properties to be applied to the stroke.
 * @return {Stroke} New stroke with properties for quadratics draw
 */
export function createStrokeComponent (properties) {
  const defaultStroke = {
    type: 'stroke',
    x: [],
    y: [],
    t: [],
    p: [],
    l: [],
    width: 0
  }
  return Object.assign({}, defaultStroke, properties)
}

/**
 * Get a JSON copy of a stroke by filtering its properties
 * @param {Stroke} stroke Current stroke
 * @return {{x: Array<Number>, y: Array<Number>, t: Array<Number>}} Simplified stroke object
 */
export function toJSON (stroke) {
  return { x: stroke.x, y: stroke.y, t: stroke.t }
}

/**
 * Mutate a stroke by adding a point to it.
 * @param {Stroke} stroke Current stroke
 * @param {{x: Number, y: Number, t: Number}} point Point to add
 * @return {Stroke} Updated stroke
 */
export function addPoint (stroke, point) {
  const strokeReference = stroke
  if (filterPointByAcquisitionDelta(point.x, point.y, strokeReference.x, strokeReference.y, strokeReference.width)) {
    strokeReference.x.push(point.x)
    strokeReference.y.push(point.y)
    strokeReference.t.push(point.t)
    strokeReference.p.push(computePressure(point.x, point.y, strokeReference.x, strokeReference.y, strokeReference.l, strokeReference.x.length - 1))
    strokeReference.l.push(computeLength(point.x, point.y, strokeReference.x, strokeReference.y, strokeReference.l, strokeReference.x.length - 1))
  } else {
    logger.trace('ignore filtered point', point)
  }
  return strokeReference
}

/**
 * Slice a stroke and return the sliced part of it
 * @param {Stroke} stroke Current stroke
 * @param {Number} [start=0] Zero-based index at which to begin extraction
 * @param {Number} [end=length] Zero-based index at which to end extraction
 * @return {Stroke} Sliced stroke
 */
export function slice (stroke, start = 0, end = stroke.x.length) {
  const slicedStroke = createStrokeComponent({ color: stroke.color, width: stroke.width })
  for (let i = start; i < end; i++) {
    addPoint(slicedStroke, { x: stroke.x[i], y: stroke.y[i], t: stroke.t[i] })
  }
  return slicedStroke
}

/**
 * Extract point by index
 * @param {Stroke} stroke Current stroke
 * @param {Number} index Zero-based index
 * @return {{x: Number, y: Number, t: Number, p: Number, l: Number}} Point with properties for quadratics draw
 */
export function getPointByIndex (stroke, index) {
  let point
  if (index !== undefined && index >= 0 && index < stroke.x.length) {
    point = {
      x: stroke.x[index],
      y: stroke.y[index],
      t: stroke.t[index],
      p: stroke.p[index],
      l: stroke.l[index]
    }
  }
  return point
}
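A quick usage sketch of the module above; the import path and the point values are made up for illustration.

import { createStrokeComponent, addPoint, slice, getPointByIndex, toJSON } from './StrokeComponent'

// Build a stroke from a few pointer samples, then inspect and slice it.
const stroke = createStrokeComponent({ color: '#1580CD', width: 2 })
addPoint(stroke, { x: 10, y: 10, t: 0 })
addPoint(stroke, { x: 40, y: 12, t: 16 })  // far enough from the previous point to pass the acquisition-delta filter
addPoint(stroke, { x: 41, y: 12, t: 32 })  // too close to the previous point, so it is filtered out

console.log(getPointByIndex(stroke, 1))    // -> { x: 40, y: 12, t: 16, p: ..., l: ... }
console.log(toJSON(stroke))                // -> { x: [...], y: [...], t: [...] }

const firstPointOnly = slice(stroke, 0, 1) // new stroke containing only the first point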
STACK_EDU
One of the most mindblowing things I learnt while I was doing my undergrad in Computer Science and Engineering was Lempel-Ziv-Welch (LZW) compression. It's one of the standard compression algorithms used everywhere nowadays. The reason I remember this is twofold – firstly, I remember implementing this as part of an assignment (our CSE program at IITM was full of those), and feeling happy to be coding in C rather than in the dreaded Java (which we had to use for most other assignments). The other is that this is one of those algorithms that I "internalised" while doing something totally different – in this case I was having coffee/ tea with a classmate in our hostel mess.

I won't go into the algorithm here. However, the basic concept is that as and when we see a new pattern, we give it a code, and every subsequent occurrence of that pattern is replaced by its corresponding code. And the beauty of it is that you don't need to ship a separate dictionary - the compressed code itself encapsulates it. Anyway, in practical terms, the more the same kind of patterns are repeated in the original file, the more the file can be compressed. In some sense, the more the repetition of patterns, the less the overall "information" that the original file can carry – but that discussion is for another day.

I've been thinking of compression in general and LZW compression in particular when I think of stereotyping. The whole idea of stereotyping is that we are fundamentally lazy, and want to "classify" or categorise or pigeon-hole people using the fewest number of bits necessary. And so, we use lazy heuristics – gender, caste, race, degrees, employers, height, even names, etc. to make our assumptions of what people are going to be like. This is fundamentally lazy, but also effective – in a sense, we have evolved to stereotype people (and objects and animals) because that allows our brain to be efficient; to internalise more data by using fewer bits. And for this precise reason, to some extent, stereotyping is rational.

However, the problem with stereotypes is that they can frequently be wrong. We might see a name and assume something about a person, and they might turn out to be completely different. The rational response to this is not to beat oneself for stereotyping in the first place – it is to update one's priors with the new information that one has learnt about this person. So, you might have used a combination of pre-known features of a person to categorise him/her. The moment you realise that this categorisation is wrong, you ought to invest additional bits in your brain to classify this person so that the stereotype doesn't remain any more.

The more idiosyncratic and interesting you are, the more the number of bits that will be required to describe you. You are very very different from any of the stereotypes that can possibly be used to describe you, and this means people will need to make that effort to try and understand you. One of the downsides of being idiosyncratic, though, is that most people are lazy and won't make the effort to use the additional bits required to know you, and so will grossly mischaracterise you using one of the standard stereotypes.

On yet another tangential note, getting to know someone is a Bayesian process. You make your first impressions of them based on whatever you find out about them, and go on building a picture of them incrementally based on the information you find out about them.
It is like loading a picture on a website using a bad internet connection - first the picture appears grainy, and then the more idiosyncratic features can be seen.

The problem with refusing to use stereotypes, or demonising stereotypes, is that you fail to use the grainy pictures when that is the best available, and instead wait indefinitely to get better pictures. On the other hand, failing to see beyond stereotypes means that you end up using grainy pictures when clearer ones are available. And both of these approaches are suboptimal.

PS: I've sometimes wondered why I find it so hard to remember certain people's faces. And I realise that it's usually because they are highly idiosyncratic and not easy to stereotype / compress (both are the same thing). And so it takes more effort to remember them, and if I don't really need to remember them so much, I just don't bother.
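As a concrete footnote to the LZW description at the top of this post: a toy version of the encoder is only a few lines. This is a simplified sketch for illustration (real implementations also deal with code widths, dictionary resets and binary output), not production code.

// Simplified LZW encoder: every new pattern gets a code, and later occurrences
// of that pattern are replaced by the code. The dictionary never has to be shipped,
// because the decoder can rebuild it from the codes themselves.
function lzwCompress(input: string): number[] {
  const dictionary = new Map<string, number>()
  for (let i = 0; i < 256; i++) dictionary.set(String.fromCharCode(i), i)

  const output: number[] = []
  let current = ''
  for (const ch of input) {
    const candidate = current + ch
    if (dictionary.has(candidate)) {
      current = candidate                          // keep growing the known pattern
    } else {
      output.push(dictionary.get(current)!)        // emit code for the longest known pattern
      dictionary.set(candidate, dictionary.size)   // register the newly seen pattern
      current = ch
    }
  }
  if (current !== '') output.push(dictionary.get(current)!)
  return output
}

// The more repetitive the input, the fewer codes come out:
console.log(lzwCompress('ababababab').length) // 6 codes for 10 input characters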
OPCFW_CODE
About a year ago, I purchased a Dell Inspiron 3000 series 15" laptop. It's been working fine, but now it has a strange issue when shutting down or restarting. When restarting, Windows finished its usual shutdown, and then, when normally the Dell boot logo would appear, a black screen sat for about 10 seconds, followed by the brutal sound of my hard drive spinning down like it does when I force a shutdown. Then the drive spun back up and the reboot continued as normal. This happens about 1 in every 3 times I restart.

I also encountered a strange issue today when I shut down. I shut down as normal, left my house for a few hours, and when I came back, my laptop wouldn't turn on. Pressing the power button did nothing. I could feel heat on the underside of the laptop, like the CPU was still running. I tried holding down the power button, and pressing it to turn the laptop on again. It turned on this time. Turns out the hard drive and screen had turned off, but the rest of the hardware stayed on, doing nothing. What's going on?

I would suggest that you run the diagnostics on the computer after the above steps, by following the steps mentioned in the video below, and check if there is any issue with the hardware on your system. Also, I would suggest you reinstall the video card driver on the system to fix the issue with the shutdown. Please enter your service tag # on the link below, select the OS, then download the video card driver from the Video section onto the system and install it. Also, you may try shutting down the system by following the steps below: Start > Run > type ‘cmd’, and in the command prompt box type "shutdown -s". In addition, I would suggest you not keep the AC adapter connected to the system all the time. Please let me know if this helps.

For the battery shutdown issue, go to Control Panel > Uninstall a program and remove: Intel Rapid Storage, Intel Security Assist, Intel Engine Management. Also, for your booting problem, I would go to the BIOS settings and reset to the original defaults. I am not sure if this will help your hard drive, but it is worth a try.

I ran a diagnostic. No problems. Reinstalled the video card driver, no difference. Ran shutdown -s, no difference. Shouldn't have the AC adapter connected all the time? I'll keep that in mind. Thanks for replying though! I was thinking those Intel apps were the culprit! I noticed that these issues only started cropping up after those were installed by the chipset drivers (from the Dell site, mind you) I downloaded after I fresh-installed Windows. I'll try uninstalling them and see if the issues stop.

I would suggest you try flashing the BIOS on the system. Please enter your service tag # on the link below, select the operating system, and then download the BIOS from the BIOS section onto the system and install it. Also, I would suggest you uninstall and reinstall the video card driver from the Dell website, as these drivers are tested on Dell systems. Please follow the steps mentioned below to uninstall the video card driver: Open the charms bar by moving the mouse to the top-right corner of the screen, and in the search box start typing devmgmt.msc, then press Enter. Select the display adapter listed and right-click on it. Now select Properties. In the Properties window, under the Driver tab, click on the Uninstall button. Check “Delete the driver software for this device,” confirm that you want to delete the driver, and click OK. After the uninstall finishes, restart the system.
Then reinstall the Intel video card driver first, and then the Nvidia card driver (if the system is configured with a discrete video card), for issue resolution. Please enter your service tag # on the link above, select the OS, then download the video card drivers from the Video section onto the system and install them.
OPCFW_CODE
website.models.DoesNotExist: TaskResult matching query does not exist. When running make run, one out of three times I got the following error messages instead (received every 3s): [2016/12/18 07:54:24] HTTP GET /check_task_state 500 [0.04, <IP_ADDRESS>:52612] Internal Server Error: /check_task_state Traceback (most recent call last): File "/usr/local/lib/python3.5/site-packages/django/core/handlers/exception.py", line 39, in inner response = get_response(request) File "/usr/local/lib/python3.5/site-packages/django/core/handlers/base.py", line 187, in _get_response response = self.process_exception_by_middleware(e, request) File "/usr/local/lib/python3.5/site-packages/channels/handler.py", line 228, in process_exception_by_middleware return super(AsgiHandler, self).process_exception_by_middleware(exception, request) File "/usr/local/lib/python3.5/site-packages/django/core/handlers/base.py", line 185, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/home/vagrant/tellina_task_interface/website/views.py", line 82, in check_task_state return HttpResponse(task_manager.check_task_state(task_id)) File "/home/vagrant/tellina_task_interface/website/models.py", line 340, in check_task_state state = TaskResult.objects.filter(task_manager_id=self.id).get(task_id=task_id).state File "/usr/local/lib/python3.5/site-packages/django/db/models/query.py", line 385, in get self.model._meta.object_name website.models.DoesNotExist: TaskResult matching query does not exist. [2016/12/18 07:54:27] HTTP GET /check_task_state 500 [0.06, <IP_ADDRESS>:52612] Internal Server Error: /check_task_state Traceback (most recent call last): File "/usr/local/lib/python3.5/site-packages/django/core/handlers/exception.py", line 39, in inner response = get_response(request) File "/usr/local/lib/python3.5/site-packages/django/core/handlers/base.py", line 187, in _get_response response = self.process_exception_by_middleware(e, request) File "/usr/local/lib/python3.5/site-packages/channels/handler.py", line 228, in process_exception_by_middleware return super(AsgiHandler, self).process_exception_by_middleware(exception, request) File "/usr/local/lib/python3.5/site-packages/django/core/handlers/base.py", line 185, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/home/vagrant/tellina_task_interface/website/views.py", line 82, in check_task_state return HttpResponse(task_manager.check_task_state(task_id)) File "/home/vagrant/tellina_task_interface/website/models.py", line 340, in check_task_state state = TaskResult.objects.filter(task_manager_id=self.id).get(task_id=task_id).state File "/usr/local/lib/python3.5/site-packages/django/db/models/query.py", line 385, in get self.model._meta.object_name website.models.DoesNotExist: TaskResult matching query does not exist. 
[2016/12/18 07:54:30] HTTP GET /check_task_state 500 [0.05, <IP_ADDRESS>:52612] Internal Server Error: /check_task_state Traceback (most recent call last): File "/usr/local/lib/python3.5/site-packages/django/core/handlers/exception.py", line 39, in inner response = get_response(request) File "/usr/local/lib/python3.5/site-packages/django/core/handlers/base.py", line 187, in _get_response response = self.process_exception_by_middleware(e, request) File "/usr/local/lib/python3.5/site-packages/channels/handler.py", line 228, in process_exception_by_middleware return super(AsgiHandler, self).process_exception_by_middleware(exception, request) File "/usr/local/lib/python3.5/site-packages/django/core/handlers/base.py", line 185, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/home/vagrant/tellina_task_interface/website/views.py", line 82, in check_task_state return HttpResponse(task_manager.check_task_state(task_id)) File "/home/vagrant/tellina_task_interface/website/models.py", line 340, in check_task_state state = TaskResult.objects.filter(task_manager_id=self.id).get(task_id=task_id).state File "/usr/local/lib/python3.5/site-packages/django/db/models/query.py", line 385, in get self.model._meta.object_name website.models.DoesNotExist: TaskResult matching query does not exist. When this error happens, I was able to input access code and run the rest of the demo function, but the platform cannot verify "echo 'hello world'" is the correct command. At first glance the error seems to be saying that we forget to provide the expected task result somewhere. However, the fact that it happens non-deterministically seems to indicate that something at the system level has gone wrong. This could be caused by one of the following The client code is issuing a /check_task_state call with an invalid task_id, such as /check_task_state?task_id=-1. Check what task_id is being sent by viewing the JS console. See https://github.com/TellinaTool/tellina_task_interface/blob/master/website/static/html/task.html#L187. The server's database does not have an entry for that task_id. For example, the client requests task_id=3 but the server only has entries for task_id's 1 and 2. Make sure that your config.json defines the tasks you want. Check the task table in db.sqlite3 and see if it has the tasks you expect. Resolved by refactoring backend logic.
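For anyone hitting the same symptom, a defensive way to handle the missing-row case is to catch TaskResult.DoesNotExist and return something other than a 500. The sketch below is illustrative only – the actual fix referenced above was a backend refactor whose details aren't shown here, the model and field names are taken from the traceback, and the real code also filters on task_manager_id (omitted for brevity):

```python
# Illustrative sketch only -- not the committed fix.
from django.http import JsonResponse

from website.models import TaskResult


def check_task_state(request):
    task_id = request.GET.get("task_id")
    try:
        # Same lookup as in the traceback, but guarded against a missing row.
        state = TaskResult.objects.get(task_id=task_id).state
    except TaskResult.DoesNotExist:
        # The row may simply not have been created yet, or the client sent a
        # stale/invalid task_id (e.g. task_id=-1). Report that instead of
        # letting the exception bubble up as an Internal Server Error.
        return JsonResponse({"task_id": task_id, "state": "unknown"}, status=404)
    return JsonResponse({"task_id": task_id, "state": state})
```

With a guard like this, the polling client sees a well-defined "unknown" state every 3 seconds instead of a stream of 500s, which also makes the two root causes listed above (bad task_id from the client, or a task missing from config.json / db.sqlite3) easier to tell apart.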
GITHUB_ARCHIVE
Easy Excel Tweaks April 9, 2019 Make easy changes to the Excel status bar, and macros too, in this week's Excel news. Visit my Excel website for more tips, tutorials and videos, and check the index for past issues of this newsletter. Note: For some products mentioned below, I earn a commission on sales. That helps support the free tutorials on my site. Do you customize the Excel status bar, or leave it with the default settings? Num Lock used to be on the Status Bar by default (2003?), but now it's not, so I customized by Status Bar, to show that setting, and a few others. This Microsoft article shows the list of options, and which ones are selected by default. It's easy to add or remove the Status Bar optional items: I also turned on the Numerical Count option, to see how many numbers are selected. The Count option counts all kinds of data, including text. There are lots of sample macros on my website and blogs, and there are instructions on how to copy them into your workbook, if you aren't sure how to do that. To help you make minor changes to those macros, I've put notes on my pivot table blog. For example, if a macro refers to the first pivot table on the active sheet: Set pt = ActiveSheet.PivotTables(1) You could change that to a pivot table name instead: Set pt = ActiveSheet.PivotTables("SalesPivot") Free Webinar: If you want to learn more about Excel macros, Jon Acampora is running free webinars this week that shows you how to get started with Excel macros, and save time on some of your tasks. There's a full course too - VBA Pro - if you decide to learn even more about Excel programming. Here are a couple of Excel articles that you might find useful or interesting. Filter Challenge - Bill Jelen couldn't find a quick way to filter a list of account codes (video) that start with numbers from 6 to 9. How would you solve it? Jonathan put a good solution in the video comments. (Level - Intermediate) Humour - This comedy video is a parody of gamers, and definitely won't appeal to everyone (language warning). Instead of playing a game, the actor works in Excel, and rants about the latest updates. Too bad BallmerCon isn't a real event though - it would be awesome! (Level - All) There's still a bit of snow in our yard, but we can make it look like spring indoors. My mom is in the hospital, and she's received several lovely flowering plants, to brighten up her windowsill. I think the pink flower is a hydrangea, and the yellow one is a Begonia - but I've been wrong before! P. S. We're spending lots of time with Mom, so there could be delays in my email replies. Thanks for your patience! NOTE: To read this newsletter online, paste this URL into your web browser: https://www.contextures.com/newsletter/excelnews2019/20190409ctx.html I'll also post any article updates or corrections there. That's it for this week! If you have any comments or questions, send me an email. Last updated: April 8, 2019 10:01 AM
OPCFW_CODE
import { format } from 'date-fns';
import { parseFromTimeZone } from 'date-fns-timezone';

import WeatherData from '~/models/WeatherData';

// Maps a single forecast entry from the weather API response onto our WeatherData model.
export function weatherDataFromResponseData(responseData) {
  return WeatherData({
    date: formatDate(responseData.dt_txt),
    temp: Math.round(responseData.main.temp),
    description: responseData.weather[0].description,
    humidity: responseData.main.humidity,
    clouds: responseData.clouds ? responseData.clouds.all : 0,
    // Rain takes precedence over snow; both report the volume for the last 3 hours.
    precipitation: responseData.rain && responseData.rain['3h']
      ? responseData.rain['3h']
      : (responseData.snow && responseData.snow['3h'] ? responseData.snow['3h'] : 0)
  });
}

// Parses the API's UTC timestamp string and formats it as dd.MM.yyyy.
function formatDate(utcDateString: string) {
  if (!utcDateString) {
    return '';
  }
  const date = parseFromTimeZone(utcDateString, 'YYYY-MM-DD HH:mm:ss', { timeZone: 'Etc/UTC' });
  return format(date, 'dd.MM.yyyy');
}
STACK_EDU
The ACA 2023 Annual Conference is fast approaching! The ACA blog, In the Field, is featuring the profile of a few members who will be presenting at the 2023 conference. Today we are featuring the profile of Madelynn Dickerson, Head of Digital Scholarship Services at the University of California, Irvine and Christine Kim, OAC/Calisphere Service & Outreach Manager at the California Digital Library. Q: What is the title of your conference presentation? Tell us about it in 1 or 2 sentences. Madelynn & Christine: Our presentation is called “Aggregation and Curation in Digital Collections: Identifying Inclusive Practices and Partnerships with Community-Based Archives.” We will be sharing information about a research assessment project that we are doing as part of “Community-Centered Archives Practice: Transforming Education, Archives, and Community History” (C-CAP TEACH), a Mellon-funded initiative at the University of California. The assessment project aims to identify and describe best practices for the development of ethical and inclusive digital collections and exhibitions, while understanding barriers that community organizations may face in contributing collections to aggregators. Q: Can you walk us through your academic and professional path? Christine: Since starting my archival journey, I’ve been lucky enough to explore the many moving components in the ecosystem of library services. My prior experiences include processing (and digitizing) archival collections and leading student engagement activities at the University of California, Irvine, as well as coordinating community engagement for ArchivesSpace, an open source application used to manage and describe archival collection material. Since 2019, I have been at the California Digital Library supporting the Online Archive of California and Calisphere – two services that provide broad, public access to digital collections contributed by libraries, archives, museums, and other cultural heritage organizations throughout California. Madelynn: I have an academic background in literature and art history, and transitioned from full time contract teaching to librarianship about ten years ago. My first full time job was as evening circulation supervisor in a small academic library and from there I was pretty ambitious about gaining as much experience as I could across different areas of librarianship while simultaneously enrolled in an online MLIS program. I have been at the University of California, Irvine since 2018, where I started as the Research Librarian for Digital Humanities and History. I have been the Head of Digital Scholarship Services since 2020. A snapshot of the Calisphere website, the statewide aggregation of digital collections contributed by libraries, archives, and museums throughout California. Calisphere provides free access to unique and historically important artifacts for research, teaching, and curious exploration. Q: What brought you to the field of archival studies and practice? Christine: As an undergraduate, I was a student in both film & media studies and history programs, with my interests intersecting at the representation of historical narratives in the media. So when I started my first internship at an archives, it was as if my interests glass slipper-ed into an actual tangible career path, but now adapted to explore how archival practices – and the visibility of firsthand accounts – influence the historical record. 
Madelynn: Archival studies was always something that was interesting to me, and I managed to pursue professional development and projects related to archives early in my career even though it wasn’t always directly related to my job at the time. For example, when I was working the night shift in circulation about 10 years ago, I wrote a CLIR Hidden Collections grant proposal after finding some materials stored in an old chicken coop on campus. The proposal made it to the final stages and ultimately wasn’t funded, but it was a great learning experience. Now as a Head of Digital Scholarship Services, I have formal responsibilities around digital stewardship in many areas, including digital collections. Q: What does the theme of the ACA 2023 conference, “Belonging: Considering archival bonds and disconnects,” mean to you in terms of overall archival orientations and practice? Madelynn & Christine: The conference theme speaks to the importance of representation, particularly narratives that are often misrepresented or excluded in the mainstream historical record. Aggregation aims to increase discoverability of historical resources, and web usage analytics demonstrate that digital collection aggregation can amplify the visibility of records; however, whose stories do aggregation service models privilege, and whose stories are absent? How do we ensure aggregated materials maintain appropriate cultural context? Our assessment project and research are centred in identifying strategies to mitigate the disconnect, particularly with a focus on surfacing and shifting change to address the priorities expressed by community-based archives in an effort towards representative and inclusive aggregation and exhibition practices. Q: Can you tell us about your research approach and perspectives? Madelynn & Christine: Our assessment project is rooted in identifying pathways to support a representative aggregation of digital collections, surfacing the barriers to participation, and defining actionable strategies to responsibly surface historically excluded narratives. By extension, we are concerned with sustainable and inclusive approaches to digital exhibitions. We are working with a consultant to conduct this assessment and develop a guide for effective and meaningful collaboration with community-centred archives. Our approach involves data analysis of current participation in aggregation services, website analysis of aggregation service scope and policies, environmental scans, and interviews and surveys with community-based archives. Q: What are you most looking forward to at this year’s conference? Madelynn & Christine: We look forward to learning from colleagues about initiatives they are embarking on to facilitate belonging and inclusion in their archival practices! We are grateful for the opportunity to share a progress update on our research assessments and welcome ideas and feedback. Suite 1912-130 Albert Street Ottawa, Ontario K1P 5G4 The ACA office is located on the unceded, unsurrendered Territory of the Anishinaabe Algonquin Nation whose presence here reaches back to time immemorial. Privacy & Confidentiality - Code of Ethics & Professional Conduct Copyright © 2022 - The Association of Canadian Archivists
OPCFW_CODE
Let's build a huge distributed audio timeline There's something I would like to try building, but it requires a nontrivial amount of help from other people. This one needs other people because I can't realistically travel to many parts of the world in a relatively short amount of time. I also don't have the kind of local knowledge for all of those areas. Residents of distant lands are already there and know the area. I'm talking about a massive radio aggregator which works on a per-call basis. Right now, I just run a relatively tiny thing which has one full-time data source: my USRP, decoding the Santa Clara city system. Once in a while, I will stand up an analog (!) feed to fill in some coverage from another system. While that's running, it'll intermix with Sunnyvale, Mountain View, San Jose, or whatever else happens to apply. I want to do this but much much bigger. I want someone to set up one of these systems in Oakland so we can hear what's going on with the Occupy people. I want all of the South Bay cities to be fully represented, even the dozen or more channels just for San Jose's police force. When something happens which spills over into another jurisdiction, I want people to be able to just "snap in" another feed to their timeline and keep on going. Oh, Santa Clara PD is chasing someone onto a freeway? Add CHP. Hey, they got off the freeway in San Jose and they're joining in now? Add that area's channel to the mix. Trouble is, there are far more systems than I can possibly log myself. It just collectively takes up too much CPU time and disk space. It really needs to be distributed to work properly. The good news is that since the last time I thought about this, the cost of hardware has come down quite a bit. Instead of spending $1000 on a USRP and appropriate daughterboard, you can now spend $20 on a specific type of TV tuner (the so-called "rtlsdr") instead. It'll give you a nice 2 MHz swath of spectrum over USB. You'll still need a relatively fast machine to keep up with that raw data if you want to do anything complicated with it. Decoding just one channel is easy, but if you want to split them out and record a bunch at a time, it's going to need some more horsepower. After that, it's a matter of getting the call off the logger and out to a point where other people can hear it. The logging systems would need to stream out calls to the aggregator(s) as they receive them along with the pertinent metadata. There, they'd become sources for the timelines of whoever wanted to join in. So then we come to the matter of money. The backend would consume a fair amount of bandwidth by pushing audio streams out to everyone. One way to handle this would be to require paid subscriptions to listen. Another way would be to require paid subscriptions to push data to the server. Still a third way would be to charge both sources and listeners. I don't think ads would work. Now, yes, some people reading this are thinking about sites which already do this. Well, they do, and they don't. You can listen to scanner audio on them, sure, but I have yet to see one which handles things in terms of calls. They're all just glorified Internet radio stations. Most of them are where someone took the line-out jack (if you're lucky) or headphone jack (if you're not) from a regular scanner and hooked it into the line-in (again, lucky) or mic-in (not) port on their computer. Then they started up the modern-day equivalent of Icecast to push a stream to the site's broadcasting servers. 
This gives you exactly what the scanner hears. This really stinks when listening to archives, since there's tons of dead air in there. In other words, it'll take you an hour to listen to an hour of traffic, even if the units only spoke for 2 minutes total in there. Guess what. On my system, if they open the squelch for 2 minutes, then it takes 2 minutes of your time to hear it. Then it goes on to the next call in the timeline, which could have been minutes or hours later. You don't have to sit there and listen to white noise and MP3 artifacts in between. This dead air problem is also bad when listening live to multiple streams, because there's no logical point to switch. You just have to turn both of them on and hope you can keep up with that. Good luck! Then there's the whole problem of using traditional one-channel-at-a-time scanners to follow a world which is inherently parallel. While you're getting channel A, channels B and C have good stuff happening which you will never hear. I've ranted about this before, so check out that post for more on this topic. Also, how do you say "stop playing me traffic from the electric company and sewer people"? On a stream, you can't. It just isn't possible. With mine? Click the little filter button and say goodbye to that traffic. How about this? You're listening to a bunch of calls and this one is boring. It's yet another license and registration check. You don't care. There's better stuff going on. Do what I do and hit the right-arrow key to jump to the next call. Life's too short to waste it on uninteresting calls! This requires a shift in the way people think about this field. To do it and do it well, you need to abandon traditional line-in analog streaming and think in terms of actual calls. Then you need to abandon traditional round-robin channel recording and think in terms of parallel recording. Then you need to switch from a "click here to play a stream" mindset to a "here is your timeline with everything you requested" one. Think Twitter's timeline or Facebook's feed or that other thing which is only crickets. Only then will this sort of thing move into the future. All of this is possible right now. I've done my part. Who's next?
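To make the "calls, not streams" idea a little more concrete, here is a rough sketch of what a logging node might push upstream for each completed call. Everything in it is hypothetical – the endpoint URL, the field names, and the JSON shape are invented for illustration, not an existing API:

```python
import json
import urllib.request

# Hypothetical example of one "call" record a logging node could push to an aggregator.
call = {
    "system": "Santa Clara City",
    "talkgroup": "PD Dispatch",
    "start_time": "2012-01-15T18:42:07Z",
    "duration_sec": 14,
    "audio_url": "https://logger.example/calls/2012/01/15/184207.mp3",
}

request = urllib.request.Request(
    "https://aggregator.example/api/calls",   # hypothetical aggregator endpoint
    data=json.dumps(call).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

with urllib.request.urlopen(request) as response:
    print(response.status)
```

The aggregator's job would then be to merge records like this from many sources into a single per-call timeline for each listener – the Twitter-feed-style experience described above – rather than relaying dead air.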
OPCFW_CODE
Why can solar cells be made of indirect and direct semiconductors? (Comparison between some pn-junction devices)

Various textbooks mention, but do not go into detail about, how semiconductor devices are optimized for their particular function. E-k space is fundamental to understanding this, given that it depends on the direct or indirect nature of the semiconductor. Yet, I am confused regarding this part. I am especially interested in optoelectronic devices (photodiode, LED, solar cell, and semiconductor laser). LEDs are made of direct semiconductors, because electron-hole recombination can occur without phonon participation. Solar cells can be made of both. In solar cells you don't want any type of recombination. How does the directness or indirectness of the material play a role here?

Would [electronics.se] be a better home for this question? I'm unclear what exactly is being asked for. Since E-k space was mentioned I'm not sure this is a device question, but what physics enlightenment is wanted is not clear.

@JonCuster: e.g. on which grounds is a certain material chosen for a certain device. It basically has to do with recombination. An LED should have a lot of radiative recombination and not the other types. In a solar cell you don't want any type of recombination. How do you manage to do that? I know that silicon cells are very thick because the material is indirect (reduced absorption coefficient due to phonon participation in electron-hole pair generation). But wouldn't that also mean that thin-film cells, which are direct, would also be bad, because recombination is also easy?

@nomadStack you should add these details to your question to make it specific. E.g. "why do direct band gap materials make good LEDs" or "why can solar cells be made from both direct and indirect materials" are both excellent questions. The way it's written at the moment makes it hard to answer without a very general rambling post.

The main design parameters (at least on a conceptual level) for solar cells are the band gap energy and the minority carrier diffusion length. The former determines at which point in the solar spectrum the semiconductor starts absorbing light; the latter determines how far minority carriers diffuse before recombining. The goal of a solar cell is to have the photogenerated minority carriers cross the junction before they recombine. Direct band gap materials have strong optical transitions between the valence and conduction bands. Indirect materials, however, have fairly weak optical transitions. This is because absorption and emission of a photon must occur with the simultaneous absorption or emission of a phonon (thus conserving momentum). If you compare the design of a GaAs (direct material) solar cell to a Si (indirect material) one, you will find that silicon cells are much thicker: on the order of hundreds of microns. This is done to compensate for much weaker absorption. Because silicon is a poor absorber of light, simply having a greater thickness means that you can absorb nearly all of the incoming photons. On the surface this answers your question. However, there is another level of detail. Considering only optical properties, it is clearly advantageous to have a thick active layer. However, if you made a GaAs or silicon solar cell much thicker, the efficiency, counterintuitively, would decrease! This is because of the minority carrier diffusion length. The minority carrier diffusion length in silicon is very long, meaning carriers can move hundreds of microns before spontaneously recombining.
Thus it is possible to get a good balance of optical generation and carrier collection with a thick active layer. However, the minority carrier diffusion length in GaAs is very short, on the order of tens of microns. By good fortune, GaAs has a large absorption coefficient, and so cells only have to be several microns thick to achieve a good balance between absorption and carrier collection. In summary, it's all about balancing optical absorption, by changing thickness, against carrier collection, by making sure the thickness is smaller than the minority carrier diffusion length. Provided you can achieve this balance, you can make solar cells from direct or indirect materials.

Would this imply that a direct semiconductor thicker than the carrier diffusion length would work as a solar cell that emits light? In an LED you want radiative recombination. In a solar cell you don't want any type of recombination. How do you control which recombination mechanism occurs? SRH is pretty straightforward (make a crystal as perfect as possible). But how about inducing radiative recombination over Auger?

These are all good questions but will take time to discuss and don't lend themselves well to the comment box. Why not synthesize this into a new question about recombination?

@boyfarell: Done: http://physics.stackexchange.com/questions/162931/device-design-regarding-recombination-mechanisms

@nomadStack OK, nice question! Typed you something over my lunch break. Upvotes and accepted answers are appreciated.
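To put rough numbers on the absorption-versus-thickness trade-off discussed in the answer above, here is a small back-of-the-envelope script. The absorption coefficients are ballpark illustrative values near a single wavelength (~800 nm), not measured data, and the diffusion lengths are the order-of-magnitude figures quoted in the answer; treat the output as a sketch of the argument, not a device calculation.

```python
import math

# Rough, illustrative absorption coefficients near ~800 nm (1/cm) -- ballpark only.
alpha = {"GaAs (direct)": 1e4, "Si (indirect)": 1e3}

# Order-of-magnitude minority-carrier diffusion lengths from the answer above (cm).
diffusion_length = {"GaAs (direct)": 10e-4, "Si (indirect)": 300e-4}  # ~10 um and ~300 um

for material, a in alpha.items():
    for thickness_um in (2, 20, 200):
        d = thickness_um * 1e-4                       # convert um -> cm
        absorbed = 1 - math.exp(-a * d)               # Beer-Lambert: fraction of photons absorbed
        collectable = d <= diffusion_length[material] # crude "carriers can reach the junction" check
        print(f"{material:15s} {thickness_um:4d} um: "
              f"absorbed ~{absorbed:.0%}, thinner than L_diff: {collectable}")
```

With these illustrative numbers, GaAs absorbs most of the light within a couple of microns (comfortably below its short diffusion length), while silicon needs hundreds of microns to absorb comparably, which it can afford only because its diffusion length is also very long.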
STACK_EXCHANGE
The form you build is for people. People use different devices. Some use a mouse, some a touch device, some the keyboard, some a device controlled by eye movements. Some use a screen reader, some a small screen, some use text enlargement software. Everybody wants to use your form. Learn how to make your form accessible and usable for everyone.

Ensure users understand the purpose of a form field

There are many form controls you can choose from. What do they all have in common? Every form control must have an associated <label>. The <label> element describes the purpose of a form control. The <label> text is visually associated with the form control, and read out by screen readers. In addition, tapping or clicking the <label> focuses the associated form control, making it a larger target.

Use meaningful HTML to access built-in browser features

In theory, you could build a form using only <div>s. You can even make it look like a native <input>. What's the problem with using <div>s? Built-in form elements provide a lot of built-in features. Let's have a look at an example. The <input> (the first one in the example) and the <div> look the same. You can even insert text for both, as the <div> has a contenteditable attribute. There are lots of differences, though, between using an appropriate <input> element and a <div> looking like an <input>. A screen reader user doesn't recognize the <div> as an input element, and isn't able to complete the form. All the screen reader user hears is 'Name', with no indication that the element is a form control for adding text. Clicking <div>Name</div> doesn't focus the <div> that goes with it, whereas the <label> and <input> are connected by using the for attribute. After submitting the form, the data entered in the <div> isn't included in the request; the <input> does that by default. Built-in form elements have other features. For example, with appropriate form elements and the correct type attribute, an on-screen keyboard shows appropriate characters. Using the inputmode attribute on a <div> cannot do that.

Ensure users are aware of the expected data format

You can define various validation rules for a form control. For example, say a form field should always have at least eight characters. You use the minlength attribute, indicating the validation rule to browsers. How can you ensure users also know about the validation rule? Tell them. Add information about the expected format directly beneath the form control. To make it clear for assistive devices, use the aria-describedby attribute on the form control and an id on the error message with the same value, to connect both.

Help users find the error message for a form control

In a previous module about validation, you learned how to show error messages in case of invalid data entry.

<input type="text" name="name" id="name" required>

For example, if a field has a required attribute, and invalid data is entered, the browser shows an error message next to the form control when the form is submitted. Screen readers also announce the error message. You can also define your own error message: this example needs more changes to connect the error message to the form control. A simple approach is to use the aria-describedby attribute on the form control, with a value that matches the id on the error message element. Then use aria-live="assertive" for the error message. ARIA live regions announce an error to screen reader users the moment the error is shown. The problem with this approach: for forms with multiple fields, aria-live will usually only announce the first error in the case of multiple errors.
As explained in this article about multiple aria-live announcements on the same action, you could create a single message by concatenating all the errors. Another approach would be to announce that there are errors, then announce individual errors when the field is focused.

Ensure users recognize errors

Sometimes designers color the invalid state red and the valid state green. However, to communicate an error or success, you should never rely only on color. For people with red-green color blindness, a green and a red border look almost the same, so it's impossible to see if the message is related to an error. In addition to color, use an icon, or prefix your error messages with the error type: <strong>Error:</strong> Please use at least eight characters.

Help users to navigate your form

You can change the visual order of form controls with CSS. A disconnect between visual order and keyboard navigation (DOM order) is problematic for screen reader and keyboard users. Learn more about how to ensure visual order on the page follows DOM order.

Help users to identify the currently focused form control

Use your keyboard to navigate through the form. Did you notice that the styling of the form controls changed once they were active? This is the default focus style. You can override it with the :focus CSS pseudo-class. Whatever styles you use inside :focus, always make sure the visual difference between the default state and the focus state is recognizable. Learn more about designing focus indicators.

Ensure your form is usable

You can identify many common issues by filling out your form with different devices. Use only your keyboard, use a screen reader (such as NVDA on Windows or VoiceOver on Mac), or zoom the page to 200%. Always test your forms on different platforms, especially devices or settings you don't use every day. Do you know someone using a screen reader, or someone using text enlargement software? Ask them to fill out your form. Accessibility reviews are great; testing with real users is even better.
OPCFW_CODE
RESTful .NET by Jon Flanders

New Trade Paper. Ships in 1 to 3 days; available for shipping or prepaid pickup only. Available for in-store pickup in 7 to 12 days.

Synopses & Reviews

RESTful .NET is the first book that teaches Windows developers to build RESTful web services using the latest Microsoft tools. Written by Windows Communication Foundation (WCF) expert Jon Flanders, this hands-on tutorial demonstrates how you can use WCF and other components of the .NET 3.5 Framework to build, deploy and use REST-based web services in a variety of application scenarios. RESTful .NET introduces you to the ideas of REST and RESTful architecture, and includes a detailed discussion of how the Web/REST model plugs into the WCF architecture. If you develop with .NET, it's time to jump on the RESTful bandwagon. This book explains how.

Book News Annotation: Flanders specializes in training web developers on .NET frameworks, and he has written this guide on RESTful web services for programmers and developers who need to use the WCF REST model over such alternatives as SOAP, SOA and the WS-* stack. The author provides detailed instructions on how to program and host read-only (GET) services, read/write services and REST feeds, and how to use HTTP and ADO.NET services as well. WCF 3.5 SP1, which was introduced just prior to the publication of this book, is also discussed. A free online version of this book is available for 45 days after purchase. Annotation ©2009 Book News, Inc., Portland, OR (booknews.com)

This work teaches Windows developers to build RESTful Web services using the latest Microsoft tools. Written by a Windows Communication Foundation (WCF) expert, this hands-on tutorial demonstrates how readers can use WCF and other components of the .NET 3.5 Framework.

About the Author

Although Jon Flanders spent the first few years of his professional life as an attorney, he quickly found chasing bits more interesting than chasing ambulances. After working with ASP and COM, he made the move to .NET. Jon is most at home spelunking, trying to figure out exactly how .NET (specifically ASP.NET and Visual Studio .NET) works. Deducing the details and disseminating that information to other developers is his passion.

Table of Contents

Foreword
Preface
Chapter 1: REST Basics
Chapter 2: WCF RESTful Programming Model
Chapter 3: Programming Read-Only Services
Chapter 4: Programming Read/Write Services
Chapter 5: Hosting WCF RESTful Services
Chapter 6: Programming Feeds
Chapter 7: Programming Ajax and Silverlight Clients
Chapter 8: Securing REST Endpoints
Chapter 9: Using Workflow to Deliver REST Services
Chapter 10: Consuming RESTful XML Services Using WCF
Chapter 11: Working with HTTP
WCF 3.5 SP1
ADO.NET Data Services
ADO.NET Entity Framework Walkthrough
Colophon
OPCFW_CODE
Shalom guys! I hope your day is nice, but it's even better watching my boyfriend react to a Fashion Nova swimsuit haul and break up prank! It's summer, so it's time for a swimsuit try on haul. I try on Fashion Nova swimsuits for my boyfriend and my bff. My boyfriend and bff react to and rate my swimsuits, and they aren't very nice! To spice things up, my boyfriend and I decide to play a prank on my bff by making her think that we broke up! Watch to see how my bff reacts to the prank and how my boyfriend reacts to my swimsuits! This is a very funny video that you'll love to watch. (WANT SOME FASHION NOVA OF YOUR OWN?) #boyfriend #react #prank

vvvv====My NEW video=====vvvv
We Pretended We Went Missing Prank on BOYFRIENDS! **EMOTIONAL REACTION**💔

Follow Piper Rockelle:
Famous Birthdays: https://www.famousbirthdays.com/individuals/piper-rock-elle.html

Watch my other Fashion Nova videos!
My Boyfriend REACTS to INSTAGRAM Model **Who Wore It Better Challenge** 🔐❤️
My BOYFRIEND REACTS to FASHION NOVA Outfits 💜
KIDS REACT to my FASHION NOVA Outfits **TRY ON HAUL** 👠👗

Editor, Filmer, Director: Hunter: https://www.youtube.com/channel/UCo2uAMyB_Oqlmr-2SQ6qDPg

WATCH MORE Piper Rockelle:
Reaction Videos: https://www.youtube.com/playlist?record=PLz20a0pZfe4WG5Lg0KPhXX-zAOeQXlUB3&disable_polymer=true
Recent Videos: https://www.youtube.com/playlist?record=PLz20a0pZfe4UwPfQMEF6_YjNnnwTPA0Tz
Most Popular: https://www.youtube.com/playlist?record=PLz20a0pZfe4X3GMk4C8xWrFJQW-VOB3H9

About Piper Rockelle: Welcome to the official Piper Rockelle YouTube channel! My life is quite unique and I enjoy creating funny, fun, family videos, including 24 hour challenges, fashion hauls, trending and sometimes messy challenges, pranks, make-up tutorials, dancing, DIYs and simple everyday vlogs. I hope to make you laugh or cheer you up if you're having a bad day, with my ultimate challenges and pranks. These videos are for girls and boys of all ages. Sit back and enjoy!
OPCFW_CODE
The projection consists of selecting the name of the column(s) of the table(s) you wish to be displayed in the response. If you wish to display all columns, "*" must be used. Column names are inserted next to the SELECT clause.

-- Display students' names and gender codes.
SELECT Nometu, Cdsexe FROM ETUDIANT;

-- Display the contents of the ETUDIANT table.
SELECT * FROM ETUDIANT;

Test query examples
Q01: Display the name, number and date of birth of students.

The selection operation involves selecting rows (tuples) of one (or several) table(s) which meet certain conditions. Conditions are specified after the WHERE clause.

-- List all male students.
SELECT * FROM ETUDIANT WHERE Cdsexe='H';

Q02: List teachers with more than two years' seniority in their rank.

Projection and Selection
It is clear that the projection and selection operations can be used in the same SQL query.

-- Show number and name of students born in 1980.
SELECT Numetu, Nometu FROM ETUDIANT WHERE Dtnaiss >= '1980-01-01' AND Dtnaiss <= '1980-12-31';

Converting data and processing dates
A given date must be expressed in the American format (yyyy-mm-dd) and not in the French format (dd-mm-yyyy). An individual date may be considered:
- As a string. For instance '1998-06-25' (25 June 1998). In that case, no calculations can be made, but comparisons can be made.
- As a date. To compute the difference, in days, between two dates, use the DATEDIFF function.

Syntax of the DATEDIFF function: DATEDIFF(Date2, Date1), where Date2 must be greater than Date1. The CURRENT_DATE function is used to get the system date. To find out the age of the students at the current date, enter: DATEDIFF(CURRENT_DATE, Dtnaiss)/365.

Q03: Names of female students who were born after 1980.

ORDER BY clause
Using the ORDER BY clause, you can sort the results of a query based on the value of certain attributes (columns).

-- Display the list of teachers by rank and in descending order of name.
SELECT Grade, Nomens FROM ENSEIGNANT ORDER BY Grade, Nomens DESC;

The previous query, which uses the column names, is equivalent to the following query, which uses the column numbers to order the results.
SELECT Grade, Nomens FROM ENSEIGNANT ORDER BY 1, 2 DESC;

Test query examples
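If you want to experiment with the projection and selection examples above outside a classroom DBMS, here is a small self-contained sketch using Python's standard-library sqlite3 module. The table and column names follow the tutorial, the sample rows are invented for illustration, and note that DATEDIFF/CURRENT_DATE as described above are MySQL-style functions that SQLite does not provide, so only the plain SELECT examples are reproduced here.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE ETUDIANT (Numetu INTEGER, Nometu TEXT, Cdsexe TEXT, Dtnaiss TEXT)")
cur.executemany(
    "INSERT INTO ETUDIANT VALUES (?, ?, ?, ?)",
    [
        (1, "Durand", "H", "1980-03-14"),   # sample rows, made up for illustration
        (2, "Martin", "F", "1981-07-02"),
        (3, "Petit", "H", "1979-11-30"),
    ],
)

# Projection: names and gender codes.
print(cur.execute("SELECT Nometu, Cdsexe FROM ETUDIANT").fetchall())

# Selection: male students only.
print(cur.execute("SELECT * FROM ETUDIANT WHERE Cdsexe='H'").fetchall())

# Projection + selection: students born in 1980.
print(cur.execute(
    "SELECT Numetu, Nometu FROM ETUDIANT "
    "WHERE Dtnaiss >= '1980-01-01' AND Dtnaiss <= '1980-12-31'"
).fetchall())
```

Because the dates are stored in the yyyy-mm-dd format recommended above, the string comparisons in the last query sort chronologically, which is exactly why the tutorial insists on that format.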
OPCFW_CODE
Discover the most important announcements from Microsoft at the SharePoint Conference held in May 2018. We summarize and evaluate the new features that will become available in SharePoint Online and Microsoft Office 365 in the near future. Subscribe to our blog From the 21st to 23rd of May 2018, Microsoft organized the SharePoint Conference North America in Las Vegas. The event gathered speakers and professionals from all over the world to share news and knowledge about SharePoint and the Office 365 platform. Microsoft seized this opportunity to detail on the things to come on these platforms. During keynotes and conference sessions, presentations from enthusiastic speakers were welcomed with cheers from a passionate crowd. This year's SharePoint Conference was special as it was the first one since 2014. Although in the past years there were SharePoint updates coming from more general Microsoft events, as SharePoint and Office 365 are booming business – 400,000 companies are using SharePoint, with a 70% increase of users in 2017 –, Microsoft decided to grant these stars of productivity systems their own conference again. Of course, this came with some major announcements! Follow through as we bring you up-to-date with a rundown of all things collaboration and intelligence in the world of Microsoft. During the keynote session, Microsoft surprised us with a radical innovation: SharePoint Spaces. As a mixed reality rendition of SharePoint sites, a site can be shown in a three-dimensional space with its news, lists and libraries. With users being able to explore the rendition via any browser or with a 3D headset, this new feature will certainly improve engagement, as employees are literally sucked into the data environment. We are particularly excited about this news, as AMPLEXOR is actively working on the topic of augmented reality. With SharePoint Spaces, we can extend a business AR application to the collaboration environment in Office 365. This means, for example, a development team working in AR mode on a new product could access documents without leaving the AR space. Security & compliance features On the Office 365 front, the most important announcements were about platform improvements, notably two security and compliance features we highlight below: # GDPR dashboards May 25 2018 will go down in history as Europe’s own Y2K event. The new privacy regulations have created a lot of excitement and concerns across all organizations that somehow store user data in their databases. Microsoft lends a hand via its Office 365 Security & Compliance Center with GDPR dashboards: various tools allow you to track personal user data in the Office 365 system, collect all personal data of a user in a report and manage compliance from a single place. Figure 1: GDPR dashboards in the Office 365 Security & Compliance Center # OneDrive Files Restore In 2017, the Wannacry ransomware attack wreaked havoc on hard drives in hundreds of companies across the globe. Microsoft saw this as an opportunity to greatly improve OneDrive for Business with an impressive backup service. If you suspect that a file synced to OneDrive has been compromised, corrupted or accidentally deleted, you can go back in time and restore it to any second in the last 30 days. This means that you can recover all your files from a ransomware attack in just a few clicks – of course after thoroughly cleaning your workstation. 
Figure 2: OneDrive File Restore The Office 365 ecosystem is becoming more intelligent with the help of Artificial Intelligence (AI). Since everyone is able to find relevant content more quickly and easily, this leads to improved productivity for all users within the organization. # Improved search experience The Office 365 search engine is one of the best-known fundamentals of the ecosystem, and it will provide you with an even better search experience soon. With the search engine expanded with artificial intelligence, as soon as you indicate that you will be using search, e.g. by selecting a search box, you will see results coming up. The results are composed of relevant documents, SharePoint sites and people from your organization that are deemed relevant for you by the AI engine, based on who you work with, trending documents, etc. Figure 3: Discover relevant content through search even before entering a query. # Microsoft Stream Microsoft Stream, a replacement for O365 Video, is an internal YouTube-like platform enriched with intelligence. Apart from being nicely integrated in many Office 365 applications like SharePoint, PowerPoint or Microsoft Teams, all videos uploaded to the platform are automatically transcribed by an AI engine. As a result, your videos can display closed captions and let you find terms and phrases pronounced in the video via a regular search. # Image analysis All images uploaded into Office 365 are automatically processed by an AI engine. After deskewing and auto-cropping the uploaded picture, Office 365 identifies the type of image, e.g. a receipt, portrait of people, a whiteboard with notes, etc. The identification results, together with the geolocation data, are stored as metadata. Furthermore, the AI engine will also perform optical character recognition (OCR) on the image and extract any relevant text it can find and store the results in SharePoint. This image intelligence allows an organization to start business processes with nothing more than image upload! Figure 4: A picture of a receipt in Office 365 with the automatically added metadata: subject of the picture, location and extracted text. About the author Alexander Ernon is ECM Business Consultant at Amplexor. Alexander has over 13 years’ experience in business analysis, software consulting and project management, specializing in SharePoint and Office 365 collaborative platforms. He's a Prosci Certified Practitioner in Change Management and certified by Microsoft Microsoft 365 Teamwork administration and deployment.
OPCFW_CODE
/*
 * MathFunctions.cpp
 *
 * Created on: Mar 4, 2015
 *     Author: mgazzola
 */
#include <algorithm>   // std::max, std::min
#include <cmath>       // fmod, acos, sqrt, log
#include <cstdlib>     // rand, RAND_MAX

#include "MathFunctions.h"

// Returns fmod shifted into [0,r]
REAL posMod(const REAL a, const REAL r)
{
    REAL v = fmod(a, r);
    if (v < 0)
        v += r;
    return v;
}

// Calls the normal acos func, but if x>1 returns 0, and if x<-1 returns PI,
// so as to get around precision errors making acos(cos(0)) = nan
REAL arcCos(const REAL x)
{
    const REAL argumentACosClamped = std::max((REAL)-1.0, (REAL)std::min(x, (REAL)1.0));
    return acos(argumentACosClamped);
}

// Gaussian random normal generator without trigonometric calls using the
// polar Box-Muller transform
double randn_notrig(const double mu, const double sigma)
{
    double var1 = 0.0;
    double var2 = 0.0;
    double rsquared = 0.0;

    // The polar Box-Muller transformation produces two independent
    // normally-distributed deviates; this version simply returns one of them.

    // Choose pairs of uniformly distributed deviates, discarding those
    // that don't fall within the unit circle
    do
    {
        var1 = 2.0 * (double(rand()) / double(RAND_MAX)) - 1.0;
        var2 = 2.0 * (double(rand()) / double(RAND_MAX)) - 1.0;
        rsquared = var1 * var1 + var2 * var2;
    } while (rsquared >= 1.0 || rsquared == 0.0);

    // Calculate the polar transformation for each deviate
    const double polar = sqrt(-2.0 * log(rsquared) / rsquared);

    // Return the second deviate
    return var2 * polar * sigma + mu;
}

/*
// PREVIOUS VERSION
double randn_notrig(const double mu, const double sigma)
{
    bool deviateAvailable = false;  // flag
    float storedDeviate;            // deviate from previous calculation
    double polar, rsquared, var1, var2;

    // If no deviate has been stored, the polar Box-Muller transformation is
    // performed, producing two independent normally-distributed random
    // deviates. One is stored for the next round, and one is returned.
    if (!deviateAvailable)
    {
        // Choose pairs of uniformly distributed deviates, discarding those
        // that don't fall within the unit circle
        do
        {
            var1 = 2.0*( double(rand())/double(RAND_MAX) ) - 1.0;
            var2 = 2.0*( double(rand())/double(RAND_MAX) ) - 1.0;
            rsquared = var1*var1 + var2*var2;
        } while ( rsquared >= 1.0 || rsquared == 0.0);

        // Calculate the polar transformation for each deviate
        polar = sqrt(-2.0*log(rsquared)/rsquared);

        // Store first deviate and set flag
        storedDeviate = var1*polar;
        deviateAvailable = true;

        // Return second deviate
        return var2*polar*sigma + mu;
    }
    else
    {
        // If a deviate is available from a previous call to this function, it is
        // returned, and the flag is set to false.
        deviateAvailable = false;
        return storedDeviate*sigma + mu;
    }
}
*/
STACK_EDU
This is an old revision of the document!

The initial requirements for the project are described here. Other requirements will be added as list items here, with detailed descriptions (where necessary) added to the requirements documentation.
- Be able to skip whole directories with one check
- Provide the same functionality as REDIRECTTEST, not necessarily using the current design

More documentation and source code can be found here.

ToDo & who
- Add a check for 5.3 only (Georg - done, 6th May)
- Implement valgrind (high priority from Nuno)
- Implement CGI tests (Zoe)
  - GET (done, 27 April)
  - POST (done, 28th April)
  - POST_RAW (done, 3rd May)
  - GZIP_POST (done, 28th April)
  - DEFLATE_POST (done, 28th April)
  - EXPECTHEADERS (done, May)
  - COOKIE (done, 3rd May)
- Implement FILE_EXTERNAL (Georg, done)
- Prototype parallel running (Georg)
- Replace the test status array with a status object (Zoe - done, 8th June)

Record of development decisions
- Use the PEAR coding standard (22/04/2009 - see php-qa list for concerns)
- Deviation from the standard: do not use the _ prefix for private members. (Reason: too much re-work to existing code)
- We prefix class names with rt. (Reason: avoid name conflicts with built-in PHP classes)
- We do not follow the original PEAR class naming scheme. (Reason: we use autoload and name prefixes, so we can keep class names shorter than DIR_DIR_DIR_CLASS)
- We will not include both file and class doc blocks. (Reason: only ever one class per file)

This page configuration (the address is a bit screwy, but I won't change it as I've posted a link to it) has a list of currently implemented options, and whether we will re-implement them or not. Yes means we will; blank means not decided, and we have asked for input on the QA list.
- Run tests in parallel?
- XML or CSV or TXT (to replace HTML)
- Option to specify arguments to valgrind

Things that will not be re-implemented

This section is here to record differences between the new and old implementation of run-tests where they affect the execution of existing tests or the manner in which run-tests.php is executed. The way these are currently implemented, the test has to look like this:

--POST--
Some posted content
--GZIP_POST--
1
--FILE--
etc

This isn't really necessary. In the new implementation the --GZIP_POST-- section will just contain the content to be posted and gzip'd. Same for DEFLATE_POST. This affects two tests:
OPCFW_CODE
In this post we’ll explore the differences between the new -E revision ESP32 modules verses previous revision modules (-D and older). This post will highlight the differences between both ESP32-WROOM-32D / 32E and WROVER-I /WROVER-IE. At the beginning of 2020, Espressif announced the new series of modules, which ended with the letter E, namely All these modules share the ESP32 ECO-V3 chip, which is the newest ESP32 core silicon release (at the time of writing). The standard ESP32 SoCs, the ESP32-D0WD and ESP32-D0WDQ6, have been updated and named ESP32-D0WD-V3 and ESP32-D0WDQ6-V3. In the next section we will see which are the main differences between the ECO-V3 and the previous ESP32 versions and in the third section we’ll look at one very important improvement. We will then see when to use the -E modules and when to use the older ones. Regarding firmware and pinout compatibility, the older and new modules are interchangeable. Improvements in the ECO-V3 chip # There are six main differences reported by Espressif – four bug fixes and two improvements. Listed in the order chosen by Espressif: - External PSRAM bug fix: Previously when accessing the external PSRAM in a specific sequence a read/write error could occur. - CPUs simultaneous reading error fix: Reading certain different memory regions at the same time with both CPUs could previously yield an error. - Crystal oscillator start-up fix: In the previous silicon release the crystal sometimes couldn’t start if certain environmental conditions are met. - Fault injection security issue fix: this problem is detailed in the next section. - Minimum CAN baudrate reduced: ECO-V3 can use a baudrate as low as 12.5kHz instead of 25kHz. - Download boot mode can now be disabled For more information, please refer to the Official ECO-V3 User Guide. Fault Injection Vulnerability # In September 2019, LimitedResults described in his post a serious ESP32 vulnerability. He showed how a timely glitch on the supply rails of the ESP32 can lead to the reading of both secure bootloader encryption key (SBK) and flash encryption key (FEK). The ESP32 eFuses and boot process # To understand the exploitation, we must first understand the ESP32 eFuses. eFuses are non-volatile memory blocks which can be written only once: When a 1 is written in an eFuse bit, it cannot be reverted to 0. eFuses are organized in four blocks: - BLK0: contains configuration bits, among which read and write disabled flags for blocks BLK1 and BLK2. - BLK1: Flash encryption key (FEK) - BLK2: Secure bootloader key (SBK) - BLK3: Non-security related configuration bits The ROM bootloader uses the SBK to verify the user bootloader on flash, which then checks trough ECDSA whether the application firmware is signed properly. After the reset, the eFuse controller reads the fuses and checks all the BLK0 flags. LimitedResults was able to inject a glitch on the supply rails at the time when the eFuses controller reads the configuration bits, thus allowing him to read both SBK and FEK – even if the chip was secured and all the BLK0 flags are set to disable the reading. For detailed explaination, please read the ESP32 Fault Injection Vulnerability Press release. The required equipment for this attack is fairly common and inexpensive but on the other hand the attacker needs physical access to the device to exploit this vulnerability. Our only recommended solution is to upgrade to the new ECO-V3 chip, but note that this will require re-certification of your product. 
Previous revision modules are still available to buy, but using them is not recommended.
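If you need to check which silicon revision a particular board actually carries before deciding whether the fault injection fix and the other ECO-V3 changes apply to you, esptool can report it. A minimal sketch, assuming esptool is installed and the module is attached on /dev/ttyUSB0 (adjust the port for your setup; the exact banner wording varies between esptool versions):

import subprocess

# Ask esptool to identify the attached chip; its banner includes the silicon revision.
result = subprocess.run(
    ["esptool.py", "--port", "/dev/ttyUSB0", "chip_id"],
    capture_output=True, text=True, check=True,
)
for line in result.stdout.splitlines():
    # Lines such as "Chip is ESP32-D0WD-V3 (revision 3)" indicate ECO-V3 silicon.
    if "revision" in line.lower():
        print(line)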
OPCFW_CODE
Why does my laptop overheat on 14.04? I have read similar posts here but could not find an answer that solves my problem. So here is what happens: As soon as i open one app in Ubuntu 14.04 my laptop overheats making noise . Laptop specs are : Intel Corei5-3210M CPU @ 2.50GHz 2.50 GHz, RAM 4GB, Nvidia Geforce GT 630M 1GB. I am aware of the project called bumblebee , i have installed it (using the cmd: sudo apt-get install bumblebee bumblebee-nvidia primus linux-headers-generic) but nothing changes. I am also aware of nvidia-prime , installed it but still nothing changed. Can you suggest a possible solution ? What am i doing wrong ? Is it brand new or is it full of dust? had similar problems in Windows on my laptop... @Alvar It is kinda new actually ... but i will check it for dust , thanks for the suggestion ! What is the app that causes it to overheat? Or is it all apps? I tried to fix mine (Intel i7 + Intel HD 4500 Graphics) and the real problem is Unity Desktop, It is not lightwight enough, in every new version it is getting a little better but still not good enough as the classic desktop, gnome 3 or the kde. I hope Canonical Devs will fix this and finally concentrate on flat design. You can install indicator-cpufreq to adjust the cpu to a lower frequency. sudo apt-get update sudo apt-get install indicator-cpufreq Use CTRL + F2 to open a run dialog and enter the command: indicator-cpufreq to start it up the first time and it should auto start at boot after that. Also, make sure you have all the sensor apps installed sudo apt-get install lm-sensors hddtemp libsensors4 xsensors and run the following command to detect the sensors sudo sensors-detect Follow the recommendations closely just don't do anything risky and you'll be fine. To show the temps in realtime use the following command: watch -n 1 -d sensors Also, you can install the prime-indicator to help you manually select the graphic card sudo add-apt-repository ppa:nilarimogard/webupd8 sudo apt-get update sudo apt-get install prime-indicator And check out this post from webupd8 concerning nvidia-prime on 14.04. http://www.webupd8.org/2013/08/using-nvidia-graphics-drivers-with.html source http://www.webupd8.org/2014/01/prime-indicator-lets-you-quickly-switch.html Finally, you might find this helpful http://www.webupd8.org/2014/04/prevent-your-laptop-from-overheating.html I have followed the steps above but my laptop still overheats when i open apps like firefox, software center, and sometimes office... Maybe because i have both nvidia-prime and bumblebee installed ? Should i unistall some of those ? Thanks for the detailed answer ! @user308137 have you checked the actual temps? the only other thing I can think of is switch to intel for firefox and use VAAPI with VDPAU http://www.webupd8.org/2013/09/adobe-flash-player-hardware.html after install, switch to intel driver and execute VDPAU_DRIVER=va_gl firefox to run firefox. It's supposed to cut cpu usage because of support for accelerated decoding. @user308137 allot of people don't actually check the temps and when they do, report regular operating temps for their processor(s). Another aspect to take into consideration is that, even if you are using windows 7 or 8 x86_64, most of the time, you are actually only running the 32 bit version of any given program on a 64bit capable operating system. Take firefox for instance, which is not even available for windows in 64bit form unless you use "firefox nightly" . 
It may be that you're not used to fully utilizing your processor(s) because you were only running 32bit applications but now 64. Ok the problem is solved ! Here is what i did after a fresh install of Ubuntu 14.04: 1) sudo apt-get install bumblebee bumblebee-nvidia primus nvidia-331 2) sudo apt-get update 3) sudo apt-get install indicator-cpufreq 4) Run indicator-cpufreq (like said above). Thank you all for putting the time and effort to help ! if your laptop has Nvidia chipset, this next lines may help you, I was having heat problems when watching any video through the internet o any local video player, cause the computer was overheating a lot, even with external cooler pads, so I search for many suggestions and finally one solution brought coolness to the coreI7 laptop which actually has a GeForce graphics card within the link is this... http://www.webupd8.org/2013/08/using-nvidia-graphics-drivers-with.html but you should check it out before do anything but the resumed process is here... Install Nvidia-Prime in Ubuntu 13.10 or 14.04 Follow the instructions below only if you know what you're doing and how to revert the changes in case LightDM fails to start, etc.! Update: the instructions below should work under Ubuntu 14.04 Trusty Tahr too. Note: the nvidia-prime package only supports the proprietary Nvidia drivers and won't work with Nouveau! If you're using Ubuntu 13.10 Saucy Salamander or 14.04 Trusty Tahr and want to test the Optimus support in the Nvidia Graphics Drivers 319.12+, here's what you need to do: Firstly, purge Bumblebee if installed: sudo apt-get purge bumblebee* Also, make sure libvdpau-va-gl1 is not enabled system-wide because it causes Nvidia Settings to crash on start - if it is, either disable it or simply remove the package: sudo apt-get purge libvdpau-va-gl1 Install the proprietary Nvidia drivers and the Nvidia Prime package: sudo apt-get install nvidia-319 nvidia-settings-319 nvidia-prime Reboot the system. That's it, after a reboot, your laptop should be using the Nvidia GPU to render the desktop. Now my laptop is cooling itself. I just installed: apt-get update apt-get install bumblebee bumblebee-nvidia indicator-cpufreq reboot apt list --installed | grep -i nvidia bumblebee-nvidia/xenial,now 3.2.1-10 amd64 [installed] nvidia-304/xenial,now 304.131-0ubuntu3 amd64 [installed,automatic] nvidia-current/xenial,now 304.131-0ubuntu3 amd64 [installed,automatic] nvidia-opencl-icd-304/xenial,now 304.131-0ubuntu3 amd64 [installed,automatic] nvidia-settings/xenial,now 361.42-0ubuntu1 amd64 [installed,automatic]
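If you would rather script the temperature check than keep a watch window open, the same lm-sensors data used above is also exposed through psutil. A minimal sketch, assuming psutil 5.1 or newer on Linux (chip and label names vary from machine to machine):

import psutil

# Read the same data that the `sensors` command prints (Linux only).
temps = psutil.sensors_temperatures()
for chip, entries in temps.items():
    for entry in entries:
        label = entry.label or chip
        print(f"{chip}/{label}: {entry.current:.1f} C (high={entry.high})")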
STACK_EXCHANGE
What is an Internal Developer Platform (IDP)? #
An Internal Developer Platform (IDP) is a layer on top of the tech and tooling an engineering team has in place already. It helps Ops teams structure their setup and enable developer self-service.

TLDR; Internal Developer Platforms (IDPs) are configured by Ops teams and used by developers. Ops teams specify what resources start up with which environment or at which request. They also set baseline templates for application configurations and govern permissions. This helps them automate recurring tasks such as spinning up environments and resources, and makes their setup easier to maintain by enforcing standards. Developer teams gain autonomy by changing configurations, deploying, spinning up fully provisioned environments, and rolling back. IDPs can be built or bought.

How Internal Developer Platforms are used by Ops, DevOps, or Platform teams #
The Ops team primarily runs and configures the IDP. Teams running IDPs concentrate on infrastructure, service level agreements, and workflow optimization, and configure the IDP to abstract away any recurring or repetitive tasks, such as spinning up resources or environments for developers. The Ops team also sets baseline templates for configuration and avoids unstructured scripting to prevent excessive maintenance time. See below for all building blocks that Ops usually operates.

How Internal Developer Platforms are used by application developers #
IDPs integrate into existing workflows, which usually remain a git-push-to-deploy workflow, but add further automation. The entire deployment process is now at the disposal of the developer. They can request resources, spin up fully provisioned environments, roll back, deploy, and set deployment automation rules autonomously.

Five core components #
Although variations exist, a fully fledged IDP is made out of five core components. Two features are exclusively used by the Ops, DevOps, or Platform team: Infrastructure Orchestration and Role-Based Access Control (RBAC). Application Configuration Management is used by the Ops team to set baseline templates, but is also used in day-to-day activity by the application development team. Developers use the Deployment Management and Environment Management functionalities. (See Core Components.)

UI, API, or CLI? #
All of the above-mentioned building blocks are centered around an API. Depending on the maturity of the IDP, a User Interface (UI) or Command Line Interface (CLI) can be built around the API. While many IDPs are CLI-based, only a few provide a complementary UI. We also found that teams with the full set (UI, CLI, and API) show the highest satisfaction with the product.

Integrating with all existing tech and tools #
IDPs integrate with all the existing tech and tooling a team has in place already. They integrate mainly through APIs to avoid introducing yet more scripts running in clusters, which would increase the security risk and the maintenance overhead. On the cluster side, modern IDPs are (in 95% of all cases) built on Kubernetes with containers as workloads. Ops teams usually assign fixed clusters to the platform and assign them to environment types. If a developer requests a new environment, the platform can then set up a namespace in the assigned cluster and take care of updating configurations. IDPs closely integrate with CI setups by fetching built images needed to update environments or create new ones.
External resources such as databases, DNS, and others are connected through resource drivers that signal the success or failure of updating or creating a resource back to the IDP's API. Those drivers can be Infrastructure as Code (IaC) scripts or simple services. Ops tools such as monitoring, chaos engineering, and GitOps tools can be plugged into the different workflows of an IDP at the team's convenience. We've compiled a long list of all tools we see commonly used with IDPs. (See Integrations.)

What happens under the hood? #
Before a developer deploys an environment, they specify the type of environment, which tells the IDP what resources should be set to which state. They select the images (workloads) they require in the application, apply changes to the baseline configurations (if necessary), and initiate a deployment. The IDP will now take the changes to the baseline configurations and create a manifest. It will use the Infrastructure Orchestration functionality to set the right resources into the right state (let's say the application requires a namespace in GKE, a Postgres database, and a certain DNS setting to run). It will then inject the environment variables into the container and serve the running environment to the developer. (A rough sketch of such a request appears at the end of this article.)

Why is it called an Internal Developer Platform? #
Before we dive into the specifics, let's briefly look at the reason this category is evolving along with its naming conventions.
- Internal – clearly separated from externally facing platforms such as Twilio's developer platforms. IDPs are meant for internal use only.
- Developer – indicates the internal customer and the primary user, the application developer.
- Platform – characterizes the product type.
Slight variations of the name exist, but we've actively decided against those as the descriptions are less accurate and the risk of misunderstanding is too high. Those include:
- Internal platform (too broad)
- Developer portal/platform (Google it, too much overlap with externally facing portals)
- Application management framework (imprecise)
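To make the "What happens under the hood?" flow above concrete, here is a rough sketch of what a developer-initiated environment request against an IDP's API could look like. Everything in it is hypothetical: the endpoint, payload fields, and token are illustrative placeholders, not the API of any particular platform:

import requests

# Hypothetical IDP API call: request a fully provisioned environment.
payload = {
    "environment_type": "development",   # tells the IDP which resources to bring to which state
    "workloads": [{"image": "registry.example.com/shop/api:1.4.2"}],
    "resources": ["gke-namespace", "postgres", "dns-record"],
    "config_overrides": {"FEATURE_FLAG_CHECKOUT": "true"},
}
response = requests.post(
    "https://idp.example.com/api/environments",   # placeholder endpoint
    json=payload,
    headers={"Authorization": "Bearer <token>"},  # placeholder auth
    timeout=30,
)
response.raise_for_status()
# The IDP serves the running environment back to the developer.
print(response.json()["environment_url"])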
OPCFW_CODE
A few days ago, the 10,000th user signed up for 0Boxer, so I figured this would be a great time to share some of the code that makes the realtime gaming mechanics possible. Go ahead, grab the code and play with it, make your own extensions and apps, and improve it! A Thorny Road Also, no one else (as far as I know) has released any kind of frontend API for Gmail. And, in order for my app to work, I needed access to the various user actions in the UI — the backend APIs weren’t going to cut it. So, I was stuck writing my own library from scratch. A Call to Stop Duplicating Risky Work And yet, there are many companies, like Rapportive, Baydin, and Unsubscribe.com that build their own APIs. They’re all building out complex APIs with similar functionality, that can all break independently if Gmail decides to significantly change their app structure (which they inevitably will). Right now, Gmailr is pretty barebones. There are a lot of missing features. However, it has been used for the past half year at 0Boxer, serving thousands of active extension users per week. What I’m hoping for is that developers will improve Gmailr, and over time, it will be the robust API that many developers will find useful. And maybe, if we get enough apps built on top of it, the Gmail team will take notice. The architecture of Gmailr and the process of reverse engineering Gmail probably deserves its own blog post. For now, I’ll just show you a quick code snippet of what using Gmailr looks like. If you want more details, head on over to the GitHub repo. A typical use case looks like this: Basically, this will show a nice annoying message to the user everytime they archive email. The call to init does a bunch of work to bootstrap the Gmailr API. After that, there is a callback that passes in the API object, which provides the Gmailr API methods. Let’s say you want to insert some custom DOM elements. That’s easy: This will insert a div on the top of the Gmail API, and also inject your own CSS. You can use this to add new features to Gmail. I don’t have much time these days to devote to Gmailr. What I’m hoping for is to pass the torch to other developers to improve upon the seedlings. In particular, here are some ways to improve it: - Expose methods that allow insertion of UI elements into various places in Gmail, like the sidebar. - Improve the reliability of the API, taking into account the various weird states that Gmail can get into. - Add capabilities to read the contents and metadata of emails received, sent, and interacted with. If you want to get in touch, please contact me at jamesjacobyu -AT- gmail.com. Did you like this post? If so, you might like reading Empathetic Product Development for Hackers.
OPCFW_CODE
In times of change, the learners inherit the earth - the learned find themselves...
– Eric Hoffer

The greatest lesson in life is to know that even fools are right sometimes.
– Winston Churchill

The Twelve-Factor App → Interesting take on application architecture from a maintainability and deployability perspective. While I don't agree with everything (e.g. XI. Logs), I do really like some of these points, especially "Config", "Build, release, run" and "Disposability".

G. Ruby: Mocking HTTP Response body
I'm sure there are many ways to do this — probably better ways — but this is a simple way I've done it.

class FakeResponse
  attr :body

  def initialize(body)
    @body = body
  end
end

describe "Foo" do
  before(:each) do
    @data = File.read "./http/response/body.txt"
    Net::HTTP.stub!(:post_form).and_return(FakeResponse.new(@data))
  end

  it "should return stuff" do
    ...

F2. Proud Papa...
I totally just caught Thing 1 (almost 4) speaking Huttese while watching Return of the Jedi.

F2. Pro-Parenting Tip
Have your kid(s) get their PJs on 30 minutes to an hour before bedtime, when they're motivated to do so quickly by something they want to do, as opposed to motivated to do so slowly at the prospect of going to bed. I'm sharing this, as it took me way too long to learn it on my own. :P

G. Bash: past dates in epoch format
Here are some notes on converting dates to epoch and tweaking which dates are displayed. For the purpose of this, I'm trying to pass start and end dates in epoch format to a REST API. I'm looking for ranges which include: yesterday, last 7 days, last 14 days and last 31 days. Start Time Note: I set this up so that by changing the DAYS_AGO, you can control how far back in time you...

G. Ruby: sourcing and setting environment variables
I was looking for a simple way to source and set a bash environment variable from a file. In this specific instance, I can't be 100% sure of any steps I take to ensure that the file itself is sourced on the host before Rails is started, nor can I impact the format of the file itself (e.g. changing it to YAML). Here are the best options I've found. File.readline This is not...

G. On Set: Empire Strikes Back →

F2. Hey Father...
Thing 1: Hey, father?
Me: Yes, daughter?
Thing 1: When is Mother going to be home?
We've been working on manners, but I have no idea where that came from?!?!
OPCFW_CODE
Difference between revisions of "A Productive Rant About instant likes instagram" (Created page with "Python one of several main programming Utilized in 2019 most businesses are thinking about to employ it for upcoming tech. You could possibly hear the converse of the most...") Latest revision as of 09:07, 1 July 2020 Python one of several main programming Utilized in 2019 most businesses are thinking about to employ it for upcoming tech. You could possibly hear the converse of the most up-to-date version Python four. Nevertheless, Python is in the solution wherever items get sophisticated. When we glance at mobiles, IOT apparatus, and only all the landscape of computing resembles now, Along with the containers and cloud. So there is not any method to forecast not fifteen or 10 several years from right now, and what it can seem like 5 years from now. I feel we're going to carry on to determine the event in the programming component of Python. So things that really encourage It is really Procedure as equilibrium along with a language, will go on to evolve. Beyond this, I feel that It is just a large amount, common and robust language. I feel that the necessities with the Local community will feed into and have an impact on where by the speech belongs. Substantially, individuals have illustration from different groups within the development team. Smarter minds than mine can provide a much better reaction. I'm self-confident that Guido has a number of issues within your account for where he would like to view it go. Aside from the challenges which the many implementations have, another thing which Python has as terminology, And that i feel that is its authentic benefit, can it be climbs together with the individual scale. By way of occasion, you will have one human being publish some scripts within their notebook to fix a challenge they have up - Python's good for this. I think the speech will continue on to move forward to the fabric which is in Python 3. Some code foundations, which include Instagram, have transitioned to 3 from Python two. I believe once we deal with a handful of of these constraints, that may be doable, then we set up Python for the next 20 decades of growth and achievements. Very long may have a few problems. The globe differs nowadays as well as Python was devised to repair difficulties and can develop into different. Thus a lot of companies which happen to be constructing really big Python codebases are embracing type annotations, not excessive to assist Together with the working of your software, but to help with nearly all of new programmers. I think that goes a really long way in helping Python to continue to scale a scale. On pretty big Python jobs, in which you've a combination of senior and junior programmers, it may be quite a lot of effort for junior programmers to learn how to use a modern library or system, due to the fact They are coming out of a statically-typed language. Python is definitely the swiftest-escalating language about Earth. The Stack Overflow poll of this year generates symptoms it really is escalating at a formidable pace. In addition, It's not necessarily that shocking -- energetic, adaptable, and simple to learn; This is a language that's potent adequate to fix concerns in An array of locations and obtainable. buy instagram views How can it evolve to fulfil the necessities of its Community of analysts and engineers? 
Businesses who're setting up enormous Python codebases are embracing sort annotations to assist new programmers The Requirements from the Python Neighborhood Will impact where the terminology goes in long run They discover it and dip in and will pull down a Python supply code for a task which they have not discovered just before. I experience as if matters such as the kind annotations are fixing men and women, As an example, Despite the fact that Here are a few troubles as climbs within the scale. Python also climbs, allow us to say compact open up-source project with probably ten or 15 men and women leading. Python scales to tens of 1000's of people focusing on tens of Many people, or simply a work was working on huge software jobs. In precise approaches, it is actually tough to predict in which Python is relocating. Python hasn't bloated in exactly a similar fashion I feel the Java ecosystem has. With the maturity amount, I are convinced It can be considerably possible that Python's feelings will provoke potentially additional technological, languages geared toward regions of software. I see this as healthy, and I haven't got any need to generate all developers use Python for all. Change your productiveness to nesxt stage by Discovering python coaching in kochi within the Major institute in Kerala. It truly is been through more than 25 many years of this prosperous span, and It really is Amongst the speediest-developing programming languages. Python alone also exhibits a promising long run as well as its achievements Tale. Has Python turn into preferred? Python has received increased fame than ever before. Python delivers attributes which seize the interest of each programmer. Python is simple to go through and compose it lowers the confusion one of many developers. One of several companies Google works by using Python thanks to its purposes and is made up of a devoted portal website to Python. Down below undoubtedly are a couple traits of Python that may job the reasons it's received a thing in widespread. 1. Supportive Community Python has a abundant in the programming languages also have supports challenges? A variety of them absence in the instruction, making it really hard for a developer. Python does not have these troubles. It can be existed for fairly a while, so you can find a great deal of guides, tutorials, instruction and much more. It supports to the programmers and comes with implementation. The community has a developer who delivers support and seasoned developers. two. Basic to Code and Produce Python includes a readable and simple code in lieu of other programming languages like Java, C or C++. The code is built in an easy vogue, which can be interpreted by a beginner developer. Whilst to master Python programming, then it's going to expect a substantial amount of work and time, nonetheless to master that language from scratch is straightforward for just a rookie. That he can demonstrate exactly what the code is imagined to carry out, even looking within the code. 3. Availability and Open up-Resource Python is surely an open up-source programming language which implies its source code is obtainable. You utilize or might alter its program. It's freely offered, and you can obtain it using this hyperlink It is possible, to begin with, this Python by just setting up it. four. Standard Library Python features a extensive conventional library. These libraries choose absent the try to publish code or perhaps a function. 
The library is manufactured up of which have been pre-prepared and inbuilt applications, which suggests you needn't generate a code for every and all the things. Including device-tests sayings, World wide web browsers, databases, threading plus much more. 5. Cross-System Language It can run competently on several functioning devices like Windows, Linux, Ubuntu, etc. in order that it could be algorithms interpreted It is just a mobile language. In the event you've created your code on the Home windows System Which means, Additionally it is attainable to operate it. There wouldn't be a need to produce modifications in your method to run it. Job Opportunities Related to many programming languages obtainable with Python, Python has outraced the words and phrases. Simply because its prevalence has risen by forty for every cent, job chances also have developed. IT corporations are trying to find applicants with tactics and skills. This has shown the job scope for its Python developers. This is the listing of Individuals Job profiles for your Python developers with their wages. Additionally, it is made up of programs in match improvement, desktop programs, embedded computer software or scripting. Several of the broadly utilised Python applications are 1. World-wide-web Enhancement: On the web frameworks permit you to generate. Working with web frameworks, it helps in developing
OPCFW_CODE
Comparing corpora

So first, a disclaimer: I'm less than beginner level in everything that has to do with programming. I'm analysing the occurrences of English code-switching/translanguaging in a particular Polish-language IM groupchat. I've already compiled the conversation into a single text document (like 700k words in total, I know, a lot) and now I'd like to extract just the occurrences of English words among all the Polish words. How do I do it? I'd be super grateful for any beginner-level explanations. p.s. I managed to upload the data to sketchengine too.

What I would do is use a wordlist as the frontline approach, and then for words that aren't matched by either wordlist or, even, are matched by both, I would use one of the many trigraph-based "language detection" libraries around. These tend to work much better with text longer than just one word, though, so I'd definitely do #1 first and #2 only if that fails.

It seems like you are about to do a little corpus analysis. You already gathered different discourses together and created a corpus. I would recommend using https://www.laurenceanthony.net/software/antconc/ It is free, professional software which I still use when analysing my own corpora. It is not that difficult to use and analyses a lot of things. First of all, you should list all the words that occur in your corpus. Just ignore all the words that are not in English. As the next step you should think about what you are really interested in. "...occurrences of English code-switching/translanguaging in a particular Polish-language IM groupchat." Think about a hypothesis and then go on with the actual analysis. Now you should look at collocation and things like this. So look up the words you found in the program and see where exactly they occur. What is their purpose, and why does the speaker use them at this specific point? In AntConc this is pretty easy. You simply type in the word you want to analyse under the concordance tab and it will show all the passages of your corpus where they occur. After this it is up to you what exactly you want to look at. And by the way, code-switching and translanguaging are not synonyms for each other. They are concepts about completely different things. I believe that you mean code-switching, where you switch to another language inside a conversation. Translanguaging is something different and should not be mistaken for code-switching. Furthermore, you are also not comparing corpora. You are simply analysing a corpus.

Hey, thanks for the detailed answer!!! But the thing is that among the 700k words, only a fraction will be in English, so I'm trying to extract these ones. Doing it by hand will be extremely tedious... I do have another corpus with a collection of hundreds of thousands of words from English-language Twitter (I'd use a dictionary but I want to account for slang words and possibly typos as well), so I was thinking of extracting all the words from the Polish corpus that are also present in the English one. Does that make sense? Again, thanks so much!

If you work with AntConc this is pretty easy. Just make a word list of words you want to investigate or words that you want to ignore in the word list tab. This is also a great way to learn how language functions. Otherwise you can also make a Zipf's law distribution. At the far end you should encounter all the words which are not Polish, because they do not occur that often.
Then you simply create a .txt with these words and load them into antconc or other corpus analysis tools. This is the professional way of analysing text.
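If you would rather automate the extraction before (or instead of) working in AntConc, the wordlist idea described above takes only a few lines of Python. A minimal sketch, where the file names are placeholders for your own chat export, your English Twitter wordlist, and an optional Polish wordlist used to drop words that exist in both languages (e.g. "to", "no", "ten"):

import re
from collections import Counter

# Placeholder file names: swap in your own corpus and wordlist files.
chat_text = open("polish_chat.txt", encoding="utf-8").read().lower()
english_words = set(open("english_twitter_wordlist.txt", encoding="utf-8").read().split())
polish_words = set(open("polish_wordlist.txt", encoding="utf-8").read().split())

# Rough tokenizer that keeps Polish diacritics and apostrophes.
tokens = re.findall(r"[a-ząćęłńóśźż']+", chat_text)
candidates = [t for t in tokens if t in english_words and t not in polish_words]

# Most frequent English candidates, for manual review in AntConc or elsewhere.
for word, count in Counter(candidates).most_common(50):
    print(word, count)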
STACK_EXCHANGE
The Heartbleed exploit explained with jellybeans, tweak Windows 7 or 8 for extra speed, manage your passwords with LastPass, and a first look at the WRT 1900AC.

What is the Heartbeat?
* The problem lies in the "Heartbeat"
- It's a way to keep a SECURE TLS session alive /// to keep it from "timing out"
- The Heartbeat is a payload of arbitrary data which is sent from one end of the connection to the other, and back again.
- If the heartbeat makes the round trip intact, then both sides of the connection know that the connection is still active and still secure.

What is the Exploit?
* The exploit is in the way that OpenSSL responds to the heartbeat.
- The SENDER of the "keep alive" packet gets to decide how much arbitrary data it sends.
-- The sender sends a heartbeat of a certain size, then tells the receiver how much data must be sent back.
*** Here's the rub... along with that data, the sender tells the receiver (running OpenSSL) how much data should be sent back... and OpenSSL doesn't check that number against the size of the incoming data.
-- Since the SENDER decides how big the arbitrary heartbeat data will be, and because OpenSSL trusts the sender as to how much data is in that heartbeat, the SENDER can send a heartbeat that will return data that was NOT originally sent by the sender.
-- In other words, an attacker can make the compromised system send data that was intended to be secure and private.

How is the Exploit Used?
* To use the exploit, an attacker would first establish an SSL connection to a compromised system.
* The attacker would then send a heartbeat to the compromised system with a 1-byte payload.
* However, the attacker tells the compromised system that it must return 64 KB.
* The compromised system sends back a 64 KB heartbeat response, giving the attacker 63,999 bytes of data that it SHOULDN'T have released.
* The attacker keeps repeating the attack until they have the compromised system's certificates and any other information that is in memory.
(A short, purely illustrative sketch of this missing length check appears further down in these notes.)

What is the impact?
* If the attacker is able to steal the credentials of the compromised system, they are able to do a number of things:
1. A MITM attack on people connecting to the site -- an attack in which EVERYTHING is in the clear
2. The ability to create "spoof sites" with the authentic certificate of the compromised site

Speed Up Windows
"Remote Assistance" is a service that runs in the background of Windows 7 & 8 and allows a remote "helper" to log into your computer to fix problems while you watch. Most people will never use the service, and it can actually be a security hole.
"System Restore" is a background service that keeps track of "save points". Theoretically it will allow you to return to one of these "healthy" save points should something happen to your OS. It's a useful feature of Windows, but it CAN'T clear viruses, and it can only return to save points that it has created... meaning that it almost never gets you back to a completely healthy image.

To Turn Off these Services
1. Right-click the "Computer" icon on the desktop and choose "Properties".
2. To the left of your computer's stats, you'll see "Control Panel Home" along with four shielded options. Click "Remote Settings".
3. You'll see a field for "Allow Remote Assistance connections to this computer" - uncheck that option and click "Apply".
4. Click on the "System Protection" tab and look for "Protection Settings". Select the drive on which protection is enabled and click "Configure".
5. Select the radio button to "Turn off system protection" and apply the change.
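Coming back to the heartbeat flaw described above, here is a minimal, purely illustrative Python sketch of the missing bounds check. This is not OpenSSL's actual code; the point is only that the response length comes from the attacker's claim rather than from the payload that actually arrived:

# Toy model of the Heartbleed flaw: the server trusts the claimed payload length.
server_buffer = bytearray(b"." * 8 + b"PRIVATE-KEY-MATERIAL AND SESSION COOKIES" + b"." * 80)

def heartbeat_response(buffer, payload, claimed_length):
    # Copy the incoming payload into the reusable buffer...
    buffer[:len(payload)] = payload
    # ...then echo back 'claimed_length' bytes. The flaw: nothing checks
    # claimed_length against len(payload), so old buffer contents leak.
    return bytes(buffer[:claimed_length])

# The attacker sends a 1-byte payload but claims it was 64 bytes long.
leaked = heartbeat_response(server_buffer, b"!", 64)
print(leaked)   # b'!' followed by 63 bytes the attacker never sent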
LastPass Password Manager
- After the Heartbleed security breach, and just for everyday use of multi-site logins, you should be using strong passwords and different passwords for every site.
- LastPass keeps your passwords in a Vault, helps you generate new random passwords, and works on all platforms.
- Your passwords are encrypted, and decryption happens only on your machine. That means even LastPass doesn't know your passwords. It's called the "Trust No One" approach.
- Don't believe me? Watch Leo and Steve Gibson go into detail not only about Heartbleed but also LastPass and TrueCrypt.
Leo's introduction to LastPass on TWiT Live Specials
Security Now Episodes 450 & 451: Leo & Steve go into detail about Heartbleed & more.
Security Now Episode: 450
LastPass Security Page
LastPass "Heartbleed" Blog Post

Install LastPass and Start Saving Passwords
- LastPass has extensions for Chrome, Firefox, and Safari, and works on all platforms: Windows, OS X, Linux, and mobile OSes like iOS, Android, Windows Phone, and even BlackBerry...

Audit and Update your Passwords
- Make sure not to save your passwords in the browser options, and start using LastPass.
- LastPass can scan your passwords, give you a rating, and show you sites that have duplicate passwords.
- It'll even tell you if any of your usernames have been used in a security breach.
How do you improve your score?
- Eliminate duplicate passwords
- Eliminate weak passwords
- Don't store passwords in email, docs, pieces of paper, your hand...

Install on your Mobile Devices
- Log in with your LastPass password. Before a recent update you used to have to log into LastPass and use their web browser through the app. But not anymore!
- LastPass will help you log into apps without having to copy passwords from the LastPass app.

Linksys Router Madness!
* 2.4 GHz only
* 10/100 LAN & WAN ports
* Broadcom 125 MHz processor (updated to 216 MHz)
* 4 MB flash storage // 16 MB system memory (later versions had 2 MB / 8 MB)
* No external storage options

* 2.4 & 5 GHz SIMULTANEOUSLY
* Beamforming tech
* 4 Gigabit LAN ports
* 1 Gigabit WAN port
* Dual-core 1.2 GHz CPU
* 128 MB flash storage // 256 MB DDR3 system memory
* USB 3.0 & eSATA connectors // support for FAT/NTFS/HFS

Connect with us!
Don't forget to check out our large library of projects on this site. If you want to search for a topic, try this custom search engine. Also, check out our transcripts.
- Google+ Community at gplus.to/twitkh
- Tweet at us at @padresj
- Email us at firstname.lastname@example.org
OPCFW_CODE
Ultima VI: The False Prophet: RAM map

Mob data is split across 16 arrays in the range 7E8000-7E9FFF, each holding 2 bytes per mob with room for 256 mobs.

Address Description
------- -----------
7E8000  Byte 0: sprite ID; Byte 1: ???
7E8200  Byte 0: ???; Byte 1: animation state (whether moving, facing direction, animation frame)
7E8400
7E8600
7E8800
7E8A00
7E8C00
7E8E00  X coordinate
7E9000  Y coordinate
7E9200  Strangely packed version of coordinates, only for visible mobs
7E9400  AI data? Always 00 00 for NPCs in your party
7E9600  Initial X coordinate
7E9800  Initial Y coordinate
7E9A00
7E9C00
7E9E00  Byte 0: ID of potential party member? NPCs that can be party members have the values 01-0D assigned in ascending order; empty mob AB has the value 0E; empty mob AE has the value 0F; all other mobs have the value 0

The memory immediately after, 7EA000, has a compacted version of this array using only 1 byte per mob. This may be a mapping of mob IDs to inventory IDs or combat stats.

Mob index Description
--------- -----------
00        Empty
01-A7     NPCs
A8-CB     Empty?
CC-FF     Dynamically allocated mobs, i.e. spawned monsters

Address Size Description
------- ---- -----------
3C      1    Selected spell level in the Cast menu
209     1    Index of currently selected menu item
17FF         Unpacked spellbook (8 bytes per level, populated with indexes of known spells)
8A56    12   vtable for menu functions (Inventory is special-cased):
             038000 Talk function
             038476 Look function
             0382BB Attack function
             038432 Cast function
             01B64D Camp function
             03880E Save function

Address Size Description
------- ---- -----------
49      2    Joy1 input state (whether a button is currently pressed)
4B      2    Joy2 input state (whether a button is currently pressed)
4D      2    Joy1 input state (whether a button started being pressed this frame)
4F      2    Joy2 input state (whether a button started being pressed this frame)
AB      2    Pointer to start of sprite data for visible mobs (always 0500)
AD      2    Pointer to end of sprite data for visible mobs
B5      2    Pointer to the next available slot for DMA copies
500          Sprite data for visible mobs
A00          Queue of DMA copies for this frame
1300    1FF  Tile data copied to OAM each frame
A3BA    1    Screen fadeout level - 0F for full vision, 00 for faded to black
1014B   1    Amount of karma
1014C   2    Amount of gold
7E0151  1    Hour
7E0152  1    Minute
7F0000  FFFF Temp space and output buffer for decompressed data
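A small helper makes the mob layout above easier to poke at from an emulator's memory viewer or a script. This is a sketch that assumes exactly the layout documented here (16 arrays of 2-byte entries starting at 7E8000, spaced 0x200 bytes apart):

MOB_ARRAYS_BASE = 0x7E8000
ARRAY_STRIDE = 0x200   # 256 mobs * 2 bytes per mob
X_COORD_ARRAY = 7      # 7E8E00 per the table above
Y_COORD_ARRAY = 8      # 7E9000 per the table above

def mob_field_addr(array_index, mob_index):
    """Address of a mob's 2-byte entry in one of the 16 arrays."""
    return MOB_ARRAYS_BASE + array_index * ARRAY_STRIDE + mob_index * 2

# Example: where mob 0xCC (the first dynamically allocated slot) keeps its X coordinate.
print(hex(mob_field_addr(X_COORD_ARRAY, 0xCC)))   # 0x7e8f98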
OPCFW_CODE
Clemens Verhoosel is an Associate Professor in Computational Methods for Model- and Data-Driven Engineering in the section Energy Technology and Fluid Dynamics at the Department of Mechanical Engineering at Eindhoven University of Technology (TU/e). His main interest is the development and application of numerical methods for engineering applications in solid mechanics, fluid dynamics, and coupled problems. Clemens' current research focuses on scan-based immersed isogeometric analysis, an analysis paradigm that enables the simulation of complex geometries based on scan data. Compared to traditional finite element methods, this methodology eliminates meshing and geometry clean-up operations from the analysis workflow, resulting in a substantial time reduction of the design-through-analysis cycle. Clemens' research also considers uncertainty quantification, focusing on the tailoring of numerical methods to enable the application of Bayesian inference to complex engineering problems.

Proposition 9, appended to the thesis Multiscale and Probabilistic Modeling of Micro Electromechanical Systems by Clemens V. Verhoosel, 12 October 2009: "A numerical method deserves the stamp 'robust' when an overnight computation can be combined with a good night's rest."

Clemens Verhoosel obtained his MSc (with honors) from the Faculty of Aerospace Engineering at Delft University of Technology in 2005. His Master's degree was complemented with successful participation in the TU Delft Honors Track program. In 2009, Clemens was awarded his PhD degree at TU Delft with the cum laude distinction. In 2009 and 2010 Clemens held a post-doctoral position at the Institute for Computational Engineering and Sciences at the University of Texas at Austin, where he conducted pioneering work on isogeometric failure analysis. As of 2010, Clemens holds the position of associate professor at the Department of Mechanical Engineering at Eindhoven University of Technology, where in 2011 he was awarded a prestigious personal VENI grant from the Netherlands Organisation for Scientific Research (NWO). Clemens' research interests are reflected in his teaching, in particular in the Advanced Discretization Techniques course (Isogeometric Analysis) and the Scientific Computing for Mechanical Engineering course. Clemens is an active contributor to the open-source Python-based (isogeometric) finite element toolkit Nutils (www.nutils.org).

Image-based goal-oriented adaptive isogeometric analysis with application to the micro-mechanical modeling of trabecular bone, Computer Methods in Applied Mechanics and Engineering (2015)
An isogeometric analysis approach to gradient damage models, International Journal for Numerical Methods in Engineering (2011)
An isogeometric approach to cohesive zone modeling, International Journal for Numerical Methods in Engineering (2011)
A phase-field description of dynamic brittle fracture, Computer Methods in Applied Mechanics and Engineering (2012)
Computational homogenization for adhesive and cohesive failure in quasi-brittle solids, International Journal for Numerical Methods in Engineering (2010)
OPCFW_CODE
Modellansatz: Modell171 - Algebraic Geometry On closer inspection, we find science and especially mathematics throughout our everyday life, from the tap to automatic speed regulation on motorways, in medical technology or on our mobile phone. What the researchers, graduates and academic teachers in Karlsruhe puzzle about, you experience firsthand in our Modellansatz Podcast: "The modeling approach“. Gudrun spent an afternoon at the Max Planck Institute for Mathematics in the Sciences (MPI MiS) in Leipzig. There she met the Colombian mathematician Eliana Maria Duarte Gelvez. Eliana was a PostDoc at the MPI MiS in the Research group in Nonlinear Algebra. Its head is Bernd Sturmfels. Now she works as Postdoc at the University of Magedburg. They started the conversation with the question: What is algebraic geometry? It is a generalisation of what one learns in linear algebra insofar as it studies properties of polynomials such as its roots. But it considers systems of polynomial equations in several variables so-called multivariate polynomials. There are diverse applications in engineering, biology, statistics and topological data analysis. Among them Eliana is mostly interested in questions from computer graphics and statistics. In any animated movie or computer game all objects have to be represented by the computer. Often the surface of the geometric objects is parametrized by polynomials. The image of the parametrization can as well be defined by an equation. For calculating interactions it can be necessary to know what is the corresponding equation in the three usual space variables. One example, which comes up in school and in the introductory courses at university is the circle. Its representation in different coordinate systems or as a parametrized curve lends itself to interesting problems to solve for the students. Even more interesting and often difficult to answer is the simple question after the curve of the intersection of surfaces in the computer representation if these are parametrized objects. Moreover real time graphics for computer games need fast and reliable algorithms for that question. Specialists in computer graphics experience that not all curves and surfaces can be parametrized. It was a puzzling question until they talked to people working in algebraic geometry. They knew that the genus of the curve tells you about the possible vs. impossible parametrization. For the practical work symbolic algebra packages help. They are based on the concept of the Gröbner basis. Gröbner basis help to translate between representations of surfaces and curves as parametrized objects and graphs of functions. Nevertheless, often very long polynomials with many terms (like 500) are the result and not so straightforward to analyse. A second research topic of Eliana is algebraic statistics. It is a very recent field and evolved only in the last 20-30 years. In the typical problems one studies discrete or polynomial equations using symbolic computations with combinatorics on top. Often numerical algebraic tools are necessary. It is algebraic in the sense that many popular statistical models are parametrized by polynomials. The points in the image of the parameterization are the probability distributions in the statistical model. The interest of the research is to study properties of statistical models using algebraic geometry, for instance describe the implicit equations of the model. Eliana already liked mathematics at school but was not always very good in it. 
When she decided to take a Bachelor course in mathematics she liked the very friendly environment at her faculty in the Universidad de los Andes, Bogotá. She was introduced to her research field through a course in Combinatorial commutative algebra there. She was encouraged to apply for a Master's program in the US and to work on elliptic curves at Binghamton University (State University of New York) After her Master in 2011 she stayed in the US to better understand syzygies within her work on a PhD at the University of Illinois at Urbana-Champaign. Since 2018 she has been a postdoc at the MPI MiS in Leipzig and likes the very applied focus especially on algebraic statistics. In her experience Mathematics is a good topic to work on in different places and it is important to have role models in your field. - E. Duarte, Ch. Görgen: Equations defining probability tree models - E. Duarte: Implicitization of tensor product surface in the presence of a generic set of basepoints. 2016. Journal of Algebra and Applications(to appear). - Rigidity of Quasicrystal Frameworks - webpage - E. M. Duarte, G. K. Francis: Stability of Quasicrystal Frameworks in 2D and 3D Proceedings of the First Conference Transformables 2013.In the Honor of Emilio Perez Piñero 18th-20th September 2013, Seville, Spain - Portraits of people working in Nonlinear Algebra
OPCFW_CODE
I have heard a lot lately about data scientists. It seems like these guys are almost as big as Justin Bieber right now. I even saw the Harvard Business Review recently called data scientist the sexiest job of the 21st century. Is that cool or what? When I saw that article, and I had to tweet “that true, if they disqualified actors, singers, dancers and beauty queens from the contest, then yes it’s probably on the top 10 sexiest jobs”. But you know my criterion for sexiest job is that the people have got to look good naked. I doubt that data scientists fill that bill, but you never know. But no we don’t want them tweeting those pictures to us right now. But let’s get serious for a second. What is a data scientist? I keep getting asked that question for the good reason that I keep a blogging on the topic and talking about it constantly. The whole notion of a data scientist is that it is somebody who is an analytics professional, whose core job it is to build statistical models of complex data sets, large complex data sets, in order to be able to find statistical patterns within that data that are not apparent to the naked eye or may not be apparent to structured reports that let’s say you might pull up in your business intelligence application. So we can say statistical modeling, fundamentally, it’s what a data scientist does. And they build statistical models for a number of purposes. Applications and businesses have been using them for a long time. |#1 Ranking: Read how InetSoft was rated #1 for user adoption in G2's user survey-based index Data mining is another task you associate with data scientists. First and foremost you’ll find statistical dependence for what are often called non obvious patterns in data sets. You are trying to mine the data. It could be customer buying data. You are looking for customer buying patterns going back any number of years. Where you’re trying to look at those patterns across diverse variables, diverse independent variable that you know individually or in combination you know explain why a customer bought a given product on a given day in a given store at a given price and so forth. So one part is data mining, which is looking for patterns in historical data sets. The forward looking aspect to that is predictive modeling. You look at historical trends based on statistical patterns found in the data. And then you projector or forecast what will or might happened if various variables come to pass in the future. When I say the future, quite often predictive modeling in a business context is what is the customer likely to do in one minute from now if we make them the following offer with the following terms and so forth. So if you look at data mining and predictive modeling as being core functions of data scientists, you also look at things like natural language processing for content analytics like the social sentiment analysis. That’s another core data scientist function. So really the whole range of advance analytics functions focused on predictive and content analytics -- that all data science. Modeling and predictive analytics going on sounds like it’s pretty intense. What kind of training is required? Are we looking at something like eight years of college and doctorate degree? You’re more than welcome to get a doctorate degree not only in statistical and mathematical subjects, but just in business areas, whether it’s economics or marketing or psychology or what not, any number of degrees are really, really good background for data science. 
You don’t necessarily need an advanced degree to do this science. You don’t even necessarily need to finish all four years in college if you have learned at the skills of data science in your schooling or on the job or even taught them to yourself. View a 2-minute demonstration of InetSoft's easy, agile, and robust BI software. The important thing is can you do the work. Can you build statistical models? Can you score them against fresh data to look at the fitness of those statistical models against actual data that’s observed in the field. Can you use the tools whether it be SPSS or modeling tools that allow you to build our models or what not. Can you build models? Can you prepare the data? Can you extract them from various sources and combine it and transform it to a form that you can build a model around? So in other words if you can do the work, and you don't have any schooling, that’s great. But usually in the business world we prefer that you have at least a background and a BA or a BS, and hopefully you’ve got a focused major that makes you a valuable in the business world. Let’s say you’re building a marketing campaign optimization model, it’s often a good idea to either have a degree or understanding of marketing best practices. Then in that case you’re competent to build your statistical model. You’re a subject domain expert. That makes you quite valuable.
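As a concrete, minimal illustration of the "build a model, then score it against fresh data" loop described above, here is a sketch using scikit-learn on synthetic data. The data is made up; the point is the fit-and-score workflow, not the particular model:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for historical customer data (e.g. "did the customer buy?").
X, y = make_classification(n_samples=5000, n_features=10, random_state=0)
X_train, X_fresh, y_train, y_fresh = train_test_split(X, y, test_size=0.25, random_state=0)

# Build the statistical model on historical data...
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# ...then score it against fresh, held-out data to check the model's fitness.
print("accuracy on fresh data:", accuracy_score(y_fresh, model.predict(X_fresh)))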
OPCFW_CODE
For some, success can be defined by revenue and downloads, while for others, creating an app is about promoting a cause and trying to help others. An iOS developer will tend to have a bachelor’s degree in computer science, software engineering or similar technical field. Measure user engagement, marketing campaign performance, and monetization with App Store Connect, which includes information you won’t find anywhere else and requires no technical implementation. If you don’t, studying canned answers will not bring you far. One thing I never understood is people that prepare for interviews by studying common iOS development interview questions. If you want more details about the process and email scripts to contact people you don’t know, check this article from Ramit Sethi. And here, you can find what questions to ask once you get a contact. Average amount of time ios developers train SQLite is a multiplatform technology, so your knowledge will be transferrable too. In more than ten years, only a few of the projects I worked on used SQLite. I would say that the second-most-important tool for iOS developers after Xcode is Git. To create the user interface of iOS apps, you need to use a UI framework. As a first step, I recommend you get familiar with Apple’s Human Interface Guidelines for iOS. We offer a wide variety of programs and courses built on adaptive curriculum and led by leading industry experts. This course covers the essentials of using the version control system Git. You’ll be able to create a new Git repo, commit changes, and review the commit history of an existing repo. Hello there, Welcome to Full Stack Web Development with C# OOP, MS SQL & ASP.NET MVC course. Designed for iPad App Development with Swift certifications are available through an exam administered by Certiport and shows that you’re ready to take the next step in becoming an app developer. Build fundamental iOS app development skills with Swift. And, master the core concepts https://wizardsdev.com/en/vacancy/middle-ios-developer-swift/ and practices that Swift programmers use daily and build a basic fluency in Xcode’s source and UI editors. So if you’re ready to be in demand, iOS developer is a great career choice! And remember, Udacity has programs that teach all of this and more. The Grammarly app checks content for grammar mistakes and makes suggestions for improvements to your writing. Creating a to-do-list-style productivity app is an excellent exercise for a novice iOS developer. Although the concept is simple, the execution has plenty of room for expanded features. Should you learn to make apps using SwiftUI or UIKit? It’s free and you can download it straight from the Mac App Store, which you should do right now. The thinking behind this approach is that most iOS developers often try to learn both Swift and iOS at the same time but usually end up getting confused and not understanding either. Although different programming skills that can be used for iOS development are transferable, using the programming languages that are meant for iOS and the Apple ecosystem are your best bet. If you are learning to code for the first time or want to focus on iOS and Apple development singularly, then Swift is the programming language to focus on. Across the board, it’s not easy to build a solution that’s fast and efficient for all users. That’s why app developers often earn high salaries and great benefits. So, in the end, my recommendation is the same for both Combine and Firebase. 
Ignore them until you get to the point where you can evaluate if they are suitable for you or you need them for your job. Combine and SwiftUI came out simultaneously, and yet, the number of questions on Stack Overflow for the former is one order of magnitude less than for the latter (and Combine is not easy to understand). FRP, like other paradigms I listed above, is an advanced concept. You should not care about anything advanced until you have a solid understanding of other vital topics. Learn from trusted sources, not from random code you find online. From Code to Customer Your first app will launch and run on the iOS simulator. Try implementing the Xcode simulator to test the app performance on various iOS systems or use an active Apple Developer account & the TestFlight tool in Xcode to download. Apple users highly value simple, elegant, inventive, and user-friendly products, and that’s what Apple wants to feature on the App Store. Since I learned programming in university a long time ago, I don’t have any books I can recommend. But the Big Nerd Ranch guide seems to be a good choice that many recommend. It only focuses on Swift programming, and it has good reviews, so you can start from there. Still, it’s best to learn programming in the language you will use to make apps on iOS and other Apple platforms, which is Swift. If instead, you’re going to get a job as a developer, it will depend on the job. To decide, be sure to read the last chapter of this article on finding a job. Sure, things like augmented reality, machine learning, or video game technologies are cool. But you won’t be able to use them until you learn the foundations of iOS development. - Core Data is essentially the persistence framework for Apple devices. - WidgetKit widgets, for instance, have strict API restrictions. - If you want your software to succeed in the App Store, you must adhere to its guidelines. - A receipt will be emailed to you, and you can resend the receipt to yourself via email at any time from Purchase History in Settings. 1 While the Apple Developer app is available in regions supported by the App Store, enrollment may not be supported in certain regions, for example due to sanctions or other restrictions. For information about using the Apple Developer app in China mainland, view this page in Simplified Chinese. Once your enrollment information has been verified and approved, you’ll receive an email letting you know that you can complete your enrollment. After you’ve submitted your information, it will be reviewed by Apple. Developers in select regions1 around the world can use the Apple Developer app to enroll in the Apple Developer Program and to verify their identity for other processes.
OPCFW_CODE
Python Tips for SDK Developers#

Tip #1: Intro to Virtual Environments, Poetry, and Pipx#
Everyone comes from a different perspective - please select the scenario that you most identify with.

If you know nothing about virtual environments#
If you are completely new to the concept of virtual environments, that's great! You have nothing to "unlearn". Poetry and Pipx will make your life easy. They basically make it so you never have to worry about virtual environments. Pipx and Poetry take care of virtual environments for you so that you don't have to worry about dependency conflicts.

Pipx: Use this instead of pip whenever you are installing a Python program (versus a Python library). Pipx automatically creates a virtual environment for you and automatically makes sure that the executables contained in the Python package get added to your path.

Poetry: Use this when you are developing in Python. Use pipx to install poetry with pipx install poetry. The SDK cookiecutter template already sets you up for poetry. When you are running a command with poetry run ..., poetry is doing the work to make sure your command runs in the correct virtual environment behind the scenes. This means you will automatically be running with whatever library versions you have specified with poetry add .... If it ever feels like your environment may be stale, you can run

If you already know about virtual environments#
If you are used to working with virtual environments, the challenge with pipx and poetry is just to learn how to let these new tools do the work for you. Instead of manually creating and managing virtual environments, these two tools automate the process for you.

poetry: Handles package management processes during development. Admittedly, there's a learning curve. Instead of requirements.txt, everything is managed in pyproject.toml. Adding new dependencies is performed with poetry add <pippable-library-ref> or poetry add -D <pippable-dev-only-ref>. If version conflicts occur, relax the version constraints in pyproject.toml for the libraries where the conflict is reported, then try again. Poetry can also publish your libraries to PyPI.

pipx: Install pipx once, and then use pipx instead of pip. If you are using poetry for development, then all other pip-installables should be executable tools and programs, which is what pipx is designed for. You don't need to create a virtual environment, and you don't need to remember to activate/deactivate the environment. For instance, you can just run pipx install meltano and then directly execute meltano with no virtual env reference, with no prefix to remember, and with no activation.

What is a virtual environment anyway?#
The quick explanation of a virtual environment is: a directory on your machine that holds a full set of version-specific Python packages, isolated from other copies of those same libraries so that different packages' version requirements do not conflict with one another. Each program can have its own version requirements for its dependencies, and that's okay because each virtual environment is separate from the others. For years, Python developers have had to create, track, and manage their virtual environments manually, but luckily, now we don't have to!

Tip #2: Static vs Dynamic Properties in Python and the SDK#
In Python, properties within classes like Stream and Tap can generally be overridden in two ways: statically or dynamically.
Tip #2: Static vs Dynamic Properties in Python and the SDK

In Python, properties within classes like Stream and Tap can generally be overridden in two ways: statically or dynamically. Properties such as primary_keys and replication_key should be declared statically if their values are known ahead of time (during development), and they should be declared dynamically if they vary from one environment to another or if they can change at runtime.

Here's a simple example of static definitions based on the cookiecutter template. This example defines the primary key and replication key as fixed values which will not change.

primary_keys = ["id"]
replication_key = None

Dynamic property example

Here is a similar example, except that the same properties are calculated dynamically based on user-provided inputs:

@property
def primary_keys(self):
    """Return primary key dynamically based on user inputs."""
    # (illustrative body: read the key names from user-supplied config)
    return self.config.get("primary_keys")

@property
def replication_key(self):
    """Return replication key dynamically based on user inputs."""
    result = self.config.get("replication_key")
    if not result:
        self.logger.warning("Danger: could not find replication key!")
    return result

Note that the first static example was more concise, while this second example is more extensible. Use the static syntax whenever you are dealing with stream properties that won't change, and use the dynamic syntax whenever you need to calculate the stream's properties or discover them dynamically.

For those new to Python, note that the dynamic syntax is identical to declaring a function or method, with the one difference of having the @property decorator directly above the method definition. This one change tells Python that you want to be able to access the method as a property (as in pk = stream.primary_key) instead of as a callable function (as in pk = stream.primary_key()).

If you are working on an SDK tap/target that uses a poetry-core version before v1.0.8, you may have trouble specifying a pip_url in Meltano with "editable mode" (-e path/to/package) enabled (as per #238). This can be resolved by upgrading the version of poetry-core the project builds with to v1.0.8 or later.

For more examples, please see the Code Samples page.
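To see the two styles side by side outside of the SDK, here is a small self-contained sketch (plain Python; the class names and config keys are made up for illustration) showing that a static class attribute and an @property method are read the same way by calling code:

class StaticStream:
    # Static style: the values are fixed at development time.
    primary_keys = ["id"]
    replication_key = None


class DynamicStream:
    # Dynamic style: the values are computed at runtime from user-provided config.
    def __init__(self, config):
        self.config = config

    @property
    def replication_key(self):
        return self.config.get("replication_key")


# Both are accessed as attributes, not as method calls:
print(StaticStream.replication_key)                                       # None
print(DynamicStream({"replication_key": "updated_at"}).replication_key)   # updated_at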
Gift ideas for the Excel user/computer geek in your life -- find the perfect present for your boss, or a departing co-worker, or someone special. Click on a picture to go to the web site for that item. If you love numbers, this is the clock for you. And with the jumble of numbers in this geek gadget' face, you can pretend it's any time that you want! The perfect Swiss Army tool for a computer user, has a 2 GB flash drive attachment, knife blade, nail file with screwdriver, scissors, key ring, LED mini-light, and retractable ballpoint pen. This geek gadget's not for your carry on bag when travelling though! What could be a more perfect geek gadget for a programmer, than a programmable coffee maker? Fun for your favourite mathematician. It's never too early to learn about numbers, and prepare for an exciting life with spreadsheets. An arithmetic game for children 4 and up, or pirates. This geek gadget's binary code shows what time it is. A great clock for a programmer. FBI agents use mathematics to solve crimes. It's not really a movie about Excel, the spreadsheet, but the series titles seem appropriate. For example: Excel Saga - Doing Whatever It Takes Excel Saga - Missions Improbable Excel Saga - When Excels Strike (Out) Matt Damon stars as Will Hunting, a closet math genius who works as a university janitor. He solves an impossible calculus problem scribbled on a hallway blackboard and reluctantly becomes the prodigy of an arrogant MIT professor. Behind the pi symbol on this tie, pi is calculated to the fiftieth decimal point. Awesome! These geek cufflinks might be useful if you have to calculate the tip at a fancy dinner. What geek wouldn't be proud to wear this? There are only 10 types of people in the world: Those who understand binary and those who don't Your Excel user could be the first one on the block to be able to program a hat. Who knew that hats could be geek gadgets? This book will help scientists-in-the- making discover how our world works with creative project ideas, including how to: by Scott Adams Humour, or perhaps inspiration, for those who work in cubicles. If you'd prefer a more serious gift, instead of a geek gadget, you can add an Excel book to someone's collection. There's a long list here: Excel Books and a list of Excel and Office books that I use here: Excel Bookshelf Here are a few more geeky gift ideas, from Amazon Note: This page has affiliate links, so I'll earn a small commission if you buy something on the linked site. Thanks! Last updated: December 13, 2016 10:59 AM
A fairly busy last few months has involved proposal writing, lab work, seminar presenting and even some fieldwork, this time just down the road (relative to my Antarctic expedition). This fieldwork was part of a fellow blogger’s PhD project, focussed on mapping evidence of past glaciers and ice caps in New Zealand’s Tongariro National Park (a cultural and natural World Heritage Site). The work is part of a larger GNS mission, which also includes mapping the area’s volcanic past, and the result should be similar to what has been achieved in the central Southern Alps. “That sounds interesting, but what’s the point?” Well aside from informing the public and helping tourists envisage glaciers in the valleys and mountain slopes where they now trek, it’s also important for piecing together New Zealand’s evolving climate. There are some records of past climate from marine and terrestrial indicators, including evidence of much more extensive glaciers in the South Island, however very little work has been done in the North Island. Glaciers have a close relationship with both temperature (primarily affecting the amount of melting) and precipitation (which in the form of snowfall, creates and extends a glacier). So by working out where a glacier once existed and how extensive it was you can then investigate the temperature and precipitation drivers for that time. Developing the picture of past environments and climates is a vital step in understanding causes of climate change and, together with climate simulation models, looking at future change. The detective work for these glaciers had been largely achieved on numerous previous trips. Glaciers erode valleys into bedrock slopes and deposit large moraine ridges made out of all this eroded boulder material at its limits. The next stage is to provide an age of when these glaciers formed the valleys and deposited the moraines. A best guess would be that they were this extensive during the last ice age (about 20,000 years ago), but science doesn’t make such great assumptions so this needs to be tested. Using a similar approach on Tongariro and Ruapehu to what I am using for Antarctic glaciers, the ages of when glaciers and ice caps last existed and then retreated can be determined. Put simply, cosmic radiation which has been stored in a boulder’s surface since it was uncovered and deposited by glacier ice, is measured and compared to a known rate of radioactive decay and therefore an estimation of its time since exposure can be calculated. Many days tramping around these mountains and copious boulders later, we had an ample collection of samples. Stay tuned, the dates are probably a year or two away. So basically, all this (one-way) chat is just to introduce my first attempt of time lapses and video editing. It will also hopefully explain why we are using a circular saw to attack rocks. It’s in the name of Science.
Free online video editor with professional features, no experience required. Apowersoft Free Online Audio Editor is an efficient tool designed for editing audio files quickly. As long as all the MP3 files are recorded at the same bitrate, it should just work. Merge MP3 is a simple but effective portable utility to merge MP3 files into one - the program's menus and options are also very intuitive. Useful Audio Editor is a multi-functional tool for editing audio files as an audio trimmer, merger and recorder on iPhone and iPad. All of the methods above will benefit you according to your needs. The first free audio joiner is highly recommended because it is simple and efficient, and has no limits. It is particularly suitable for those who don't like to install extra applications. Freemake Audio Converter will attract users with its intuitive interface and a wide range of supported formats. For Mac users, Fission may be a reliable choice, as it can merge MP3s on any Mac operating system with ease.

For such purposes, you could use an all-round audio editing freeware program like Audacity, but that isn't the most convenient or efficient approach. Your best bet is probably to use a smaller, more specific program for the job: a lightweight freeware splitter or joiner. Click the 'Add' button to add the MP3 files to the merge list, or simply drag & drop the files into the MP3 Joiner window. A free online app you can use to join numerous audio tracks into one. Anyone who understands what MP3 encoding does to an audio file will see why this is a poor workflow. A typical use for this program would be to cut out a piece of an MP3 recording that you do not like. On the main panel, click Add Media Files and choose the audio files you'd like to join together. There's almost no restriction regarding the format of the output files, so you can use the program as an MP3 combiner, add WMA files, or even more esoteric formats like FLAC or APE. When you add audio files to the program, they will be automatically joined together and placed on the Timeline one after another, in the order in which they were added. You can change the order by simply moving the files around on the Timeline.

Many people may wish to merge audio files for personal use, especially to remove unwanted parts of the original audio file and then merge it with other audio. To solve such problems, this post introduces the six best audio mergers. MiniTool Movie Maker, released by MiniTool, should be your first choice. By default, Freemake Audio Joiner merges the tracks without any gap. If you want to add a small pause between the parts, you can import a silent file and place it between the songs. You can easily create such a file with our software (see the instructions here), Audacity or VLC. Add any number of MP3 files and download the merged MP3 in a single click. Trim MP3 tracks and other audio files online without installing complicated software on your system. I have a lot of MP3 files which I need to merge into one long file. It is often necessary to merge MP3s online into one recording. The Download MP3 button downloads the project as an MP3 file.

The problem with Ernesto's recommendation is that it requires decoding your MP3 into a temporary WAV format for editing, and then recompressing to MP3 once you save the edited version (even if you don't see MP3 options when saving, that IS what is going on). Learn how to merge songs online with Bear File Converter: Step 1. Click "Add" or enter the URL of the MP3 file and click "Add File". There may also be an option to drag and drop the files onto the program. Step 2. Hit the "Merge" button and then the "Download" button once the merge is successful.

Tip: This document is for users who are looking for ways of combining multiple media files, such as video formats (AVI, MPEG, WMV) and audio formats (MP3, OGG, WAV, and many others), into one big file. The Audio Joiner web service is a good and simple way to merge audio files of various formats. The whole process is very simple: add the files, merge the audio, and download the output audio. Suggestion: Drag and drop the folder where the MP3 files are into Audio Joiner if you don't want to add tracks one by one. Be aware that these files are arranged according to the play order. So if you need to swap the position of two MP3s, simply drag them to the desired position. If you'd like to merge songs, use the Add button to find the ones you want to join. Once they're lined up in the window, check the ones you want to join together, and hit Start. The main window offers options for help and how to use the software, but both open a fairly rudimentary help page on the developer's website.
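For anyone who would rather merge files from a script than from one of these GUI tools, here is a small sketch using the third-party pydub library (which requires ffmpeg; the file names are placeholders). Note that, as pointed out above, this decodes and re-encodes the audio, so it is a lossy round trip rather than a lossless join:

from pydub import AudioSegment

parts = ["track1.mp3", "track2.mp3", "track3.mp3"]   # hypothetical input files

merged = AudioSegment.empty()
for path in parts:
    merged += AudioSegment.from_mp3(path)            # decode each MP3

merged.export("merged.mp3", format="mp3", bitrate="192k")  # re-encode the combined result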
<?php namespace FireTracker; use Unirest\Request; /** * */ class Tracker { public static $_FIRE_LEVEL_DEBUG = 100; public static $_FIRE_LEVEL_INFO = 200; public static $_FIRE_LEVEL_NOTICE = 250; public static $_FIRE_LEVEL_WARNING = 300; public static $_FIRE_LEVEL_ERROR = 400; public static $_FIRE_LEVEL_CRITICAL = 500; public static $_FIRE_LEVEL_ALERT = 550; public static $_FIRE_LEVEL_EMERGENCY = 600; public static $_FIRE_ENV_TEST = 'TEST'; public static $_FIRE_ENV_DEVELOPMENT = 'DEVELOPMENT'; public static $_FIRE_ENV_STAGING = 'STAGING'; public static $_FIRE_ENV_PRODUCTION = 'PRODUCTION'; public static $_FIRE_ENV_VAR_NAME = "_FIRE_ENV"; public static $_FIRE_ENDPOINT = "https://api.firetracker.io/"; public static $_FIRE_USER_SECRET = "_FIRE_USER_SECRET"; public static $_FIRE_USER_KEY = "_FIRE_USER_KEY"; public static $_FIRE_INPUT_LEVEL="fire_level"; public static $_FIRE_INPUT_MESSAGE="fire_message"; public static $_FIRE_INPUT_CONTEXT="fire_context"; public static $_FIRE_INPUT_ENV="fire_env"; public static $_FIRE_INPUT_KEY="fire_key"; public static $_FIRE_INPUT_SECRET="fire_secret"; public static $_FIRE_INPUT_HOST="fire_host"; public static $_FIRE_INPUT_LOCAL_IP="fire_local_ip"; public static $_FIRE_INPUT_REMOTE_IP="fire_remote_ip"; public static $_FIRE_INPUT_LINE="fire_line"; public static $_FIRE_INPUT_FILE="fire_file"; public static $_FIRE_INPUT_DIR="fire_dir"; public static $_FIRE_INPUT_CLASS="fire_class"; public static $_FIRE_INPUT_FUNCTION="fire_function"; public static $_FIRE_INPUT_METHOD="fire_method"; public static $_FIRE_INPUT_TRAIT="fire_trait"; public static $_FIRE_INPUT_NAMESPACE="fire_namespace"; protected static $_FIRE_HASH_ALGO="sha256"; /** * Fire Test Function */ public static function test() { echo "Tracker Fired"; $testQuery = array( self::$_FIRE_INPUT_LEVEL => "LEVEL", self::$_FIRE_INPUT_MESSAGE => "MESSAGE", self::$_FIRE_INPUT_CONTEXT => "CONTEXT", self::$_FIRE_INPUT_ENV => "ENV", self::$_FIRE_INPUT_KEY => "KEY", self::$_FIRE_INPUT_HOST=>php_uname(), self::$_FIRE_INPUT_LOCAL_IP=>$_SERVER["SERVER_ADDR"], self::$_FIRE_INPUT_REMOTE_IP=>$_SERVER["REMOTE_ADDR"] ); $testQuery[self::$_FIRE_INPUT_SECRET]=self::fireHash($testQuery); return $testQuery; } /** * Main function to fire logs * @param string $level * @param string $message * @param string $context * @return mixed */ public static function Fire($level, $message , $context ) { $headers = array('Accept' => 'application/json'); $query=self::_fire($level,$message, $context); return Request::post(self::$_FIRE_ENDPOINT, $headers, $query); } /** * Fire Log With Exception * @param string $level * @param Exception $x */ public static function FireException($level,\Exception $x){ $headers = array('Accept' => 'application/json'); $query=self::_fire($level,$x->getMessage(),$x->__toString()); return Request::post(self::$_FIRE_ENDPOINT, $headers, $query); } /** * @param string $message * @param string $context */ public static function FireEmergency($message, $context ) { self::Fire(self::$_FIRE_LEVEL_EMERGENCY, $message, $context); } /** * @param Exception $x */ public static function FireEmergencyException(\Exception $x) { self::FireException(self::$_FIRE_LEVEL_EMERGENCY, $x); } /** * @param string $message * @param string $context * @return mixed */ public static function FireDebug($message, $context ) { return self::Fire(self::$_FIRE_LEVEL_DEBUG, $message, $context); } /** * @param Exception $x */ public static function FireDebugException(\Exception $x) { return self::FireException(self::$_FIRE_LEVEL_DEBUG, $x); } /** * @param 
string $message * @param string $context * @return mixed */ public static function FireInfo($message, $context) { return self::Fire(self::$_FIRE_LEVEL_INFO, $message, $context); } /** * @param Exception $x */ public static function FireInfoException(\Exception $x) { return self::FireException(self::$_FIRE_LEVEL_INFO, $x); } /** * @param string $message * @param string $context * @return mixed */ public static function FireNotice($message , $context) { return self::Fire(self::$_FIRE_LEVEL_NOTICE, $message, $context); } /** * @param Exception $x */ public static function FireNoticeException(\Exception $x) { return self::FireException(self::$_FIRE_LEVEL_NOTICE, $x); } /** * @param string $message * @param string $context * @return mixed */ public static function FireWarning($message , $context) { return self::Fire(self::$_FIRE_LEVEL_WARNING, $message, $context); } /** * @param Exception $x */ public static function FireWarningException(\Exception $x) { return self::FireException(self::$_FIRE_LEVEL_WARNING, $x); } /** * @param string $message * @param string $context * @return mixed */ public static function FireError($message, $context ) { return self::Fire(self::$_FIRE_LEVEL_ERROR, $message, $context); } /** * @param Exception $x */ public static function FireErrorException(\Exception $x) { return self::FireException(self::$_FIRE_LEVEL_ERROR, $x); } /** * @param string $message * @param string $context * @return mixed */ public static function FireAlert($message , $context ) { return self::Fire(self::$_FIRE_LEVEL_ALERT, $message, $context); } /** * @param Exception $x */ public static function FireAlertException(\Exception $x) { return self::FireException(self::$_FIRE_LEVEL_ALERT, $x); } /** * @param string $message * @param string $context * @return mixed */ public static function FireCritical($message , $context ) { return self::Fire(self::$_FIRE_LEVEL_CRITICAL, $message, $context); } /** * @param Exception $x */ public static function FireCriticalException(\Exception $x) { return self::FireException(self::$_FIRE_LEVEL_CRITICAL, $x); } /** * @param array $query */ protected static function fireHash($query){ $userSecret = (isset($GLOBALS[self::$_FIRE_USER_SECRET])) ? $GLOBALS[self::$_FIRE_USER_SECRET] : null; $response=false; if($userSecret==null){ return $response; } $data=""; $data.=(isset($query[self::$_FIRE_INPUT_LEVEL]))?$query[self::$_FIRE_INPUT_LEVEL]:"$"; $data.=(isset($query[self::$_FIRE_INPUT_MESSAGE]))?$query[self::$_FIRE_INPUT_MESSAGE]:"$"; $data.=(isset($query[self::$_FIRE_INPUT_CONTEXT]))?$query[self::$_FIRE_INPUT_CONTEXT]:"$"; $data.=(isset($query[self::$_FIRE_INPUT_ENV]))?$query[self::$_FIRE_INPUT_ENV]:"$"; $data.=(isset($query[self::$_FIRE_INPUT_KEY]))?$query[self::$_FIRE_INPUT_KEY]:"$"; $data.=(isset($query[self::$_FIRE_INPUT_HOST]))?$query[self::$_FIRE_INPUT_HOST]:"$"; $data.=(isset($query[self::$_FIRE_INPUT_LOCAL_IP]))?$query[self::$_FIRE_INPUT_LOCAL_IP]:"$"; $data.=(isset($query[self::$_FIRE_INPUT_REMOTE_IP]))?$query[self::$_FIRE_INPUT_REMOTE_IP]:"$"; $data.=$userSecret; $response=hash(self::$_FIRE_HASH_ALGO,$data); return $response; } /** * @param string $level * @param string $message * @param string $context */ protected static function _fire($level,$message,$context){ //api auth infos $userSecret = (isset($GLOBALS[self::$_FIRE_USER_SECRET])) ? $GLOBALS[self::$_FIRE_USER_SECRET] : null; $userKey = (isset($GLOBALS[self::$_FIRE_USER_KEY])) ? 
$GLOBALS[self::$_FIRE_USER_KEY] : null; //check auth if ($userKey == null || $userSecret == null) { return false; } //default env value DEVELOPMENT $env = self::$_FIRE_ENV_DEVELOPMENT; //array to check sended value $targetEnv = array( self::$_FIRE_ENV_DEVELOPMENT, self::$_FIRE_ENV_TEST, self::$_FIRE_ENV_PRODUCTION, self::$_FIRE_ENV_STAGING, ); // check env var value if (isset($GLOBALS[self::$_FIRE_ENV_VAR_NAME])) { if (in_array($GLOBALS[self::$_FIRE_ENV_VAR_NAME], $targetEnv)) { $env = $GLOBALS[self::$_FIRE_ENV_VAR_NAME]; } } $query = array( self::$_FIRE_INPUT_LEVEL => $level, self::$_FIRE_INPUT_MESSAGE => $message, self::$_FIRE_INPUT_CONTEXT => $context, self::$_FIRE_INPUT_ENV => $env, self::$_FIRE_INPUT_KEY => $userKey, self::$_FIRE_INPUT_HOST=>php_uname(), self::$_FIRE_INPUT_LOCAL_IP=>$_SERVER["SERVER_ADDR"], self::$_FIRE_INPUT_REMOTE_IP=>$_SERVER["REMOTE_ADDR"], ); return $query; } }
Optimize WordPress Database by PhpMyAdmin

This post describes how to "Optimize WordPress Database by PhpMyAdmin" step by step. Nowadays, many people who do not come from a technical background create and maintain blogs on WordPress. There are many plugins for this, but the rule of thumb is to avoid a plugin if you can do the job directly. That way you know what you are doing, whereas with a plugin you are completely unaware of its internal code, and some plugins may even damage your database. This article will help those who want to keep their database fast and optimized.

As time goes on, unnecessary data accumulates in the WordPress database and should be removed. So first, delete this unnecessary data from your database with the following procedure:

Step 1: Log in to PhpMyAdmin.
Step 2: Select your WordPress database from the list on the left.
Step 3: Click the SQL tab at the top.
Step 4: Copy the following SQL into the SQL box and click the Go button.

delete from wp_comments where comment_post_ID not in ( select ID from wp_posts);
delete from wp_commentmeta where comment_id not in ( select comment_id from wp_comments);
delete from wp_postmeta where post_id not in (select id from wp_posts);
delete from wp_term_relationships where object_id not in ( select id from wp_posts);
delete from wp_term_relationships where term_taxonomy_id not in ( select term_taxonomy_id from wp_term_taxonomy);
delete from wp_usermeta where user_id not in ( select id from wp_users);
optimize table wp_comments;
optimize table wp_commentmeta;
optimize table wp_postmeta;
optimize table wp_term_relationships;
optimize table wp_users;

You can run this operation once a month; it deletes unnecessary data from your database and then optimizes the related tables.

Now you should optimize the whole database. If you don't want to delete any data but would like to optimize the whole database only, follow these steps:
- Select your database through Step 1 and Step 2 mentioned above.
- Check all tables.
- Just to the right of "Check all", select "Optimize table".
- Check the output after clicking "Optimize table".

Optimize all tables with a single command

If you have command-prompt access on Linux/Unix, you can do it with a single command:

[root@localhost ~]# mysqlcheck -o wp_test -u test -p

Optimization is needed at least once a month. There is a common principle in databases: "the fewer rows a table has, the quicker the access." Since the WordPress wp_posts and wp_comments tables may contain a huge number of rows, you may need to delete some old posts and comments to speed up your website. After deletion, the indexes need to be rebuilt to take advantage of the reduced row count; the OPTIMIZE command actually does this.

The same procedure can be used to repair a WordPress database with PhpMyAdmin.
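If you would rather schedule this cleanup than run it by hand each month, here is a minimal sketch using the mysql-connector-python package (the credentials, database name, and wp_ table prefix are placeholders matching the example above):

import mysql.connector

conn = mysql.connector.connect(host="localhost", user="test",
                               password="secret", database="wp_test")
cur = conn.cursor()

# A couple of the cleanup statements from the post; add the rest as needed.
cur.execute("delete from wp_commentmeta where comment_id not in "
            "(select comment_id from wp_comments)")
cur.execute("delete from wp_postmeta where post_id not in (select id from wp_posts)")
conn.commit()

# OPTIMIZE returns a result set per table, which must be consumed.
for table in ("wp_comments", "wp_commentmeta", "wp_postmeta",
              "wp_term_relationships", "wp_users"):
    cur.execute(f"optimize table {table}")
    cur.fetchall()

cur.close()
conn.close()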
The 4-year engineering course was not just a study of concepts and theories, but a series of practical applications in the form of mini projects. Every semester brought challenging new subjects that included practical implementation in the laboratory. In the 3rd year of engineering, I worked on an online auction web development project, investing my time particularly in areas such as web form validation using jQuery and database operations using MySQL and PHP. In the next semester, my project team was inclined to work in a new domain, and Android development seemed quite intriguing, so my team and I decided to develop an Android food application. The app comprised basic functionality such as viewing recipes and watching relevant videos; I worked primarily on the APIs, as they served as the primary source of data for the application.

The final year of the degree course proved to be a turning point in my life when I came across the new domain of Artificial Intelligence. This domain covers a wide range of subjects such as machine learning and data science. The machine learning subject covered a plethora of algorithms for identifying patterns in data, and it aroused a keen interest in me to implement a project based on one of these algorithms. With the approval of my project guide and the head of the department, I was assigned to develop a project using the machine learning algorithm known as Naive Bayes. The project was a news classification system that classified articles supplied by users under the labels "news", "sports", "gadgets" and "education". The system comprised a crawler for scraping data to build a dataset for the algorithm. About 80% of the crawled data served as the training set, while the remaining 20% was used for testing the model. The scikit-learn library was then used to implement the Naive Bayes model and train it on the data. The initial accuracy of the model was 0.62; however, the accuracy improved each time the model misclassified an article, since the correct label for that article was taken as input from the user and appended to the dataset. This strategy not only helped improve the accuracy but also expanded the dataset. The accomplishment of this project kindled an enthusiasm in me to explore the subject in more detail than I had anticipated, and pursuing a master's degree in this domain seemed the best choice for chasing my passion.

Other than academics, I have also been involved in the college's cultural activities: I was the Co-Head of the Creative Committee during the college's cultural fest, Milestone 2018.

Completing a bachelor's degree in computer science gave me an overview and basic understanding of all the possible domains and concepts. However, the knowledge I acquired is not sufficient to solve real-world problems, and to work as a domain-specific employee in a company, more detailed knowledge of the concepts is essential, along with good work experience. A master's degree in computer science will not only deepen my knowledge but, through the internships included in the academic schedule, will also give me good exposure to industry.

With the degree, the knowledge, and the work experience that I gain through the 2-year course, I would not only have the opportunity to work as a software developer at product-based companies, but I would also be technically prepared to handle real-world projects. After the 2-year MS course in computer science, I wish to work at companies like Google, Facebook or Qubit as a software developer for AI or ML. Working at these top companies will give me adequate exposure to the different software projects under consideration and to the ideas of the many software engineers working in the same domain. After working at such companies, I feel my outlook on work will change entirely as I gain the confidence to work comfortably at any company; securing a job at a multinational company after returning to my home country would then be straightforward.

The New Jersey Institute of Technology is a widely known university that offers a variety of postgraduate courses. It is well ranked among national-level institutes, which is an essential aspect to consider while selecting a university. The Ying Wu College of Computing within this institute provides a wide range of elective courses for the MS in computer science that satisfy my criteria, such as image processing and pattern recognition, and data mining. Moreover, it has an outstanding faculty, who not only help students learn the concepts better but also assist them with their projects. As an international student, the experience of learning at one of the best colleges, with state-of-the-art laboratories, would be invaluable to the growth of my knowledge. An admit from The New Jersey Institute of Technology will encourage me to pursue my passion, not only through the knowledge I gain from the courses but also through the projects I build.
using CommandLine; namespace Tasker.Options { public class CommandLineOptions { [Verb("get", HelpText = "Get object (tasks, groups, notes)")] public class GetOptions { [Value(0, HelpText = "Object type (task, group, note)")] public string ObjectType { get; set; } [Value(1, HelpText = "Object name or id")] public string ObjectName { get; set; } [Option('a', "all", HelpText = "Print all objects, even the closed ones")] public bool ShouldPrintAll { get; set; } [Option('o', "open", HelpText = "Print all open objects, not only default")] public bool ShouldPrintNotOnlyDefault { get; set; } [Option('d', "detail", HelpText = "Print more information about each object")] public bool IsDetailed { get; set; } [Option('s', "status", HelpText = "Print all tasks/groups in given status")] public string Status { get; set; } [Option('t', "days", HelpText = "Print all tasks from the last given days")] public int Days { get; set; } } [Verb("create", HelpText = "Create objects (tasks, groups, notes...)")] public class CreateOptions { [Value(0, HelpText = "Object type (task, group, note)")] public string ObjectType { get; set; } [Value(1, HelpText = "Object name or id")] public string ObjectName { get; set; } [Option('m', "message", HelpText = "Description message about the object")] public string Description { get; set; } } [Verb("remove", HelpText = "Removes object (tasks, groups)")] public class RemoveOptions { [Value(0, HelpText = "Object type to remove")] public string ObjectType { get; set; } [Value(1, HelpText = "Object id to remove")] public string ObjectId { get; set; } } [Verb("close", HelpText = "Close object (task - marks task status as closed)")] public class CloseOptions { [Value(0, HelpText = "Object to close (task)")] public string ObjectType { get; set; } [Value(1, HelpText = "Id of task to close")] public string ObjectId { get; set; } [Option('m', "message", HelpText = "Reason message for closing the object")] public string Reason { get; set; } } [Verb("move", HelpText = "Moves a task to a given group")] public class MoveTaskOptions { [Value(0, HelpText = "Object to move (task)")] public string ObjectType { get; set; } [Value(1, HelpText = "Task id to move")] public string ObjectId { get; set; } [Value(1, HelpText = "Task group to move the task to")] public string TaskGroup { get; set; } } [Verb("reopen", HelpText = "Reopen a closed task")] public class ReOpenTaskOptions { [Value(0, HelpText = "Object to re-open (task)")] public string ObjectType { get; set; } [Value(1, HelpText = "Task id to open again")] public string ObjectId { get; set; } [Option('m', "message", HelpText = "Reason message for reopening the task")] public string Reason { get; set; } } [Verb("work", HelpText = "Mark task as on work")] public class OnWorkTaskOptions { [Value(0, HelpText = "Object to set status on-work (task)")] public string ObjectType { get; set; } [Value(1, HelpText = "Task id to mark as on work")] public string ObjectId { get; set; } [Option('m', "message", HelpText = "Reason message for working the task")] public string Reason { get; set; } } [Verb("open", HelpText = "Open note with the default text editor")] public class OpenNoteOptions { [Value(0, HelpText = "Object to open (note, general note)")] public string ObjectType { get; set; } [Value(0, HelpText = "Note subject or task id to open the note")] public string NoteName { get; set; } } } }
Black&White Project/Kiwix for Sugar
- Port of Kiwix to Sugar (100% finished, directly usable without major bugs)
- Re-design of the user interface for perfect integration with the Sugar guidelines
- Compatibility with the Sugar journal
- Documentation in the wiki of the Sugar foundation
- Delivery of an .xo file
- Integration of Kiwix as a Sugar activity

Dates & Duration
- Start Date: September 5th
- Estimated Delivery Date: September 30th

Sugar is a special interface mainly used by the OLPC Project. It is designed primarily for educational purposes, targeting children with no computer experience, from primary school to secondary school. Kiwix is a complete desktop application with many features and capabilities. In order to integrate Kiwix properly within Sugar, we made the following UI choices:

The toolbar is the main out-of-context interface for the user. An icon on the toolbar is associated either with an action or with a sub-toolbar that it displays. These cannot be chained (only one sub-toolbar level).
- Display (quite common to show this as a sub-toolbar)
- Text Size Up
- Text Size Down
- Full Screen (remove toolbar completely)
- Search in page
- Toggle bookmarks list
- Bookmark this page
- Options (drop-down menu)
- Toggle Content Manager
- Open file
- Recently opened
- Random Article
- Save as...
- Purge history

While Sugar supports tabs, we believe it is not appropriate to use them here. We eventually decided to include them, but without highlighting them.

The following features are not accessible from Sugar:
- Report a bug
- Request a feature
- Tools/Integrity check
- Status bar
- Language switch
- Skin switch
- Select All
- Preferences? (Currently it's in, but I'm not sure it should be.)
- Full Bookmarks Manager (a simplified version exists though)

- Bookmark list
- Journal entry for each page viewed
- Sugar activity + .xo package
- Neighborhood broadcast ???? not sure about that yet.

Below is a list of possible further development regarding Sugar integration:
|Native Sugar UI |Requires reimplementing most UI code (JS) in Python. Very unlikely that we'll properly maintain that over time.
|100% look-alike UI in Gecko skin |We can do something that's 70% like Sugar, but the remainder is very hard to achieve, if possible at all.
|Journal Keep feature |Upon clicking Keep, record the session, plus the ability to reopen a session. The session will contain:
|Journal activity record |At quit, open the Sugar activity entry dialog to add an entry to the journal. The journal is just a log of what users did.
|Bookmarks |Ability to mark pages, display a list of marked pages, and click on them. Bookmarks should persist over sessions.
I've been asked to check whether or not a CUCM cluster can be configured robustly for db replication purposes. For example, suppose that two subscribers on remote sites were disconnected from the rest of the network. After a few weeks (or months), this connection is restored. I'd like for the db replication to be performed automatically and as fast as possible. 1) Can this be fine tuned? 2) Are the intervals and limitations of dbreplication documented anywhere? 3) Can a site which is down for several weeks or months automatically sync with the publisher and other subscribers? I mean theoretically there is no max period a subsciber can be isolated from publisher, and the isolated sub will work with its local DB for the duration of which its isolated. CUCM clusters are designed ti have a continuous connection between eachother, with very predictable round trip times. what is it you are trying to achieve? If the dbrepl queue fills the replication agreement will be dropped and require a reset to recover replication. This is not configurable and as Dennis indicated, the product is designed/expected to have continuous connectivity amongst all cluster nodes. If a site may plausibly become isolated on a regular or prolonged basis it should have a dedicated local CUCM cluster. I'm interested in providing a unified database via a publisher + subscribers deployment rather than a publisher + several CMEs deployment. The topology is a standard hub and spoke one where the publisher is at the hub. The thing is that the spokes may be disconnected for long stretches of time, and when they regain connectivity I'd like them to reestablish replication with the other nodes. Configuring a CME for each spoke, with dial-peers towards the publisher and other spokes, is a lot more administrative overhead and could likely involve more human error. If there is no practical method of doing this with a CUCM cluster, I'm wondering what would be the suggested architecture to ease administrative burden: 1) CMEs at the spokes, or 2) Independant publishers at each spoke, with ILS+GDPR to publish directory numbers CUCM clusters are definitely not intended to function as you wish they did. It’s difficult to make significant design recommendations based on the limited information in the forums; however, my inclination would be toward local CUCM instances at each site instead of CME. This would provide a common platform to provision, offer consistent feature functionality, monitor, and maintain administrative competency in. CUCM has always struggled to scale down to small sites because of the x86 server requirements, though you didn’t say how small these sites are. A specs-based deployment model may relieve some of that cost though. As for ILS/GDPR, again that feature assumes reliable site connectivity to replicate. You will want to test tolerance to connectivity loss; however, the PSTN fallback method relies on AAR which itself requires a stable WAN connectivity and an active denial by CAC to reroute. That won’t happen if the site is isolated/offline. You will likely need to rely on +E.164 globalized numbering plans with classic Route Patterns & Lists that provide local egress, perhaps via a LRG, if the inter-cluster trunk is down.
PAM (Pluggable Authentication Module) USB Authentication In this article we are going to look at setting up a USB device to perform authentication using the pam_usb PAM module. pam_usb provides authentication for Linux using standard USB devices and other type of media i.e. (SD, MMC, etc). We will be using OpenSUSE 10.3. The reason for this is pam_usb has been developed for the latest version of HAL (hal-0.5.9) thus not being able to work on older versions of HAL. In this article we will compile the pam_usb module, which can be downloaded from the website. The installation of pam_usb requires four dependent packages. The dependencies are listed in Table 1 and can be installed using the “yast2 sw_single” command. |libxml2-devel||This package includes libraries and files mandatory for development.| |hal-devel||Developer package for HAL (Hardware Abstraction Layer).| |pam-devel||This package includes libraries and files for PAM development.| |dbus-1-python||This package provides python bindings for D-Bus.| Table 1: “pam_usb” dependencies. Once you have installed each package listed in Table 1 you can begin the installation of pam_usb. The “pam_usb-0.4.2.tar.gz” archive first needs to be decompressed as shown in Figure 1. linux-uxp3:/usr/src # tar zvxf pam_usb-0.4.2.tar.gz Figure 1: Decompressing the “pam_usb” archive. Once you have decompressed the pam_usb archive you can begin the installation by issuing the “make” command followed by the “make install” command as shown in Figure 1.1. linux-uxp3:/usr/src/pam_usb-0.4.2 # make && make install && make clean Figure 1.1: Installing “pam_usb”. In Figure 1.1 you might be wondering what the ampersands (&) are for and also the “make clean” command. The ampersands allow each command to be executed only if the previous command was successful and the “make clean” command deletes any files that were left behind during the compiling stage. Configuring pam_usb is very simple, simply place your USB device into your machine and use the “pamusb-conf” command to add your device to the “pam_usb” configuration file, as shown in Figure 2. linux-uxp3:~ # pamusb-conf --add-device "Damian Myerscough" Please select the device you wish to add. * Using "Kingston DataTraveler II+ (Kingston_DataTraveler_II+_5B770D9200D1-0:0)" (only option) Which volume would you like to use for storing data ? * Using "/dev/sdc1 (UUID: 47A8-7EC9)" (only option) Name : Damian Vendor : Kingston Model : DataTraveler II+ Serial : Kingston_DataTraveler_II+_5B770D9200D1-0:0 UUID : 47A8-7EC9 Save to /etc/pamusb.conf ? [Y/n] y Figure 2: Preparing the USB stick for authentication. Once you have successfully added your USB device to the pam_usb configuration file you can assign a user to the USB device using the “pamusb-conf –add-user damian” command as shown in Figure 2.1. linux-uxp3:~ # pamusb-conf --add-user damian Which device would you like to use for authentication ? * Using "Damian Myerscough" (only option) User : damian Device : Damian Myerscough Save to /etc/pamusb.conf ? [Y/n] y Done. Figure 2.1: Assigning the user to the USB stick. Once you have assigned a user to the USB device you will need to edit the “/etc/pam.d/gdm” configuration file to enable the pam_usb module. The directives that you need to add to the “gdm” configuration file are shown in Figure 2.2. auth sufficient pam_usb.so Figure 2.2: “/etc/pam.d/gdm” configuration file. 
Once you have modified the “/etc/pam.d/gdm” configuration file you can use the “pamusb-check” command to see if your device is capable of being authenticated as shown in Figure 3. Linux-uxp3:~ # pamusb-check damian * Authentication request for user "damian" (pamusb-check) * Device "Damian Myerscough" is connected (good). * Performing one time pad verification... * Regenerating new pads... * Access granted. Figure 3: Checking “pam_usb” authentication. Once you have run the “pamusb-check” command and have seen that access is granted you can now logout of your machine and log back into your machine using the USB device. The pam_usb module is an excellent PAM module as it tightens security thus making it hard for malicious users to attack your account. The pam_usb also supports one time pad passwords which can be configured to expire after a certain period of time or change after each login. I would recommend visiting the pam_usb website to find out more.
package gotermBox import ( "errors" "fmt" "io" "sync" "time" "github.com/antongulenko/golib" ) // Assert that CliLogBoxTask implements the golib.Task interface. var _ golib.Task = &CliLogBoxTask{} // CliLogBoxTask implements the golib.Task interface by creating a CliLogBox, // capturing all log entries, and regularly updating the screen in a separate goroutine. type CliLogBoxTask struct { CliLogBox updateTask *golib.LoopTask updateTrigger chan interface{} // UpdateInterval configures the wait-period between screen-refresh cycles. UpdateInterval time.Duration // MinUpdateInterval can be set to >0 to reduce the screen-refresh frequency // even if TriggerUpdate() is called more frequently than every MinUpdateInterval. MinUpdateInterval time.Duration // Update is called on every refresh cycle to fill the screen with content. // See also CliLogBox.Update(). Update func(out io.Writer, width int) error } // Init initializes the receiver and starts collecting log messages. // It should be called as early as possible in order to not miss any log messages. // If any log message is fire before calling this, it will not be displayed in the log // box, and the log box will overwrite the log message on the console. func (t *CliLogBoxTask) Init() { t.updateTrigger = make(chan interface{}, 1) t.CliLogBox.Init() t.RegisterMessageHooks() // Try to directly refresh the screen every time a new message comes in t.PushMessageHook = func(msg string) { t.TriggerUpdate() } } // String implements the golib.Task interface. func (t *CliLogBoxTask) String() string { return fmt.Sprintf("CliLogBoxTask (updated every %v)", t.UpdateInterval) } // Start implements the golib.Task interface. It intercepts the default logger // and starts a looping goroutine for refreshing the screen content. When // the task is stopped, it will automatically restore the operation of the default logger. func (t *CliLogBoxTask) Start(wg *sync.WaitGroup) golib.StopChan { if t.Update == nil { return golib.NewStoppedChan(errors.New("CliLogBoxTask.Update cannot be nil")) } t.InterceptLoggers() t.updateTask = &golib.LoopTask{ Description: "CliLogBoxTask", StopHook: func() { err := t.updateBox() // One last screen refresh to make sure no messages get lost. t.RestoreLoggers() golib.Printerr(err) }, Loop: func(stop golib.StopChan) (err error) { err = t.updateBox() if err == nil { // Wait between t.MinUpdateInterval and t.UpdateInterval, // but wake up from stop.WaitChan() and t.updateTrigger. sleepStart := time.Now() select { case <-time.After(t.UpdateInterval): case <-stop.WaitChan(): case <-t.updateTrigger: } sleepTime := time.Now().Sub(sleepStart) if diff := t.MinUpdateInterval - sleepTime; diff > 0 { select { // Don't wait for t.updateTrigger here case <-time.After(diff): case <-stop.WaitChan(): } } } return }, } return t.updateTask.Start(wg) } // Stop stops the goroutine performing screen refresh cycles, and restores the operation of // the default logger. func (t *CliLogBoxTask) Stop() { t.updateTask.Stop() } // Update triggers an immediate screen update. func (t *CliLogBoxTask) TriggerUpdate() { select { case t.updateTrigger <- nil: default: } } func (t *CliLogBoxTask) updateBox() (err error) { t.CliLogBox.Update(func(out io.Writer, width int) { err = t.Update(out, width) }) return }
0.8.2 fails to compile with rustc 1.48.0 due to unknown codegen option Compiling env_logger v0.8.2 error: unknown codegen option: `embed-bitcode` error: could not compile `env_logger` $ rustc --version rustc 1.48.0 (7eac88abb 2020-11-16) $ cargo --version cargo 1.48.0 (65cbdd2dc 2020-10-14) Maybe some more information: I do not crosscompile anything. I compile a crate I developed on another system on a centos7 machine. I did a cargo clean run before retrying cargo build but it did not help, same error. Also, I just cloned this repository and it builds fine. But not if I have env_logger as a dependency. Not sure what is happening here... env_logger doesn't have a build script or anything else that I can see that could cause this. The string bitcode doesn't appear anywhere inside this repository. I would recommend you search for help about this on users.rust-lang.org, Discord, reddit or some other place for general help and advice about Rust. Thanks for the fast reply. Apparently the compiler fails to understand an argument it gets ... don't know why tho. Anyways, I'll report to users.rust-lang.org! Thank! Hey @matthiasbeyer Where you able to solve this issue? I am also hitting this error message, whenever I try to compile anything that has env_logger in its dependencies.. Hey @matthiasbeyer Where you able to solve this issue? I am also hitting this error message, whenever I try to compile anything that has env_logger in its dependencies.. Actually I'm not sure what solved that issue for me, sorry. Make sure to use an up-to-date version of the compiler and also make sure you don't mess up different versions in one installation. Happened to me before and is most certainly not the way to go! :laughing: Actually I'm not sure what solved that issue for me, sorry. Make sure to use an up-to-date version of the compiler and also make sure you don't mess up different versions in one installation. Happened to me before and is most certainly not the way to go! :laughing: I'm hitting the same issue in a Docker build but with some more interesting output: Compiling env_logger v0.8.2 info: syncing channel updates for '1.41.0-x86_64-unknown-linux-gnu' info: latest update on 2020-01-30, rust version 1.41.0 (5e1a79984 2020-01-27) info: downloading component 'cargo' info: downloading component 'clippy' info: downloading component 'rust-docs' info: downloading component 'rust-std' info: downloading component 'rustc' info: downloading component 'rustfmt' info: installing component 'cargo' info: using up to 500.0 MiB of RAM to unpack components info: installing component 'clippy' info: installing component 'rust-docs' info: installing component 'rust-std' info: installing component 'rustc' info: installing component 'rustfmt' error: unknown codegen option: `embed-bitcode` error: could not compile `env_logger` 1.41.0 matches the rust-toolchain checked into env_logger, I suspect excluding this file from the crates.io release will fix it. Yes, assuming you have a stable version of the Rust toolchain installed, this command reproduces the issue: eval $(rustup which --toolchain stable cargo) build Honestly this is quite likely an upstream bug in Cargo -- but excluding rust-toolchain should fix it. Removing the rust-toolchain from the crates.io release should be easy enough and I don't see a reason not to do that 👍🏼 Done in eed165155261fcb31f6648485462b9ee3c0ab670. I'll try to do a release in the coming days, feel free to ping me if I still haven't gotten to it next week. @jplatte been a week so gentle ping, thanks! 
:) Release it out. Latest master (commit 16d982ed979bb9361048fc25a5589ac5e06daf17) also fails to build for me. Command: cargo build Error: error: the 'cargo' binary, normally provided by the 'cargo' component, is not applicable to the '1.41.0-x86_64-unknown-linux-gnu' toolchain Command: eval $(rustup which --toolchain stable cargo) build Error: error: unknown codegen option: embed-bitcode @Vagelis-Prokopiou that sounds like the cargo but also triggers on git dependencies which aren't affected by the previous fix to this issue. I don't really know what to do about that short of removing rust-toolchain. Oh, actually you're building directly, right? That's bad if that fails in this way.. My platform is Debian Linux (Linux debian 4.19.0-16-amd64 #1 SMP Debian 4.19.181-1 (2021-03-19) x86_64 GNU/Linux); Cargo: 1.51.0 (43b129a20 2021-03-16) Rustc: 1.51.0 (2fd73fabe 2021-03-23) Commands that lead to failure: git clone https://github.com/env-logger-rs/env_logger.git; cd env_logger; cargo build; :heavy_check_mark: Deleting the rust-toolchain file, results in successful build. I have no ~/.cargo/config at all. I don't know what lld is :-) Can you run cargo build -vv 2> output.txt and upload the logs to Gist or some other service? The paths in that file will contain your username/home directory path, just a heads up :) The output.txt contains nothing more that the error that I am getting in the terminal. cat output.txt error: the 'cargo' binary, normally provided by the 'cargo' component, is not applicable to the '1.41.0-x86_64-unknown-linux-gnu' toolchain When the errors happens, no target folder is created at all. The following is the contents of target/.rustc_info.json after the successful build (when deleting the rust-toolchain file): { "rustc_fingerprint":15281189911875476545, "outputs":{ "4476964694761187371":{ "success":true, "status":"", "code":0, "stdout":"___\nlib___.rlib\nlib___.so\nlib___.so\nlib___.a\nlib___.so\n/home/va/.rustup/toolchains/stable-x86_64-unknown-linux-gnu\ndebug_assertions\nproc_macro\ntarget_arch=\"x86_64\"\ntarget_endian=\"little\"\ntarget_env=\"gnu\"\ntarget_family=\"unix\"\ntarget_feature=\"fxsr\"\ntarget_feature=\"sse\"\ntarget_feature=\"sse2\"\ntarget_os=\"linux\"\ntarget_pointer_width=\"64\"\ntarget_vendor=\"unknown\"\nunix\n", "stderr":"" }, "1164083562126845933":{ "success":true, "status":"", "code":0, "stdout":"rustc 1.51.0 (2fd73fabe 2021-03-23)\nbinary: rustc\ncommit-hash: 2fd73fabe469357a12c2c974c140f67e7cdd76d0\ncommit-date: 2021-03-23\nhost: x86_64-unknown-linux-gnu\nrelease: 1.51.0\nLLVM version: 11.0.1\n", "stderr":"" }, "551384519178316037":{ "success":false, "status":"exit code: 1", "code":1, "stdout":"", "stderr":"error: `-Csplit-debuginfo` is unstable on this platform\n\n" } }, "successes":{ } } I am also attaching the output.txt of the successful build. output.txt Can you also post the target/.rustc_info.json file if you do not delete rust-toolchain? Hmmm.... :heavy_check_mark: Indeed, uninstalling and re-installing rust fixed the error. The library is build with no errors whatsoever. The only thing I can suppose is that something "broke" during the various rust updates (through rustup update), because apart from the the updates I have not done anything to modify my installation. The initial installation was done through the instructions from the official page (the same as the one you provided). Thanx for debugging this with me :-) No problem, great that it worked out! Out of curiosity do you happen to use vscode and/or the jetbrains Rust plugin? 
Out of curiosity do you happen to use vscode and/or the jetbrains Rust plugin? I use the JetBrains Rust plugin.
Hi, thanks for your input. > I went ahead and increased the memory limit already, but the feature seems broken. Do you mean it does not work or do you mean that the rest of the message explains why it's broken? Without manually increasing the memory limit, I was unable to perform basic operations in Siril. It would just randomly declare that I had insufficient free memory. With the memory limit function disabled, it works as expected. > The OS can page out little-used pages, creating more free memory. Yes, no problem with that, it's still more physical memory that is free to use by our program. > It can even do this without swap, by releasing memory mapped pages. I'm not familiar with this. But as I understand it, it will make more free space than less, so it's not a problem. I think you're missing my point. The OS will not reclaim pages for your use, unless you try to allocate them . You have to apply pressure to the VM subsystem to get it to reclaim. By checking the "free" status, all you're doing is causing Siril to return spurious errors which the OS would have been able to satisfy. If your concern is that users might accidentally initiate an operation which uses an unusual amount of RAM, you could issue a warning like "Warning: this operation will use more than 50% of your physical RAM. Continue? (yes) (no)" We could indeed fix the maximum amount of memory the program will use. I see three modes then: unlimited (= the OS manages the required space as swap, which may be even too large for the swap but that's what Stephen proposed), limited to a ratio of free space like it is now, and limited to some absolute amount of memory as you suggest. Please take a look at how the GIMP manages memory. Like Siril it has to be able to process very large image files. It has a configurable tile cache, where over a certain limit (user configurable) it will start paging portions of images to disk. Because the images are managed in tiles, this is a better fit for paging than the typical row/column buffer. GIMP users would surely be very surprised if it just refused to open a large file when they have more than three tabs open in their web browser (due to a transient free-space check). This sort of approach should have better liveliness compared to the unlimited approach above, simply because it de-prioritizes the graphics application compared to other users of memory. You could also imagine the tile cache being strictly processed in the background without impacting the GUI event loop. You can actually simulate this to a degree on Linux by just putting Siril in a memory cgroup with the limit set to half of RAM (and swap enabled, of course). In this configuration Siril will start being paged to disk even when there's free space, which increases system liveliness. Of course, this isn't applicable to normal users. I don't think Siril really needs to go to these sorts of extremes, but having batch processes fail to execute because of some random system load is highly unexpected. The only other software which I normally expect to fail due to physical RAM allocation is a VM hypervisor.
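As a concrete illustration of the "free" versus reclaimable-memory distinction being argued here, the following small sketch (using the third-party psutil package; the 8 GiB figure is arbitrary) prints both numbers and shows how a naive free-space check can reject an allocation the OS could actually satisfy:

import psutil

mem = psutil.virtual_memory()
print(f"free:      {mem.free / 2**30:.1f} GiB")       # pages not currently used for anything
print(f"available: {mem.available / 2**30:.1f} GiB")  # free plus caches the OS can reclaim on demand

required = 8 * 2**30  # pretend the next operation needs 8 GiB
if mem.free < required <= mem.available:
    print("A check against 'free' would fail here, even though the allocation "
          "would succeed once the OS reclaims cache under pressure.")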
Could a Stargate be used to destroy another Stargate? Whenever a stargate is opened, we see some kind of blowback, which I'm told is called a "kawoosh". If there is nothing blocking the surface of the event horizon, this forms and comes out from the Stargate, then snaps into the event horizon and is not seen again until the gate is opened again. We've seen instances where this destroys different types of matter. Could this kawoosh be used to destroy another Stargate? For instance, if you are able to transport a gate to a planet where there's another and deactivate the DHD on the first one, and position the second one correctly, wouldn't the kawoosh destroy the 1st gate? My thought is that if that were possible, it would have been possible to destroy segments of the Ori super-gate using that method. Would that have let them use a stargate itself as a weapon to destroy almost anything? The "kawoosh" is an unstable wormhole. It can supposedly disintegrate "virtually anything". If we go by known physics and assume that this is a real wormhole with a real event horizon, and assume that the Ori super-gate is not made of some sort of exotic matter, it can definitely have a bite taken out of it using this method. However, event horizons in the Stargate universe are not quite right -- if they were, you will never be able to pull your hand out once you put it in. Instead, Stargates use a combination of wormhole plus matter-energy conversion. In this case, it appears to be the matter-energy conversion mechanism working without a stable wormhole. So the matter disintegrates and converts into energy, but goes nowhere (we know it is not just vaporized because once a wormhole activated underground and left a normal-pressure cavity large enough for Teal'C to enter and dig himself to the surface). So either way, this could have been used against the Ori gate. Of course that gate was huge. It probably would have taken a long time to damage it to the point that it could not have functioned any more. @Xantec: Destroying a single ship, which could have been done with one kawoosh, would have broken the chain. I (and I think @TangoOversway as well) assumed that the ring is one giant continuous coil. Breaking it at one point is all it takes, in that case. @HNL: That's close. I figured either that was the case, or they could pick two (or four) strategic locations on the super-gate and destroy those points, leaving separate sections in space that could be pulled apart. It comes down to what it takes to destroy one of the super gate segments. As each segment not only contains what is necessary for the whole gate to function, they also have their own independent power, propulsion, and navigation systems. And knowing the Ori, they also have their uber shields without the primary weapon weakness. All around a tough nut to crack. @TysonoftheNorthwest Yes, it seems we did not account for shields. Shields may in fact be able to repel an unstable vortex. @HNL, except that Tealc used the tactic to destroy that Ori mothership with it's shields up so it's probably just something that was unanticipated since nobody commonly lugs stargates around as weapons. Not to mention it's got such a short effective range the concern for it could've been discarded out of hand. I'm guessing that the supergate would have fallen under the "things the kawoosh can't destroy" otherwise that seems like an easy solution that was overlooked, unless the writers totally forgot about it. 
On the other hand, if the supergate was destroyed, then the ori would just have constructed one somewhere else. Notice how once the sg-1 team blocked the supergate by dialing in from pegasus no more ori entered the milky way, until it was deactivated. You know how you can't dial into a stargate near one that is already active. I think the same principle holds for intergalactic gates, but one a much larger scale. If there is one intergalactic gate that is active than no other intergalactic gates can be active in proximity, perhaps in the whole galaxy. Therefore blocking the supergate would prevent the ori from getting to the milky way, not destroying it. This seems more like random guessing than anything based on canon @DVK-on-Ahch-To , we know it stopped the Ori the only question is if that's because they couldn't overcome it or the writers dropped the ball. A better question is why didn't the Ori either turn off the supergate or destroy it themselves if it could be. But then they hadn't even bothered to guard it which would've prevented the issue to begin with The Kawoosh is an "unstable vortex created by the formation of a wormhole" and is unrelated to the "event horizon" that performs the matter-energy conversion employed by the stargates. The kawoosh from a supergate was capable of destroying an Ori warship, which would ostensibly be more heavily armored and shielded than ring segments of the supergate itself. This kawoosh would certainly be capable of destroying supergate ring segments. However, this begs the follow-up question of how you get a regular stargate to create a kawoosh next to a supergate. Due to how stargate priority or supersedence works, the supergate would have priority for all inbound wormholes, and the Milky Way has no "puddle jumper DHD" for remote dialing a space gate for an outgoing wormhole. (Side note: there are puddle jumpers in the Milky Way, the time jumper from S8E13 and 1 temporarily brought from Pegasus in SGA S3E10. Both were kept in storage/research and not used in regular missions.) Simply put, dialing an outbound wormhole with a space gate in the milky way is not a trivial task and dialing an inbound wormhole to a nearby standard gate would likely not work at all. Such a solution would have made SG-1 S10E3 last all of about 30 seconds. The priority/supersedence point was also completely ignored in that episode after being firmly established throughout the series. All they had to do to use your solution would be to turn their destination standard gate around to face the supergate and BOOM. Most likely this was simply a solution that the writers overlooked or chose not to employ as it would detract from the series' drama. As I said, an entire episode would have been reduced to 30 seconds. I would make two points. First I don't think we know if the Ori warships are made of a more dense material or shielded more than the supergates. Second, in the episode you mention, s10 e3, the SGC had to get a milky way gate to jump to the supergate to keep it open; which tells us that the milky way gates would actually supersede the supergates, likey because they supergates are "out of network." The reason the episode was more than 30 seconds is because they needed to dial in from Pegasus with a gate near a black hole to establish the longer than 30 minute shut down time. I agree with you that the kawoosh and event horizon are completely unrelated however! @Odin1806 - Interesting points. 
1: It's not the density of the Ori ship or supergate material so much as the engineering. The supergate would not be able to sport armor or shields, because the supergate's own function would destroy them. (It's possible the Ori could invent technology to overcome this, but we see no evidence of it in the show.) 2: The Milky Way gate superseding the Ori gate is exactly why that episode ignores precedent; it's established throughout the series that newer gates supersede older gates. There was no "in universe" explanation given for the inconsistency.
STACK_EXCHANGE
Hi, I'm Kevin. Ten years ago I wrote a note. That led to another, and then another, and soon enough, I had a few thousand of them and an increasingly unhappy DropBox client that refused to sync it all. I worked at AWS and tried to keep on top of everything cloud; I programmed in three different languages and kept notes to help me context switch between them; I also did full-stack development on the side and that, well, it required referencing everything. If I spent more than five minutes figuring something out, those are five minutes I never want to re-live again. But this is difficult to do in practice. My solution is something I call hierarchical note taking. It's a system I've developed over the past ten years that has allowed me to amass a corpus of 30k+ notes. This system has some awesome properties that I haven't been able to replicate with anything else: In July 2020, I launched the preview for Dendron, the first-ever note-taking tool built from the ground up to support hierarchical note-taking. Dendron is open source, local first, Markdown-based, and runs natively on top of VSCode. Dendron lives inside VSCode because I wanted to move fast and focus on the truly novel parts of hierarchical notes without also building all the scaffolding that comes from creating an editor. Living inside VSCode means that users also have access to the thousands of existing extensions that provide everything from vim keybindings to realtime collaboration editing. Over 50 years ago, Vannevar Bush, an early visionary in information science, said something about the field that strikes a deep chord with me. "We are overwhelmed with information and we don't have the tools to properly index and filter through it. [The development of these tools, which] will give society access to and command over the inherited knowledge of the ages [should] be the first objective of our scientist" - Vannevar Bush, 1945. 50 years later, this statement is just as true. The tools haven't changed but the information has only become more overwhelming. Dendron is my attempt at building a tool that will give humans access to and command over the inherited knowledge of the ages. Hi, I'm Kiran. I'm a college friend of Kevin's and now very excited to work full time on Dendron. I'm a born note-taker and have always used journaling as a means for self-motivation and discovery. I've flip-flopped over the years from paper/pen to various digital note-taking tools, until Dendron. I think being able to manage and share knowledge, both individually and as a community, is one of the big unsolved problems of our times. I think we have a shot at doing this with Dendron.
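To give a flavour of what a hierarchical vault can look like in practice, here is a small made-up slice of one; the dot-delimited file names (one hierarchy level per segment) are purely illustrative, not an excerpt from an actual vault:

notes/
  lang.md
  lang.python.md
  lang.python.collections.md
  aws.md
  aws.s3.md
  aws.s3.lifecycle-rules.md

Each deeper segment narrows the topic, so finding a note is a matter of typing its path from the general to the specific.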
OPCFW_CODE
It really is our project help that will satisfy all of your goals with a Java research service, and help you to become a capable Java developer. A for loop consists of an initializer, a condition test, a modifier and a body; each of these can be empty. A while loop may have its condition either at the start or at the end of the loop.

At the TutorXpert Raspberry Pi programming project help service, you will find experts who are innovative, highly skilled and knowledgeable in Raspberry Pi programming, offering all sorts of Raspberry Pi project help custom made for you. You need to submit an order to get your referral code. This code will be unique to you and can be shared with your friends to earn revenue.

The tasks are referred to by number -- problem set one, problem set two, and so forth. These numbers are the assignment numbers used throughout the term in which each one was given at MIT, and you'll probably want to change them.

I received my assignment on time and it was spot on. Though I gave him very little time to do my programming assignment, he did it perfectly and without a single mistake. Very impressive. If you are looking for someone to complete your assignment at the last minute with guaranteed good work, then look no further.

You can feel confused and stressed out when you have a hard time with a difficult Java project. Java project help is a common requirement, because of the difficulty of producing working Java that doesn't conflict with other programs or that can be packaged directly into web pages for audiences. In the college's web pamphlets you will find information that is extremely helpful if you are searching for financial aid.

Java contains a set of collection classes, which are comparable to the STL in C++. There are abstract collections, such as Set and List, which provide an interface, and implementations such as TreeSet and ArrayList. There are methods such as contains which are provided by all of the collections, although the speed of checking contains depends on the type of collection: a TreeSet is considerably faster than an ArrayList. Sets are unordered while Lists are ordered, which means that if you insert the values 1, 2, 3 into a Set and into a List, you can get them back in the same order from the List, but from the Set the order is not preserved; you can show that you have those values, but you can't say anything about the order in which they were added to the Set (a short sketch of this difference follows at the end of this passage).

We are searching for a qualified devops engineer to help kickstart our AI-driven job platform. The project is based on MongoDB, Elasticsearch and [url removed, login to view] and Python microservices.

Programming languages are among the most challenging things to understand, and if a student wants to master a language professionally then he has to work quite hard. Find documentation, code samples, how-to articles, and programming references to help build apps for the Office Store or a private app catalog, and to customize and integrate Project Server and the Project clients with many other desktop and enterprise applications for enterprise project management.
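A minimal sketch of that Set/List difference, using only standard JDK classes (the values are illustrative):

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class OrderDemo {
    public static void main(String[] args) {
        // Lists preserve insertion order.
        List<Integer> list = new ArrayList<>();
        list.add(1);
        list.add(2);
        list.add(3);
        System.out.println(list);            // [1, 2, 3] -- the order they were added

        // Sets only guarantee membership; a HashSet makes no promise about order.
        Set<Integer> set = new HashSet<>();
        set.add(1);
        set.add(2);
        set.add(3);
        System.out.println(set.contains(2)); // true -- membership checks still work
        System.out.println(set);             // the printed order is not meaningful
    }
}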
The Java programming language has five essential characteristics: it is familiar, simple and object oriented; secure and robust; architecture-neutral and portable; high performance; and dynamic, threaded and interpreted. Our proficient programming research help specialists understand the basics of programming languages, and therefore it is easy for them to handle C#, C++, C, Java or other programming languages.
OPCFW_CODE
package org.las2mile.scrcpy.model;

import java.nio.ByteBuffer;

/**
 * Created by Alexandr Golovach on 27.06.16.
 */
public class VideoPacket extends MediaPacket {

    public Flag flag;
    public long presentationTimeStamp;
    public byte[] data;

    public VideoPacket() {
    }

    public VideoPacket(Type type, Flag flag, long presentationTimeStamp, byte[] data) {
        this.type = type;
        this.flag = flag;
        this.presentationTimeStamp = presentationTimeStamp;
        this.data = data;
    }

    // create packet from byte array
    public static VideoPacket fromArray(byte[] values) {
        VideoPacket videoPacket = new VideoPacket();

        // should be a type value - 1 byte
        byte typeValue = values[0];
        // should be a flag value - 1 byte
        byte flagValue = values[1];

        videoPacket.type = Type.getType(typeValue);
        videoPacket.flag = Flag.getFlag(flagValue);

        // should be 8 bytes for timestamp
        byte[] timeStamp = new byte[8];
        System.arraycopy(values, 2, timeStamp, 0, 8);
        videoPacket.presentationTimeStamp = ByteUtils.bytesToLong(timeStamp);

        // all other bytes is data
        int dataLength = values.length - 10;
        byte[] data = new byte[dataLength];
        System.arraycopy(values, 10, data, 0, dataLength);
        videoPacket.data = data;

        return videoPacket;
    }

    // create byte array
    public static byte[] toArray(Type type, Flag flag, long presentationTimeStamp, byte[] data) {
        // should be 4 bytes for packet size
        byte[] bytes = ByteUtils.intToBytes(10 + data.length);

        int packetSize = 14 + data.length; // 4 - inner packet size 1 - type + 1 - flag + 8 - timeStamp + data.length
        byte[] values = new byte[packetSize];
        System.arraycopy(bytes, 0, values, 0, 4);

        // set type value
        values[4] = type.getType();
        // set flag value
        values[5] = flag.getFlag();

        // set timeStamp
        byte[] longToBytes = ByteUtils.longToBytes(presentationTimeStamp);
        System.arraycopy(longToBytes, 0, values, 6, longToBytes.length);

        // set data array
        System.arraycopy(data, 0, values, 14, data.length);

        return values;
    }

    // should call on inner packet
    public static boolean isVideoPacket(byte[] values) {
        return values[0] == Type.VIDEO.getType();
    }

    public static StreamSettings getStreamSettings(byte[] buffer) {
        byte[] sps, pps;
        ByteBuffer spsPpsBuffer = ByteBuffer.wrap(buffer);
        if (spsPpsBuffer.getInt() == 0x00000001) {
            System.out.println("parsing sps/pps");
        } else {
            System.out.println("something is amiss?");
        }
        int ppsIndex = 0;
        while (!(spsPpsBuffer.get() == 0x00 && spsPpsBuffer.get() == 0x00
                && spsPpsBuffer.get() == 0x00 && spsPpsBuffer.get() == 0x01)) {
        }
        ppsIndex = spsPpsBuffer.position();
        sps = new byte[ppsIndex - 4];
        System.arraycopy(buffer, 0, sps, 0, sps.length);
        ppsIndex -= 4;
        pps = new byte[buffer.length - ppsIndex];
        System.arraycopy(buffer, ppsIndex, pps, 0, pps.length);

        // sps buffer
        ByteBuffer spsBuffer = ByteBuffer.wrap(sps, 0, sps.length);
        // pps buffer
        ByteBuffer ppsBuffer = ByteBuffer.wrap(pps, 0, pps.length);

        StreamSettings streamSettings = new StreamSettings();
        streamSettings.sps = spsBuffer;
        streamSettings.pps = ppsBuffer;

        return streamSettings;
    }

    public byte[] toByteArray() {
        return toArray(type, flag, presentationTimeStamp, data);
    }

    public enum Flag {
        FRAME((byte) 0),
        KEY_FRAME((byte) 1),
        CONFIG((byte) 2),
        END((byte) 4);

        private byte type;

        Flag(byte type) {
            this.type = type;
        }

        public static Flag getFlag(byte value) {
            for (Flag type : Flag.values()) {
                if (type.getFlag() == value) {
                    return type;
                }
            }
            return null;
        }

        public byte getFlag() {
            return type;
        }
    }

    public static class StreamSettings {
        public ByteBuffer pps;
        public ByteBuffer sps;
    }
}
STACK_EDU
OK. First of all lets set the scene: In the Intel SS4200 NAS box 4 drives have been installed (2TB each) in a RAID5 architecture. Worked for a while as a samba server, then problems started with the hardware. We decided to change hardware completely. So: I built an ubuntu 9.04 server on an intel motherboard. I used one ATA drive for the root filesystem and the 4 PREVIOUS HDDs each one connected to each sata controller. uname -a reports Linux NAS 2.6.28-17-server #58-Ubuntu SMP Tue Dec 1 19:58:28 UTC 2009 i686 GNU/Linux The RAID5 architecture was detected and rebuilt. Now /sbin/mdadm --detail /dev/md0 reports root@NAS:/etc# /sbin/mdadm --detail /dev/md0 Version : 00.90 Creation Time : Wed Jan 27 22:06:31 2010 Raid Level : raid5 Array Size : 5860535808 (5589.04 GiB 6001.19 GB) Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB) Raid Devices : 4 Total Devices : 4 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Mon Feb 1 00:11:26 2010 State : clean Active Devices : 4 Working Devices : 4 Failed Devices : 0 Spare Devices : 0 Layout : left-symmetric Chunk Size : 64K UUID : 42dcb4dd:20227bfb:cced5de7:ca715931 (local to host NAS) Events : 0.44 Number Major Minor RaidDevice State 0 8 17 0 active sync /dev/sdb1 1 8 33 1 active sync /dev/sdc1 2 8 49 2 active sync /dev/sdd1 3 8 65 3 active sync /dev/sde1 (NOTE: The physical order of the drives have been changed since the mobo itself has been changed. However I assume that since the Superblock is persistent this did NOT corrupt the data...Please correct me if I am wrong with this....) Now I discovered that Intel SS4200 box has probably installed an lvm2 volume on top of raid: Reading all physical volumes. This may take a while... Found volume group "md0Container" using metadata type lvm2 LV VG Attr LSize Origin Snap% Move Log Copy% Convert md0Region md0Container -wi-a- 5.46T PV VG Fmt Attr PSize PFree /dev/md0 md0Container lvm2 a- 5.46T 0 root@NAS:/etc# fdisk -l /dev/md0 Disk /dev/md0: 6001.1 GB, 6001188667392 bytes 2 heads, 4 sectors/track, 1465133952 cylinders Units = cylinders of 8 * 512 = 4096 bytes Disk identifier: 0x00000000 Disk /dev/md0 doesn't contain a valid partition table ...I don't know if this constitutes a problem or not....) Now I tried to mount the lvm2 volume: root@NAS:/etc# mount /dev/md0Container/md0Region /mnt mount: wrong fs type, bad option, bad superblock on /dev/mapper/md0Container-md0Region, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so [97848.417567] EXT4-fs warning (device dm-0): ext4_fill_super: extents feature not enabled on this filesystem, use tune2fs. [97848.417577] EXT4-fs: dm-0: couldn't mount because of unsupported optional features (2000000). I tried to tune2fs -O ^extents /dev/md0Container/md0Region ( I don't know if this is the correct command or not...) root@NAS:/etc# tune2fs -l /dev/md0Container/md0Region tune2fs 1.41.9 (22-Aug-2009) tune2fs: Filesystem revision too high while trying to open /dev/md0Container/md0Region Couldn't find valid filesystem superblock. 
I tried almost everything: mke2fs -t ext4 -n /dev/md0Container/md0Region mke2fs 1.41.9 (22-Aug-2009) OS type: Linux Block size=4096 (log=2) Fragment size=4096 (log=2) 366288896 inodes, 1465131008 blocks 73256550 blocks (5.00%) reserved for the super user First data block=0 Maximum filesystem blocks=0 44713 block groups 32768 blocks per group, 32768 fragments per group 8192 inodes per group Superblock backups stored on blocks: 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 102400000, 214990848, 512000000, 550731776, 644972544 root@NAS:/etc# e2fsck -b 98304 /dev/md0Container/md0Region e2fsck 1.41.9 (22-Aug-2009) e2fsck: Bad magic number in super-block while trying to open /dev/md0Container/md0Region The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock: e2fsck -b 8193 <device> THIS HAPPENS WITH ALL BLOCKS !!! I TRIED EVERYTHING I KNOW: root@NAS:/etc# dumpe2fs /dev/md0Container/md0Region dumpe2fs 1.41.9 (22-Aug-2009) dumpe2fs: Filesystem revision too high while trying to open /dev/md0Container/md0Region Couldn't find valid filesystem superblock. ...and I don't know any other way to recover the freaking volume. What I would like to do is somehow to recover the files that have been written to the volume when the disks were still on Intel SS4200 NAS box. Any help would be greatly appreciated. Many Thanks to all!
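P.S. For completeness, two further checks I plan to run (just a sketch, I have not captured their output yet): ask what signature is actually sitting on the logical volume, since "Filesystem revision too high" may simply mean it is not a plain ext2/3/4 filesystem at all, and re-examine the RAID member superblocks after the hardware move:

# what does the volume actually contain?
blkid /dev/md0Container/md0Region
file -s /dev/mapper/md0Container-md0Region

# sanity-check the RAID member superblocks
mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1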
OPCFW_CODE
Multi-line insertion Added Shift-I Vi command in block selection mode to insert text at the beginning of each line of the block. Wonderful! Thank you for this! I just tested it locally, and it works beautifully. I'll have a second look later today or tomorrow and then I'll merge it. Thanks! I just realized backspace and delete keys don't work though. Any idea? Yes, I noticed. But that's a following step. I'd like to merge that separately, first I'd like to be sure that this is absolutely correct. (There are a few other keys in this mode that don't work, like ControlW for instance.) I just committed a fix for the backspace/delete handling. I just committed the same thing for Shift-A (insert at the end of the selection block). FYI: I left a few comments, but I'm about to start merging, with some changes. My code was probably not following good practices, so thanks a lot for your comments. Looking forward to seeing this feature in next release! Don't worry! The pull request saved me some work, so it's was valuable anyway. I thing I probably also have to do is create a unit test for this. (Not much work.) @davidbrochart, Thanks again for this! I merged your first commit, and made some changes for the rest. Normally, everything should be in prompt_toolkit. Thank you for merging and for the enhancements! This last commit is (not exactly) an implementation of the Vi substitute command (search and replace). I just took advantage of the multi-line insertion commit to go a bit further. In selection mode, it is now possible to enter the search mode (with /) and then to enter the replace mode (with a second /). Once the replace mode is entered, the matching text (in the selection) is deleted, and the user can enter the replacing text. The replace mode is exited by pressing Esc. So it doesn't follow the Vi :s/foo/bar/g syntax, but at the same time it still seems quite logical: you start with a search and you go on with a replace if you want. One limitation is that now the / character cannot be used in the searched text. It is just a proof of concept, let me know if you find it interesting/acceptable (and sorry for using global variables!). Or, instead of the second /, we could just hit Enter, just like in pure search mode. This way, there wouldn't be any limitation of not having a / character in the searched text. And because replacing is active only in selection mode, this behavior doesn't interfere with the pure search mode. So, the sequence would be: / -> enter searched text -> Enter -> enter replacing text -> Esc Hi @davidbrochart, About searching in selection mode, this has been implemented, but slightly different. In commits: https://github.com/jonathanslenders/python-prompt-toolkit/commit/b2171606e60a5e1d60cc355b57e9611c18e86e8e and https://github.com/jonathanslenders/python-prompt-toolkit/commit/e84abde00ffb6039de58e4aa555acbee81c766af About the search/replace mode. I'm not a big fan of using enter to go into replace mode. It doesn't work like that in Vi or other editors, right? Just one question, if you'd like to propose more improvements (which are always very welcome), could you start from a new pull request? That makes it easier to review changes (and I could close this one). Thanks! Sure, I will create a new pull request, so that you can close this one. I know it is not the Vi syntax, I will try to implement the real substitute command then.
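As an aside for anyone reading this thread later: custom key bindings can now be wired up through the public prompt_toolkit API rather than the internal code discussed above. The following is only a rough sketch based on the prompt_toolkit 3.x API (the names here are assumptions relative to this 2015-era thread), not the merged implementation:

from prompt_toolkit import PromptSession
from prompt_toolkit.key_binding import KeyBindings

kb = KeyBindings()

# Bind Ctrl-T to insert some text at the cursor; the real block-insert feature
# discussed above lives inside prompt_toolkit's own Vi key bindings.
@kb.add('c-t')
def _(event):
    event.current_buffer.insert_text('inserted')

session = PromptSession(key_bindings=kb, vi_mode=True)
print(session.prompt('> '))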
GITHUB_ARCHIVE
Linux has been around since the mid-90s and now serves as one of the most widely used operating systems in the world. You can find this OS on smartphones, computers, servers, cars, home appliances and many more devices. It is one of the most popular platforms on the planet. Linux is mostly used on servers, and more than 90% of the internet we have today is powered by Linux servers. That's a long way from where it started in the 90s. Linux is free and open source software, which makes it possible for anyone to make changes to the code and redistribute it under a different name. Some of the most famous Linux based operating systems are:
- Kali Linux
- Red Hat Linux
Bottom line is Linux is everywhere! So sooner or later you will come across this OS. Linux is mostly used without a GUI (graphical user interface), which can leave absolute beginners lost when getting started. However, we did the research and compiled the 40 most basic commands that will help you master the Linux environment. To start our journey we need to get familiar with the "terminal" or "shell".
What is the Linux shell? In the simplest terms, this is the program which receives commands from the user, gives them to the operating system and shows the output. Most Linux distributions come with a CLI (command line interface), and you use the terminal/shell to input the commands. Note that there are Linux distros that come with a GUI as well, but this tutorial is dedicated to beginners who want to familiarize themselves with the Linux command line. Now let's open the terminal and get started with the commands. Note that the Linux shell is case sensitive, so you should be careful when entering commands.
Basic Linux commands
1. PWD command
The pwd command will show you the current working directory (also known as folder) you are in. It will show the absolute path with all the directories. An example is /root/folder1
2. LS command
The ls command is used to show all the content inside a directory. The default is to show the contents of the directory you are in. If you want to see the content of other folders you should type the folder path after ls. For example, ls /root/folder1/books will list the files inside the books folder. The ls command has many variations:
- ls -R : list all files in subdirectories
- ls -a : list hidden files
- ls -al : list detailed information about files and directories (permissions, size, owner, etc.)
3. CD command
The cd command is used to navigate between directories. For this to work you will need to type the full path or the name of the directory (depending on your current directory). There are some shortcuts to help you navigate quickly:
- cd .. (with two dots) to move one directory up
- cd to go straight to the home folder
- cd - (with a hyphen) to move to your previous directory
4. MKDIR command
You can use the mkdir command to create new directories. For example, to make a directory called pictures you can type mkdir pictures. Here are some tips on using the mkdir command:
- To create a new directory inside another directory, use this format: mkdir pictures/europe
- Use -p to create the missing parent directories in one go. Here's an example: mkdir -p pictures/trips/europe
5. RMDIR command
If you need to delete a directory, use the rmdir command. However, rmdir only allows you to delete empty directories.
6. RM command
You can use the rm command to delete files. However, if you want to delete a directory together with everything inside it, use rm -r.
7. TOUCH command
The touch command is used to create empty files directly from the command line. For example, touch docnew.txt will create a new text file with the name 'docnew'. You can also create files in a different location by specifying the path: touch /home/1gbits/docs/mywebsite.html will create an html file under the docs directory.
8. FIND command
The find command is used to search for files and directories within a given directory tree. An example is: find . -name docnew.txt
9. GREP command
The grep command lets you search for text inside a given file. For example, grep dummy testdoc.txt will search for the word "dummy" in the text file. Lines that contain the searched word will be displayed.
10. HEAD command
The head command will make your life much easier when going through documents. This command is used to view the first lines of any text file. By default it will show the first 10 lines of text.
11. TAIL command
This one has a similar function to the head command, but instead of showing the first lines, the tail command will display the last ten lines of a text file.
12. DIFF command
The diff command compares the content of two files line by line. This is useful when you need to see what changed between two versions instead of reading the entire code again. An example would be diff code1.txt code2.txt
13. MAN command
There are a lot of commands in Linux and it's normal that we don't remember what each of them does. This is where the man command comes to help. It shows the manual pages of a command. For example, "man ls" shows the manual pages of the ls command. No matter the command, man will tell you everything about it.
14. MV command
The mv command has two functions:
- to move files
- to rename files
In order to move a file you can type mv test.txt /home/1gbits/document. As shown in the example, you specify the name of the file that needs to be moved and then the destination path. To rename files you can use mv file1.txt file5.txt. The name of the file will be changed to file5.txt
15. NANO command
The nano command is used to create and edit text files. This is an easy to use command line text editor which includes all the basic functionality of a regular text editor.
16. SUDO command
The sudo command will let you perform tasks that require root privileges. However, it is not advisable to use it for everyday work, because a mistake made as root can be hard to undo.
There are a lot more commands out there, but we believe that with these simple commands you can start getting familiar with the world of Linux. Here are some tips:
- You can use the clear command to clear the terminal
- The TAB key can be used to complete the rest of a command. For example, after you type cd doc and press TAB it will automatically complete it as cd documents
- You can close the terminal by typing exit
- You can shut down or reboot the computer by using sudo halt and sudo reboot
(A short example session using several of these commands is included at the end of this article.)
If you liked this article let us know in the comments below and if we are missing some basic commands let us know.
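As promised above, here is a short example session tying several of these commands together (the directory and file names are made up for illustration):

pwd                          # print the current directory, e.g. /home/demo
mkdir -p pictures/trips      # create nested directories in one go
cd pictures/trips
touch notes.txt              # create an empty file
ls -al                       # detailed listing, including hidden files
grep beach notes.txt         # search for the word "beach" (no matches in an empty file)
mv notes.txt trip-notes.txt  # rename the file
cd -                         # jump back to the previous directory
rm -r pictures               # remove the directory and everything in it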
OPCFW_CODE
KMediaNet is a set of applications that make it easier to use the Network-Integrated Multimedia Middleware (NMM) from KDE. The official web site of NMM is http://www.networkmultimeida.org.

What is NMM? The Network-Integrated Multimedia Middleware (NMM) provides an architecture that allows distributed multimedia applications to be created easily: local and remote multimedia devices or software components can be controlled transparently and integrated into a common multimedia processing flow graph. NMM is both a research project and an Open Source project. NMM runs under Linux, is implemented in C++, and is distributed under the LGPL and GPL.

KMediaNet currently has two applications:
- kio_nmm : a kio slave for Konqueror that is used to see the resources (DVD, TV card, files, ...) that a remote or local computer is sharing.
- nmm_dcop : the player. When you click on a resource in kio_nmm, it calls nmm_dcop to launch a player. You can also launch a player without using kio_nmm by making the correct call with kdcop or dcop.

Using it you can play a TV card (v4l2 driver), the DVD player or a file as if these resources were installed on the local computer. Screenshot 2 shows a DVD playing from a remote machine, and you can operate its menus. These applications can still be improved, but they are now at a usable stage.

You must have the NMM CVS version installed on your system, which you can find at: You must have "libextractor" and "avinfo" installed on the system that runs serverregistry. When you finish the installation of NMM, you can install kio_nmm and nmm_dcop. Both applications are needed for everything to work correctly. You must set the environment variable NMM_DEV_DIR in /etc/profile.local or elsewhere, but this variable must be visible to KDE. The directories that contain the NMM libs must be added to /etc/ld.so.conf. The remote machine must run the serverregistry application from NMM. Review your firewalls, because a connection is made from the remote machine to the local one, and if the firewall blocks it the player doesn't work. Then, on the local machine, launch Konqueror and put in the location bar: nmm://"remote_machine_name". You can't put the IP. If you want to use an IP, do this: and at remote_machine_name put the name that is used on the remote machine as its Linux name.

I'm looking for collaborators to build the web page for these applications and to continue the development of this project.

11 years ago
- Bug fix release
- LADSPA plugins are now used. For this you have to set up the LADSPA environment: How do I use LADSPA with NMM? (1) set the environment variable LADSPA_PATH; otherwise no LADSPA plugins will be found by the LADSPANode. There are some LADSPA plugins in /lib/ladspa (2) in the directory /resources there is an XML file named LADSPAPlugin_presets.xml containing presets for some plugins. It is possible to create additional 'profiles'. To create a new profile, simply copy another profile for the plugin of choice and change the values between the tags. The default profile is set up so that the plugins appear neutral. (3) there is a graph description (mp3ladspa.gd). Additional parameters are 1 or more mp3 encoded file(s).

Many new improvements:
- kopete-kmedianet : a Jabber protocol plugin for Kopete. With it you can control and manipulate all parameters of KMediaNet. The remote user configures all parameters in KMediaNet Kopete, and the local user receives them by right-clicking on the remote user and selecting "Request NMM Resources".
- New graphs: Video Conference, Call Conference and VNC Viewer.
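Putting the environment pieces mentioned above together, a minimal setup might look like the following; every path here is only an example and depends on where NMM and the LADSPA plugins were installed:

# make the NMM variables visible to KDE, e.g. in /etc/profile.local
export NMM_DEV_DIR=/usr/local/nmm
export LADSPA_PATH=/usr/lib/ladspa

# let the dynamic linker find the NMM libraries
echo /usr/local/nmm/lib >> /etc/ld.so.conf
ldconfig

# on the remote machine: publish its multimedia resources
serverregistry &

# on the local machine: browse them from Konqueror
konqueror nmm://remote_machine_name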
OPCFW_CODE
In the series of articles on front-end application development with Vue + Element, the handling and implementation of the system's various functions were introduced earlier, step by step. This essay walks through the integration of front-end and back-end development along one main line: the handling of a query interface in the ABP framework back end, the encapsulation of the front-end API calls, and the whole calling process of the front-end Element interface.

As introduced earlier, the Vue + Element front end calls the API services published by the back-end ABP framework. The API services publish the corresponding interface information through Swagger for our front-end development and use, which is very convenient. When we use the Vue + Element front-end framework, we also need to encapsulate a simple abstract base class for calling the back-end API, so that conventional interfaces such as add, delete, modify and query can be used by inheriting the base class, without cumbersome and repeated code. In addition, when fetching data for a page, Vue + Element can pass in the corresponding API parameters, such as paging information and query conditions, which basically covers the conventional operation of querying a data list; the data obtained can then be bound to the table control of the interface.

1. Back-end interface implementation in the ABP framework

I have put together an architecture diagram according to the relationships between the projects, as shown below. In the figure above, the orange parts are the classes or interfaces we add to each layer. The serial numbers on the layers are the items we need to deal with step by step. Let's interpret the contents of each class or interface one by one. The ABP framework back-end project solution view is shown below. Swagger is integrated in Web.Host, and ABP + Swagger is responsible for publishing and displaying the API interfaces. The following is the management interface of the API. Looking further at the API description of GetAll, we can see the corresponding condition parameters, as shown below. These are processed as query conditions by the back end to filter the returned data records, so our front-end interface also needs to construct its queries according to these parameters. We can handle them as conditions, where MaxResultCount and SkipCount are the parameters for paging. In the application service layer class, overriding CreateFilteredQuery sets the query rules for GetAll, and overriding ApplySorting specifies the sort order of the list. The processing of the menu module is as follows. Subclasses generally override these functions, because the base class already integrates the calling logic for condition handling, paging and sorting. The following is the GetAll implementation of the base class CrudAppService.

2. Encapsulation of the front-end API classes in Vue + Element

The front-end/back-end separated Vue + Element stack is used as the front-end technical route, so the boundary between the front end and the back end is very clear: the front end builds the application by fetching the corresponding JSON over the network.
Generally speaking, a page module may involve the store module to hold the corresponding state, or it may access the API module directly to fetch and display data. During page development, in most cases interaction with the store module is not required; it is generally only needed when the corresponding page data should be kept in the global state. Through web proxy processing we can easily handle cross-domain requests on the front end: different paths can call APIs on different domains and addresses and are finally converted into local API calls, which takes care of the cross-domain issue.

The front end encapsulates JS classes according to the interfaces of the ABP back end. The ES6 class concept is used to achieve a unified encapsulation of the business base class interface and to simplify the code. The permission module involves business classes such as user management, organization management, role management, menu management, function management, audit log and login log. If these classes inherit from BaseApi, they get the relevant interfaces, as shown below. The BaseApi JS class has the conventional add, delete, modify and query interfaces, as shown below (a rough sketch of such a class is given at the end of this post).

3. Page query and data display in the Vue + Element front-end framework

The main frame interface is generated dynamically from the menus configured in the back office: the menu is on the left, and the navigation bar and content area are at the top of the right side. The development of the system's main interface is fairly standard: the system menu is placed on the left side of the system and the list content in the middle area on the right. However, when there are many menus, the menu needs to be divided into several levels. A custom menu component list is placed on the left so that many menu functions can fit through collapsing and expanding the tree list. In the ABP + Vue + Element rapid development framework, the front-end (BS) menus and the (CS) client menus are maintained separately. We maintain the menu content in the back-office permission module and assign it to users through the corresponding roles. After users log in to the system, the menus are loaded and displayed dynamically, and the menu configuration drives the routing to the corresponding pages.

The list interface of menu resource management is as follows: The user list includes paged query and list display, and buttons can be used to add, edit and view user records, or to reset passwords for specified users. For example, for the menu management list, a form is defined on the front end for query processing; it can query by display name and creation time, as shown in the following code. Earlier we introduced the encapsulation class for front-end API calls, with the following structure. We then need to import the corresponding menu API class into the front-end page, which can be referenced as follows. We define the data attributes in the JS of the page module, including the list used to carry the data and pageinfo, as shown in the following code. Based on the conditions entered on the page and the paging information stored in data, we can request data from the server side: the conditions above are constructed, and then the MenuApi class is called directly to obtain the list.
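A rough sketch of what such a BaseApi ES6 class might look like; the request helper, the URL convention and everything except the SkipCount/MaxResultCount paging parameters are assumptions made for illustration, not the framework's actual code:

// hypothetical HTTP helper; in a real project this would wrap axios or fetch
import request from '@/utils/request'

class BaseApi {
  constructor(moduleName) {
    this.baseUrl = `/api/services/app/${moduleName}`
  }

  // paged query: SkipCount / MaxResultCount mirror the ABP GetAll parameters
  getAll(params) {
    return request({ url: `${this.baseUrl}/GetAll`, method: 'get', params })
  }

  get(id) { return request({ url: `${this.baseUrl}/Get`, method: 'get', params: { id } }) }
  create(data) { return request({ url: `${this.baseUrl}/Create`, method: 'post', data }) }
  update(data) { return request({ url: `${this.baseUrl}/Update`, method: 'put', data }) }
  delete(id) { return request({ url: `${this.baseUrl}/Delete`, method: 'delete', params: { id } }) }
}

// e.g. a menu API built on the base class
const menuApi = new BaseApi('Menu')
menuApi.getAll({ Name: 'system', SkipCount: 0, MaxResultCount: 20 })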
In order to facilitate readers’ understanding, I list the links of the previous essays for reference:
OPCFW_CODE
yahoo error 999? Why everytime I try to answer a question confidently for yahoo answer, yahoo will give me a wet blanket by giving error 999? I ever try to answer another question that i'm not so sure after i encounter error 999, it went through. When i go back to the 1st question again after answer the second question it give me error 999 again... it seems yahoo use error 999 as indication of error that yahoo not sure what is the problem, just something is wrong. Please note I received error 999 when i answer first question, before i decide to answer the other question successfully. When i go back to the first question it give me error 999 again. so i do not exceed my limit. thks... read before answer... sigh... - Anonymous1 decade agoFavorite Answer it has nothing to do with the limit of questions / aswers, it has to do with what you are typing. Error 999 comes from higher up on the Yahoo ladder then answers .. much higher up, and answers being a mere little section of yahoo, has inherited this error .. this will never be fixed, i do not even think yahoo can afford to have it finetuned. Error 999 IS a bann. The most common reason for receiving Yahoo Error 999 is due to some sort of bandwidth limiting system that Yahoo has put in place on their servers. Once you have exceeded your allotted bandwidth for a specific period of time Yahoo gives you this Error 999 message and doesn't allow you to access the service. and yes their is a reason, and yes yahoo are very aware of it Error 999 is a code yahoo has in place, a censoring code. A code to stop whatever you are browsing / posting . Yahoo / and answers being part of Yahoo, use error 999 as a censoring tool. you are placing something in your answer that Yahoo does not want others to see, if a link is not yahoo related, and it is posted alot, it automatically goes on a bann list, yahoo have this list of URLS and if posted too much that list automatically banns those URLS. Yahoo can remove the links from the list so they cannot b banned but i do not see them doing that in the near future If you are using a helpful link in your answer you want to share with people that makes their cyber life easier, remove the link and repost, it will go through ..continue answering questions this way till the bann is lifted ... the bann will last about 48 hours, then be lifted , and keep returning if you keep posting the same link. or remove the http:// from the link and just post with www. You can do a few things if you are impatient ( re cleanup computer, dump temp files etc) but this will not help error 999 the best option if you have a dynamic IP is to do a full "flick of the switch", and when you power back up you will come back with a new IP, ( with some turn off comp for 4 hours), but usually you get a new IP when you power up straight after the shutdown, you can continue posting the link under the new IP, this seems to be the best option. Below are 3 answers from people that eliminate the problemSource(s): all info from the freecreed site on profile.This helpful link has been banned from being posted on answers - error 999 .. and yes error 999 IS a bann www.freecreed.com - 4 years ago Glad I found this page tonight, because I was trying to post in one Yahoo Group a number of times today, and no go, On two occasions after sending detailed notes, I received the generic auto reply with Yahoo's help desk suggestions and recommendations to contact my ISP. 
I tried whatever fixes I could that I had read elsewhere, such as deleting temporary files on my browser, trying different browsers, and shutting down and booting up again. To no avail. This obviously seems like a major bug. I also notice that the default page it goes back to isn't my normal Yahoo Groups page, so I suspect they did some changes and something backfired. Hope they address it soon! - 4 years ago I am trying to check my EmialAddress but error 999 want let me check it, this is a problem that i have every time i get on a computer to check my EmialAddress. So could yall please fix it for me so I want keep having this problem when I get on a computer to check my EmailAddress. thank you very much. - 4 years ago I'm not trying to answer questions. I'm trying to retrieve my conversation history. I also get "can't process, try back later" I don't even get an I'm sorry. - How do you think about the answers? You can sign in to vote the answer. - Anonymous1 decade ago As you know, until you reach level 5 you are limited on the number of questions you can answer each day. Are you sure you didn't hit your limit? Sometimes they tell you "Sorry, you hit your limit" but sometimes you get that annoying "error 999" message.
OPCFW_CODE
format currency in PHP and JavaScript I'm doing some calculations on both client and server, and I've found a difference in the final result. What am I doing wrong and what would be the correct way for obtaining a 2 decimal float for representing currency. Consider the following code (final number without format is 1,785): JS var sum = parseFloat(8.50); var tax = parseFloat(21.00); var total = parseFloat(sum * (tax / 100)); var test = total.toFixed(2); console.log(test); PHP $sum = (float)"8.50"; $tax = (float)"21.00"; $total = (float)($sum * ($tax / 100)); $test = number_format($total, 2, ".", ""); echo $test; In JS I get 1.78 and in PHP 1.79 you should make the title a bit more clear. format currency in PHP and JavaScript suggests that you want a method which works with currencies, but in question you ask why different parsing/number formatting methods return different results. consider changing the title to something like why does different PHP and JS format numbers differently if your question is only working with currencies/formatting currencies, @HorusKol gave you the perfect answer. JS var sum = parseFloat(8.50); var tax = parseFloat(21.00); var total = Math.round(sum*tax) / 100; PHP $sum = (float)"8.50"; $tax = (float)"21.00"; $total = round($sum*$tax/100, 2); The mathematically correct rounding for 1.785 is 1.79 so the code above gives you what you want. its not mathematically correct rounding, mathematically correct is 1.78 Do your research sir! I know and just confirmed xD, 5 only updates the odd digits* on the left common dude we are working with floats here. The convention you share is rounding 1.25 to 1, in the first example. No need to argue. Just study your math. In this example it was the ceiling function. W/e learn also the notations believe me I've studied my fair share of maths and Computer Science already. well, ceiling and floor functions are only applicable to Real to Integers conversion. Rounding functions are more general, the take a float and return a float. in maths rounding function takes real and map to real, while ceil/floor takes Real and maps to Integer. Possible duplicate of You can use function RoundNum(num, length) { var number = Math.round(num * Math.pow(10, length)) / Math.pow(10, length); return number; } You should not to use floats for storing and calculating currency values - use integers, with a resolution of 1/100 of a cent (or penny or whatever). Some financial applications go further - 1/10 000. So, $1 is stored as 10 000 in your database. After you have calculated taxes and totals, and rounded the result, then you can convert into a dollar amount for presentation. var sum = 85000; // $8.50 var taxRate = 0.21; // 21% var tax = sum * taxRate; // $1.785 console.log(Math.round(tax / 100) / 100); // $1.79 var cents = tax / 100; // 178.5 cents var wholeCents = Math.round(cents); // 179 cents var dollars = cents / 100; // $1.79 This is not answering my question at all nice answer, I have seen this convention nearly in all the codebases that I have worked with. Seems like an industry wide convention. 
@MatíasCánepa I'm trying to save you the headache of imprecise floating point representation when trying to work with precise currency calculations That's because PHP's number_format is rounding the numbers, if you don't want that, consider this function: function numberFormatNotRound($n, $decs, $decPoint = '.', $thousandsSep = ',') { $decs ++; $n = number_format($n, $decs, $decPoint, $thousandsSep); $n = substr($n, 0, -1); return $n; } PHPFiddle: http://phpfiddle.org/lite/code/457c-00nv
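One way to see why the two platforms disagree is to print the unrounded product: 1.785 cannot be represented exactly as a binary double, so the value actually being rounded sits slightly above or below 1.785. A small sketch (the console output is deliberately not reproduced here):

var sum = 8.50;
var tax = 21.00;
var total = sum * (tax / 100);

// Show more digits than toFixed(2) does: the stored double is not exactly 1.785,
// which is why toFixed(2) and PHP's number_format can round to different sides.
console.log(total.toPrecision(20));
console.log(total.toFixed(2));

// Working in integer cents sidesteps the problem.
var cents = Math.round(sum * 100) * tax / 100; // 178.5, exactly representable
console.log(Math.round(cents) / 100);          // 1.79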
STACK_EXCHANGE
How can I improve my iOS MREs? I've been a member for quite some time now, mostly in Java programming, trying to move into iOS programming now. When asking questions in Java, I exactly knew what was needed to count as an MRE, basically a main method with a way to isolate the problem, but when asking questions for iOS I always get confused as to what would be considered OK to post and what not, and what might just be noise into the question. I currently have a question for a game I'm making, so if any of you could look at the format of it: SpriteKit keep moving player in current direction while falling after touchesEnded and the previous question I asked: SpriteKit scrolling background image showing line between images, and tell me what can I improve in any future questions I might have. Another question I asked before that basically contained a ton of classes is this one, where I had to post a GitHub link to the Interface Builder files (.xib): How to show the standard number keyboard without a UITextField in Swift on viewDidLoad If my issues were about logic, non-UI related a playground might be enough, but when dealing with UI problems, how can I post those Interface Builder files or storyboards or how could I show it without linking to a GitHub repository with the MRE there? Or in the case of my game, are the assets needed? I'm just trying to improve how I ask questions in this new environment. No @SecurityHound I'm asking this question in order to improve my future questions, rep means nothing to me, but I'm curious about how to ask better questions when working in iOS projects. Actually I'm glad to have bounties when I really need an answer to any question I have but I'd like to help others to help me respecting their (and my) time I don't see any way to improve the two questions you have asked about. The newest question asked 13 hours ago, was only asked 13 hours ago, probably should give that one more time. Not looking to promote the questions here, genuinely trying to improve, as I find myself posting a ton of code every time I ask a question in iOS here, because I don't know what should I post especially when dealing with UIStoryboards, among other UI things For the current question, I would suggest putting the specific code for the jumping and left/right arrows first, then the full code. For the previous question, not really anything to say about it. I flagged it as "Not reproducible or caused by a typo" since its solution was unrelated to the code. The dot question looks perfect to me. Wrong meta? @Sinatr: The questions were posted on SO, where they're just as good a fit as on gamedev. I don't see why this would be asked on another meta. @BoltClock, OP wants to improve his experience and I believe gamedev is the place to ask questions about games. Perhaps I should write comments more clearly, sorry. I'm not too familiar with gamedev community @Sinatr however while my latest questions were more related to gaming, my question goes beyond that, I've find myself posting a wall of code (like the latest link I posted), when trying to show a MRE, which to me in my experience with Java is not ideal, everything was contained in some methods and a single class but what about when you have UI issues and you're working with storyboards and xibs? Then should I post XML? Link to a GitHub repo? I don't find an easy way to post proper MREs for iOS programming No, you shouldn't post a link to a GitHub repo. Your question needs to be self contained. 
And that is where my question comes in, how to show UI issues where you need to show constraints or that are related to UI + logic but the UI isn't created programmatically but with storyboards. You can't self contain them completely @mason Sure you can. It's all in the help center documentation. Include the minimal code amount of necessary to demonstrate the issue. No more, no less. If you've done that properly, you've got a proper MRE. If there's something you left out that was necessary, then it's not Complete. If there's something you included that wasn't necessary, then it's not Minimal. @Sinatr Just because something is on-topic at another SE site doesn't make it off-topic here. If it fits within Stack Overflow's scope, then it's on topic and fine to ask about here. See this meta. Yes, in Java that was easy as if working with desktop apps the UI was built programmatically, with Swift there are 2 ways: programmatically which I rarely use and storyboards which are huge and weird XMLs that when read make no sense until put into the Xcode IDE and seen as storyboards, that's basically where my question lies, I've not seen people posting the XML for their storyboards, and when they post screenshots of their UIView then they don't usually receive answers or are too vague because people can't reproduce the problem without having a look at all the constraints Imagine showing you understand the differences between Java and iOS, the whole idea of MREs, and your decision to ask on SO, by stating it all upfront... and still having to reply to multiple people clarifying things that any iOS developer would've understood and not questioned. Tbh I've checked most of the profiles of people commenting here @BoltClock and none of them show any iOS question / answer, and if I had to guess, I bet 90% of the people who have upvoted either the comments or this question are mostly iOS developers who have encountered themselves in a similar spot than me. I wish there was a way in SO to probably upload those kind of files so we don't depend on external sites just to show them off, I mean, JS devs have fiddles / snippets, Java is mostly backend nowadays or homework tasks, android has their XML syntax clear but iOS... is a pain for SO's MREs @Frakcool: Yeah, and I wish people were more understanding of that - that some techs are incredibly non-conducive to SO's rules and vice versa, and question askers are not to blame. I'm not sure what your overall impression of the SO community is, but I'm sure you know it doesn't exactly have the greatest reputation (heh) among developers. That said you seem like a pretty patient person, which is commendable. I've just been around much longer so I'm starting to show my exasperated-boomer side (I'm actually under 30). @BoltClock well from what I've seen showing anger or being rude when asking for recommendations just gives you the opposite from what you want. And yeah I sure know about the reputation of SO, I experienced it when I wasn't even a jr programmer and was trying to get some concepts in my head and had some rude comments, learnt to deal with them but I started being more active 5-6 years ago with Swing framework that I love but not longer used outside of academics. And hey! 
I'm also under 30, 27 to be precise haha But you're right, maybe we can't apply the same MRE rules to all languages and I'll have to learn to live with posting walls of code when asking iOS questions (or answers) and probably those will be longer than the Java ones I'm used to, at least until SO does something to improve the experience for those other techs Is there anything to be learnt from current well received related iOS questions that you can use as a guide? If nothing else, it can lend further credence to a hypothesis as to whether SO tooling is sufficient to provide the standard you have delivered in java? The latter part it sounds like you kinda already have a strong feel for based on your SO iOS experience. Perhaps start a community wiki on what an MRE would look like for this; bit like this: https://meta.stackoverflow.com/questions/405791/what-should-a-minimal-reproducible-example-include-for-problems-with-automati#comment829803_405791 I wonder what iOS has to do with it? You seem to have moved to asking more UI related questions, and UI might require a higher amount of example code, but is this specific for iOS or would it be just the same for all platforms? As for the MRE, I'm not an iOS programmer but I wonder, are all private variables of the GameScene needed for the example? Do you really need a background and a floor? Is the name property necessary (maybe there could be only one entity)? Are two directions needed? One may be enough to show the effect. Is the up arrow needed for the example? These are all questions one might ask oneself when preparing a MRE. It really is not a simple task to create a truly minimal MRE. @QHarr that sounds like an amazing idea! Thanks @Trilarion UI with iOS is different than UI with Android or for the web, the way you build UI with either of them is different, so, this question is related to iOS UI building and how to post the MREs for when we have those kind of issues. As for the properties, perhaps the background isn't needed, but for the controls I thought left and right for testing was needed, otherwise you'd have to relauch the app over and over again as the node would be out of the screen (and removed from the calculations by the OS), and the up arrow is needed for jumping in this case where the problem was present @Frakcool If the background is not needed, it should be removed. Relaunching the app might not be such a big problem, the MRE doesn't need to be comfortable, just working. I even wonder if an MRE here would need an UI at all. The problem seems to be with controls and behavior, not layout. A simulation of the control part alone showing a certain behavior and the description of the desired output might be enough; output could maybe also be given as debug logging output. In general, the MRE should be sufficient so that an expert in the field can recreate the problem and as succinct as possible. @Trilarion Usually that's the case, but in a previous (now deleted) question, when I didn't include UI components I was told that they were lacking, so, as I mentioned, with Java Swing I was able to reproduce problems because the UI is built with code, not an interface builder, I'm on the phone, later I'll edit the question removing the background. 
You may be right that the UI might not be needed in this case; perhaps just a video showing the issue plus the controls, and only if asked for should the UI elements be posted.

So, after reading the last few comments, here are some things that should improve iOS MREs. I'll eventually create a Community Wiki as suggested by QHarr in the comments above. There are two types of questions we can ask when dealing with iOS issues.

UI-related questions: these are the questions where the layout is the problem. In such cases we need to provide screenshots of the interface builder for the culprit UIView along with its constraints, trying to isolate the issue with as few components as possible.

Behavior-related questions: these are the questions where we're only changing values in the data. If our code modifies the UI (such as in a game) based on a state, we should post a video or GIF showing the error and the code that is creating the problem, always trying to isolate the problem in a brand-new project in order to reduce complexity.

With the above recommendations, our questions should be self-contained.
This blog post comes to you courtesy of Mike Gualtieri at Forrester, who has demonstrated that he's either completely clueless or possibly generating analysis at random. Who knows. You can read the offending piece here, but I don't recommend it if you're a developer, programmer, QA, or QC, are familiar with any of these roles, or don't have a strong stomach for ignorance.

In this piece, Mike advises people to fire their QA teams and make developers directly responsible for code quality, arguing that it will improve code quality. Fun fact: it absolutely will not. If it did, that would mean the QA team was not doing their jobs. And that your organization failed to implement something it should have already.

Let's talk about some metrics – much more important metrics – that Mike left out of his article. Bugs get fixed faster, okay, and maybe there are a few fewer of them. Now what about feature requests? What about the next version? What about improving existing features? All of these critical development functions just got more or less thrown out. What about code produced per programmer per day? This is a metric people are fanatic about (despite it arguably being the worst metric), and it most assuredly dropped dramatically.

QA's job is to serve two key functions. One, they test the product thoroughly to ensure a minimum of bugs and problems. Two, they serve as an interface between developers and customers, so that developers can focus on their work. A third function that some shops implement is to give QA prioritization authority; that is, QA decides which bugs are break-fix, which are showstoppers, and which are just feature requests. Again: SOME businesses do this. Some let programmers decide which bugs need priority. Most have a dedicated resource or resources for prioritizing bugs in some form, though. And you want to dump ALL of this onto programmers? The only word for you is "completely clueless."

First and foremost, talk to a programmer sometime. Ask them what they want most in their office environment. Answer number one? Fewer distractions. Programmers do not want to be interrupted constantly by meetings, phone calls, and other minutiae; it disrupts their workflow and concentration. Writing code is not just "bashing on a keyboard for hours." It requires focus to recall which function does what, where you are exactly in the execution stack, and so on. The ability to handle interruptions is not a measure of a programmer's skill, either. Code is complex. Nobody can keep everything in their head while handling an irate customer on the phone.

UPDATE: It was pointed out to me that I neglected to mention that QA/QC is ALSO responsible for creating test cases, iterative testing routines, and so on based on customer feedback. These are some of the most time-consuming tasks in any QA/QC department, by far.

Second, if programmers are not already directly responsible for the quality of their code, that will never be the fault of having a QA department. It is a management failure on your part, period. Developers should and must always be directly responsible for the quality of their output, not their quantity. (Again, this goes back to my arguments against using "lines per day" as a performance metric.) I won't waste time on the dozens of ways to implement it; suffice it to say that if you haven't implemented it, then the blame is at management's feet.

Third, it creates a culture of fear. This is absolutely the worst, most incompetent and wrong-headed method of management out there.
The amount of popularity it is gaining is just horrifying. People who are afraid for their jobs may work harder, but you can be damn sure they are not happy. And they will look for the nearest exit as fast as they can. Churn in developers for a complex application doing anything is a bad thing. Not to mention that study after study after study shows that unhappy employees produce less, are less healthy, are more likely to take action against their employer, and will not stick around. So let’s summarize: Mike’s advice is to A) increase distractions for programmers B) implement something that should already be implemented C) create a culture of fear in your workplace. And that by doing these three things you will magically be endowed with better quality code. Here’s a hint for you: absolutely not. (Okay, so it’s not a hint as much as a statement of fact.) Doesn’t matter what you’re doing. Fix your management structure to make developers responsible for code quality instead. So what’s MY advice? A) REDUCE distractions for programmers; do you REALLY need daily status meetings at 10:30AM? Reschedule. B) Implement responsibility for code quality with your QA/QC department. C) Reward programmers genuinely for accepting responsibility and improving their code quality. Recognize the people who work to make your product or business possible. D) If you’re using “lines per day” throw it away. Rewarding 50 buggy lines over 20 quality lines; whyyyyyyyyyyy?! E) Encourage risk-taking. Innovation depends on, starts with, and ends with your developers. If someone suggests a new feature that’s unrequested, don’t dismiss it out of hand. You may have a new product on your hands. I just cannot believe that anybody would think this is legitimate much less trustworthy advice. Does nobody bother to talk to the people who it affects? Or do they just talk to management about their “predicted cost savings”? And for the record; yes I do programming sometimes. But I actually consulted with several people who program at major companies for a living. While most of them have a love/hate relationship with QA/QC, they agreed with me. And all of them said that if they were subjected to the culture of fear Forrester encourages, they’d immediately start looking for work elsewhere.
OperationsExtensions.ApplyAsync(ISnapshotOperations, Guid, String, SnapshotApplyMode, CancellationToken) Method

Some information relates to prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.

Submit an operation to apply a snapshot to the current subscription. For each snapshot, only subscriptions included in the applyScope of Snapshot - Take can apply it.

The snapshot interfaces are for users to back up and restore their face data from one face subscription to another, inside the same region or across regions. The workflow contains two phases: the user first calls Snapshot - Take to create a copy of the source object and store it as a snapshot, then calls Snapshot - Apply to paste the snapshot into the target subscription. The snapshots are stored in a centralized location (per Azure instance), so that they can be applied across accounts and regions.

Applying a snapshot is an asynchronous operation. An operation id can be obtained from the "Operation-Location" field in the response header, to be used in OperationStatus - Get for tracking the progress of applying the snapshot. The target object id will be included in the "resourceLocation" field of the OperationStatus - Get response when the operation status is "succeeded".

Snapshot applying time depends on the number of person and face entries in the snapshot object. It could be seconds, or up to 1 hour for 1,000,000 persons with multiple faces.

Snapshots automatically expire and are cleaned up 48 hours after they are created by Snapshot - Take, so the target subscription is required to apply the snapshot within 48 hours of its creation.

Applying a snapshot will not block other operations against the target object; however, it is not recommended, because correctness cannot be guaranteed while the snapshot is being applied. After snapshot applying is completed, all operations towards the target object work as normal. A snapshot also includes the training results of the source object, which means the target subscription the snapshot is applied to does not need to re-train the target object before calling Identify/FindSimilar.

One snapshot can be applied multiple times in parallel, while currently only the CreateNew apply mode is supported, which means the apply operation will fail if the target subscription already contains an object of the same type using the same objectId. Users can specify the "objectId" in the request body to avoid such conflicts.

- Free-tier subscription quota: 100 apply operations per month.
- S0-tier subscription quota: 100 apply operations per day.
C#:
public static System.Threading.Tasks.Task<Microsoft.Azure.CognitiveServices.Vision.Face.Models.SnapshotApplyHeaders> ApplyAsync (this Microsoft.Azure.CognitiveServices.Vision.Face.ISnapshotOperations operations, Guid snapshotId, string objectId, Microsoft.Azure.CognitiveServices.Vision.Face.Models.SnapshotApplyMode mode = Microsoft.Azure.CognitiveServices.Vision.Face.Models.SnapshotApplyMode.CreateNew, System.Threading.CancellationToken cancellationToken = default);

F#:
static member ApplyAsync : Microsoft.Azure.CognitiveServices.Vision.Face.ISnapshotOperations * Guid * string * Microsoft.Azure.CognitiveServices.Vision.Face.Models.SnapshotApplyMode * System.Threading.CancellationToken -> System.Threading.Tasks.Task<Microsoft.Azure.CognitiveServices.Vision.Face.Models.SnapshotApplyHeaders>

Visual Basic:
<Extension()> Public Function ApplyAsync (operations As ISnapshotOperations, snapshotId As Guid, objectId As String, Optional mode As SnapshotApplyMode = Microsoft.Azure.CognitiveServices.Vision.Face.Models.SnapshotApplyMode.CreateNew, Optional cancellationToken As CancellationToken = Nothing) As Task(Of SnapshotApplyHeaders)

Parameters:
- operations: The operations group for this extension method.
- snapshotId: Id referencing a particular snapshot.
- objectId: User-specified target object id to be created from the snapshot.
- mode: Snapshot applying mode. Currently only CreateNew is supported, which means the apply operation will fail if the target subscription already contains an object of the same type using the same objectId. Users can specify the "objectId" in the request body to avoid such conflicts. Possible values include: 'CreateNew'.
- cancellationToken: The cancellation token.
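The workflow described above (submit the apply, read the "Operation-Location" header, poll OperationStatus - Get) can also be exercised directly against the REST surface. The sketch below is hypothetical and not taken from this reference page: the endpoint, route, payload fields, and key are placeholders/assumptions inferred from the description above, so verify them against the Face API REST documentation for your service version.

# Hypothetical sketch of the asynchronous snapshot-apply flow; route and payload are assumptions.
import time
import requests

ENDPOINT = "https://<your-region>.api.cognitive.microsoft.com"   # placeholder
KEY = "<target-subscription-key>"                                 # placeholder
SNAPSHOT_ID = "<snapshot-guid>"                                    # placeholder

headers = {"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"}

# Submit the apply operation; it is asynchronous and answers with an
# "Operation-Location" response header pointing at the operation status.
resp = requests.post(
    f"{ENDPOINT}/face/v1.0/snapshots/{SNAPSHOT_ID}/apply",        # assumed route
    headers=headers,
    json={"objectId": "restored-object-id", "mode": "CreateNew"}, # assumed payload
)
resp.raise_for_status()

operation_url = resp.headers["Operation-Location"]
if not operation_url.startswith("http"):                          # header may be a relative path
    operation_url = ENDPOINT + operation_url

# Poll OperationStatus - Get until the operation finishes; on success the
# "resourceLocation" field holds the target object location.
while True:
    status = requests.get(operation_url, headers=headers).json()
    if status.get("status") in ("succeeded", "failed"):
        break
    time.sleep(5)

print(status.get("status"), status.get("resourceLocation"))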
Basic administration is a skill that should be mastered if you are to have a happy Slice. As with all aspects of administering a server, try to learn the basics before using web based applications - there may come the day when the web server is not working. Memory Management with Free: Monitoring of memory gives an easy and instant overview of the state of your Slice. Use 'free' to give you basic details of RAM usage. System monitoring with top: Using top gives a real-time overview of system processes and shows precisely what is using system resources. Solve Slice or Website 'Down' Issues Quickly: When your website is down or your slice is unreachable, you can run through a handful of routine tests to identify which of the most common causes could lie behind the problem. Using iotop to check I/O and swap: The iotop utility provides an easy-to-use interface for monitoring swap and disk I/O on a per-process basis. Using dstat to check I/O and swap: For a flexible reporting tool that can yield information ranging from CPU use to the top I/O-consuming process look no further than dstat. Using dstat with scripts and external modules: Here we look at the basic scripting options for dstat as well as an overview of its external modules. Understanding logrotate: It's no fun when log files grow out of control. In this two-part series, learn how to use logrotate to keep those logs in check. Linux file permissions: Linux file permission concepts: Linux file permissions are strange and wondrous things. Start down the path of understanding by looking at the core concepts behind them before moving on to practical applications. Checking Linux file permissions with ls: Once you understand Linux file permissions, the next step on the road to enlightenment is learning how to check the permissions for a file or directory. Using chmod, part 1 - symbolic mode: We've done the thinking and the seeing, now to start the doing. Continuing our series on file permissions, we look at using the chmod command. Using chmod, part 2 - octal mode: Now we look at the other way chmod can be used - with numbers. This is the more commonly-used format, but also the least user-friendly. Umask and unusual file permissions and types: In this last entry in our series on Linux file permissions we look at the umask and some more advanced file permissions. We also throw in some discussion of other file types you may see in a directory listing. Cron and task scheduling: Basic Linux task scheduling with cron: Sometimes you want to run commands nightly or weekly. You could just log in and run them yourself, but scheduling those tasks with cron is less hassle in the long run. Fine-grained task scheduling with cron.d: If you need more control over scheduling jobs than using the standard intervals like hourly and monthly, you can put a file in cron.d and tell it exactly how often to run. Multi-user task scheduling with crontab: The crontab works much like cron.d, just with its own command for access and more flexibility when you want multiple users setting their own schedules. Using ServerDensity to monitor a slice: You have a number of options for monitoring your slice. Commercial services like ServerDensity can be easier to set up and maintain than free monitoring applications. Installing munin overview: Anticipating problems and resource shortages on a slice can be more valuable than fixing them after they've happened. A monitoring tool like munin lets you watch your slice's resource use over time. 
The graphs will highlight issues before they cause downtime or bandwidth quota overages. Installing additional munin nodes overview: Following up on the article about installing a munin master slice, if you want to monitor additional slices you'll need to install a munin node service on each. Enabling munin node plug-ins overview: Munin uses plug-ins to determine what data is gathered and reported. It includes several plug-ins for the types of data most people would be interested in, but not all of those plug-ins are enabled on a fresh installation. Capturing packets with tcpdump: Tcpdump is a powerful network debugging tool which can be used for intercepting and displaying packets on a network interface. Resize and backup optimization: Speed up resizes - Part 1: This guide will help you shorten slice resize times, slice moves, and slice backup times. Speed up resizes - Part 2: In this second part we examine another use case and look at the resize process in general. How to use Rescue Mode: Rescue Mode grants you full access to a non-bootable slice's filesystem. You can use it to modify problem configuration files or to use scp to copy data from the slice to a remote location. Secure FTP Transfers: Using FTP to transfer files to and from your Slice can cause issues with security - let's look at SFTP as a safe and secure method. Introduction to dig: Creating new DNS records is one thing, but what if you want to check them before they are fully propagated? Using dig with external nameservers: Checking your DNS on external servers after your changes have propagated. Getting more out of dig: Looking at other information dig can return about a domain. Network Time Protocol Using NTP to sync time: Keeping your system's date and time accurate is easy to do using NTP. Introducing iptables part 1: This article provides an overview of how to understand the Linux kernel firewall for ipv4 using iptables and the Filter table. It is intended for beginners to intermediate linux users and provides an insight on basic configuration concepts. Introducing iptables part 2: This article continues our introduction to iptables, focusing on syntax, adding and deleting rules. Introducing iptables part 3: The third part of our introduction to iptables wraps things up by looking at launch at startup configurations and useful examples. Linux Server Migration: Migrating a Linux server from the command line - preparing the servers: Sometimes you need to move everything from one server to another. It can take a while for all those files to transfer, but in this article we try to make the rest of the process as painless as possible. Migrating a Linux server from the command line - running the sync: With our preparations done it's time to start copying files between your Linux servers. Linux migration tips and tricks: In this article we give some advice on how to make your Linux server migration safer and discuss migrating on a per-package basis. Downloading and using kernel source code (220.127.116.11 and newer) overview: If you know you need to get your kernel source (or at least its headers) for a kernel version 18.104.22.168 and newer, you can find instructions to do so here. Using pv-grub to run custom kernels overview: The pv-grub kernel option allows your slice to boot from your own kernel instead of one of ours. Please feel free to request articles or comment with any suggestions or ideas of your own.
Note: A couple of years ago, Engine Yard gave Mitchell Hashimoto an OSS grant to work on Vagrant. A few weeks ago, Vagrant 1.0 came out and Mitchell wrote a blog post about his journey and how he got to 1.0. With his permission, we’re reposting it here. I just released Vagrant 1.0, exactly two years since I showed Vagrant to the world for the first time. I’ve made an official announcement but I think it is only appropriate to share a deeper, more personal story of the road travelled to reach this milestone, and what I’ve learned throughout the process. My goal in sharing is to give others an inside look at the guts of a successful open source project and perhaps offer a different point of view of the open source world. Today is one of the proudest days of my life. I’ve released the first stable version of Vagrant, the software project I started with John Bender over two years ago while I was still in college. Vagrant is currently used by Mozilla, RackSpace, LivingSocial, Shopify, OpenStack, EventBrite, and many, many more. Both the project and the ideas behind the project have been far more successful than I could’ve ever dreamed of. But the road to this point is an interesting one, filled with highs and lows, and I’d like to share it with you. The original idea for Vagrant came in 2009, when both John Bender and I were employed by Ruby development shops where it was routine to see a new project every 6 to 8 weeks. It was becoming increasingly frustrating to setup our development environments for every new project, which would be _slightly _different from previous projects. And it was absolutely infuriating when we had to go back to do maintenance on an old project since it was always a nightmare to get the environment back to that state. In the winter of 2009, I was frustrated enough to try to come up with a solution to this problem. At this point in my life (and even still today), I viewed open source very romantically. People like Yehuda Katz and John Resig were my idols, because they did their work in the open passionately and successfully. I wanted nothing more than to find my own “jQuery” or “Ruby on Rails.” Basically: I was eager to start something which I thought would change the landscape of some field, just as jQuery and Rails did theirs. I don’t actually believe that Vagrant is as influential as jQuery or Ruby on Rails, but this gives you an idea of how I was thinking at the time. In Janurary, 2010, I approached John Bender with some initial thoughts for what would eventually become Vagrant. John immediately saw value in the idea and offered to join in on the project, which I gladly accepted. We got started working immediately. It was January 21, 2010. We both worked furiously, and within a week we had a working prototype that could bring up virtual machines we could SSH into. Below, you can see a screenshot from January 31, 2010, about a week after we started hacking, showing a functional version. Yes, Vagrant started life named “hobo.” The initial development went fairly smoothly. Much of the essence of what Vagrant is today was molded during those initial weeks, and I’m happy to see that the ideas John and I had have been validated worldwide. Some are starting to show their age, but the fact that so many early decisions have lasted this long I think shows we had the right idea. Fun fact: John came up with the entire idea of the “box” system about a week before the public release. Prior to that, I had planned to release Vagrant 0.1 only supporting a single Ubuntu image. 
The “box” system has been one of the most critical pieces in making Vagrant as successful as it is today, so my hat is off to John here. While Vagrant was ready for release around mid-February, I was concerned that an open source project coming from two unknown/unproven developers would hurt our initial adoption. I decided that to be as successful as we could, we’d have to have amazing documentation and a mascot. Yes, a mascot was_critical_. I don’t know why, but I just find projects that have mascots to be more trustworthy. So, Vince was born (shown below), and I spent a week only working on documentation. I can’t stress how important this week was. Vagrant 0.1.0 was released on March 7, 2010 with a mostly positive response. There were a handful of individuals who immediately grasped onto the idea and began using Vagrant with their projects right away. Most of these individuals still use it today, and deserve recognition for being so brave to adopt a new technology so early. Some of these early adopters even border on fanatical, creating things like Vagrant pins they distribute anywhere they go. I love this: The first few months of Vagrant were reasonably uneventful. Besides the initial rush of early adopters, growth mostly stagnated, and each release of Vagrant was getting averaging around 100 downloads. While I loved the project and still whole-heartedly believed in it, seeing very little growth was hugely discouraging. At this point, I actually started to view Vagrant as a potential “failure.” Despite this, John and I and our respective companies used Vagrant every day and saw the value first hand. And I was still passionate about the project, so I decided to just keep going, believing that if this were truly a good idea, something good would happen. And something good did happen, something great: Carl Lerche discovered Vagrant. At the time, Carl Lerche worked at Engine Yard and was a Ruby on Rails core developer. He also specifically pair programmed withYehuda Katz. Carl popped in and out of the Vagrant IRC channel for a few weeks, asking for help and offering ideas here and there, and even contributed a few times. After a few weeks, he private messaged me. I don’t remember the exact words he used, but it was something along the lines of “How would you feel if Engine Yard sponsored Vagrant?” I vividly remember shaking with excitement at this point, despite the uncertainty. Engine Yard had long been known for being huge supporters of open source in the Ruby community, and backed important projects such as Ruby on Rails, JRuby, Rubinius, and a few more. I saw this potential sponsorship opportunity as huge idea validation as well as an outlet to better spread the word about Vagrant. On October 14, 2010, Engine Yard announced that I had joined their OSS grant program. The specifics of the deal are private, but the basic idea is that they would help me in any way possible, as long as it was reasonable. This was a really exciting day because it was the first time with Vagrant that I could say “Mom and Dad, look! See! I told you I’m not just playing on my computer.” At this point I was still in college, as well, just to put things in perspective. The Engine Yard sponsorship changed everything. Just having Engine Yard supporting me spurred a huge interest in Vagrant, and blasted Vagrant into “small-time popularity:” The personal horsepower I put behind Vagrant went up to over 9000 at this point. Since I was still in college, I was spending 8 or more hours per day on Vagrant. 
At the same time, I was sending speaking proposals_everywhere_ I could to educate people about Vagrant and try to gain some more interest in the project. I spoke at a handful of conferences, pushed many releases, and by March, 2011, the average page views per day on vagrantup.com had gone from around 200 to over 500. Success! Unfortunately, this success came at a price: burnout. By March, although I refused to admit it for many more months to come, I was completely and utterly burnt out. This is clear to see from the release dates of various Vagrant versions: It took 6 months to release Vagrant 0.7.0 from Vagrant 0.6.0 (although there were various bug fix releases between). This was the lowest point in my personal involvement with the project. During these times, I would let bugs pile up to around 20 or 30 before triaging them all in one go. I’m not proud of this, but it was an important part of the history of Vagrant. While I was burnt out and busy feeling sorry for myself, Vagrant only grew more and more popular. A great community built around Vagrant, a healthy set of plugins, and I gained a small fan club. It is the community that brought me back. I simply started getting more and more tweets, emails, etc. telling me how Vagrant had changed people’s lives, how they couldn’t imagine working before Vagrant, etc. I was flattered, and the praise was highly motivational. By the fall of 2011, I was working on Vagrant again, though not as much since I was out of college at that point and had a full time job. In October, 2011, I travelled halfway around the world to Sweden for DevOpsDays, where I gave a talk on DevOps. It was here that something big happened: About 15 seconds into the talk, I introduced myself as the creator of Vagrant, in case anyone would recognize me that way. I thought maybe a handful would care, but instead the entire room, filled with around 200 people, erupted in applause, which you can hear in the video. This single act of kindness, again by the community, showed me just how much people cared about what I was doing, and motivated me even further. I consider this an extremely important moment in Vagrant history, and would be the first of many amazing events I’d witness in the following months. Vagrant has been a full time job for the past 2 years. I work 8 hours at work not on Vagrant, and then spend at least 4 hours at home working on Vagrant, and typically also work on it on the weekends. I’m incredibly proud to finally ship a 1.0, and I’m proud of what the project has taught me and the community that has grown around it. I hope my story shows how much work, luck, and passion has gone into Vagrant. If I could go back in time, I wouldn’t change a thing, since as they say, “it’s all about the journey!” I’m looking forward to see where this journey continues to take me. And finally, last but not least, thank you so, so much to the Vagrant community and early supporters. Patrick Debois, Christian Trabold, Kieran Pilkington, and so, so many more: You make it a joy for me to work on Vagrant every day. Open source is all about the community. And, of course, thank you to Engine Yard for all their support, which continues to be critical in educating the world about Vagrant.
I backed the Pine A64. I just wanted a 64-bit SBC that's just about as cheap as a RPi with better specs. I'm betting on Tamil getting in good shape and subsequently open-sourced in the near-future for the board to be useful. A low-power, ARM-based media server backend is something I want for various reasons, and I think the Pine A64+ may be able to pull it off. Tbh, I would've preferred backing the LimeSDR since it's a lot more meaningful. I really wanna back it now that I know of it via these forums, but it's not in the budget. Discovered Ian's site last December, and I don't tie my shoes the same way anymore. Changed the way I think about shoelaces and lacing forever, so much so that none of my shoes are laced as they come. He even has an iOS app, but he said he won't update it anymore, nor will he work on an Android app/port; the website's light enough. This has some great specifications compared to other smart-watches Atari classics to play on the go Terraria game included MicroSD slot, for storing media including games PixelFurnace launcher installed/installable on the MicroSD - Which can run on Linux, Win, MacOS Some talk of a possible Steam edition too, which could be interesting Hmm, two months for protyping seems quite short, even if this is a chinese knockoff or something. But this wouldn't be the first kickstarter project with a wildly optimisic schedule, so that's probably nothing against it. Well, I strayed from my hardline on not doing crowdfunding a little while ago. I funded this mostly because a my keycaps are slightly yellow, secondly since I bought a Portuguese Amiga I didn't have a US layout. I instead found an old AT keyboard that happened to have keycaps that work, but not quite the right shape. I backed some miniatures which came though, the first pebble and then the Time2 that got canceled when they were bought out by fitbit. I got 100% of my money back and I love my pebble time but the battery won't last forever and now I'm screwed on smart watches. I don't really want anything beyond telling time and showing me notifications, the iwatch and all the Android watches do way too much. I also backed the Ouya, man, that was a mistake. I have put 5 bucks in a few projects just because I believed in them but didn't have the money right then to back enough to get the thing. I'm lucky though, the only failure I backed that fell through I got my money back. EDIT: Oh yeah, and the Pandora and Pyra. Just as I wrote this I realized it is 'what your you backing, not backed. Oops. I came across the developer in the bridgesim forum, and figured that game was worth to reactivate my Kickstarter account after about 5 years. it kinda combines the asthetic of sunless sea with the gameplay of bridge simulators, so unlike most bridge sims, it does not take place aboard of a starship, but in a steampunk submarine instead. The campaign will end in 3 days. And for everyone that may only know artemis(or maybe not even that one) of that genre, but like the concept of a bridge simulator, that game and also the bridgesim forum might be worth a look to discover that niche genre a bit more. http://bridgesim.net I just backed Mr Biffo (Paul Rose from Digitiser on 4Tel)'s new youtube series: 3 days left and we're only 100 quid away from them having to do destruction derby for real! 
I've been following Biffo's digitiser2000 blog/review site/funny crap site since it launched (having missed out of bubblegun while that was a thing), and his Found Footage comedy youtube series proves he can put together youtube videos. Some of the humour in that was a bit beyond me if I'm honest, although I certainly enjoyed the songs, so I didn't pledge for that, but this sounds more up my street.
Finding Optimal New York Subway Stations For Gala Signatures A tutorial on how to explore and analyze data to solve a business problem. NOTE: This was written on my personal GitHub website on July 19, 2019. I'm republishing here, as this is still very useful for personal data science projects. I've stated before in a prior post: Not all business problems need machine learning. Some questions can be answered through exploratory data analysis or statistics. As a data scientist, your job is to determine when machine learning is applicable. This post will go over how to effectively analyze data. There are many different ways to analyze the data or solve a problem. My goal isn't to provide the correct way to do so. My goal is to show how I analyze data and how I solve data problems. Even though I'm a data scientist, I think of myself as a data analyst first, then data engineer second. This was an email we got from a fictional client. As we mentioned, we are interested in harnessing the power of data and analytics to optimize the effectiveness of our street team work, which is a significant portion of our fundraising efforts. WomenTechWomenYes (WTWY) has an annual gala at the beginning of the summer each year. As we are new and inclusive organization we try to do double duty with the gala both to fill our event space with individuals passionate about increasing the participation of women in technology, and to concurrently build awareness and reach. To this end we place street teams at entrances to subway stations. The street teams collect email addresses and those who sign up are sent free tickets to our gala. Where we’d like to solicit your engagement is to use MTA subway data, which as I’m sure you know is available freely from the city, to help us optimize the placement of our street teams, such that we can gather the most signatures, ideally from those who will attend the gala and contribute to our cause. The ball is in your court now—do you think this is something that would be feasible for your group? From there we can explore what kind of an engagement would make sense for all of us. Our goal is to use New York’s Metropolitan Transportation Authority (MTA) data to analyze which stations are the best to place WTWY’s street teams in. We’re breaking this problem into 3 parts - What resources does WTWY have for the street team? - What time and day are optimal for most subway station traffic? - What station generates the highest traffic? Resources and Assumptions WTWY is a non for profit. So it is very limited in resources. Here are the assumptions we made. - WTWY can place at most 2 people per station - WTWY can only occupy 2 hour time blocks - WTWY can have a street team at most 3 days per week Data Wrangling and Cleaning We accessed all MTA data from this website. Looking at the field description, we noticed that the stations keep track of turnstile counts rather than users. We got the users for each time period by grouping each station's turnstile device (a station can have more than 1 turnstile device, with each device set to a different turnstile counter) and calculating the difference of turnstile counts. This gave us the number of users per time period for each station turnstile device. Optimal Time and Day We loaded in MTA data from 5/25/2019 to 6/28/2019. As of now, we wanted to see a pattern with a small time frame before loading in more data. In addition, the data shows users in intervals of 4 hours per day. So we have users at midnight, 4 am, 8 am, 12 pm, 4 pm, and 8pm. 
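As a rough illustration of the wrangling step described above, the sketch below groups the raw rows by turnstile device and differences the cumulative ENTRIES counter to get riders per 4-hour interval. It is a minimal pandas example, assuming the column layout of the public MTA turnstile files (C/A, UNIT, SCP, STATION, DATE, TIME, ENTRIES) and an arbitrary filename; it is not the project's actual code.

# Minimal pandas sketch, assuming the public MTA turnstile file format.
import pandas as pd

df = pd.read_csv("turnstile_190628.txt")          # one weekly MTA file (assumed filename)
df.columns = df.columns.str.strip()               # the last column name carries trailing spaces

df["DATETIME"] = pd.to_datetime(df["DATE"] + " " + df["TIME"],
                                format="%m/%d/%Y %H:%M:%S")

# Each (C/A, UNIT, SCP, STATION) tuple is one turnstile device with its own counter,
# so take the difference of the cumulative ENTRIES within each device.
df = df.sort_values(["C/A", "UNIT", "SCP", "STATION", "DATETIME"])
df["USERS"] = df.groupby(["C/A", "UNIT", "SCP", "STATION"])["ENTRIES"].diff()

# Counters occasionally reset or run backwards; drop obviously bad diffs.
df = df[(df["USERS"] >= 0) & (df["USERS"] < 10000)]

# Riders per station per 4-hour reporting interval.
per_station = (df.groupby(["STATION", "DATETIME"])["USERS"]
                 .sum()
                 .reset_index())
print(per_station.head())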
For optimization, we focused on morning rush hours (8am-12pm) and afternoon rush hours (4pm-8pm). Those have the highest traffic out of other hour intervals. Below are the average number of users taking the metro during that time frame. There are two takeaways - Afternoon rush hour traffic exceeds morning traffic by a lot - Some days have lower traffic than others. Our guess is that those are weekends and there’s not many people using the subway then. We dug deeper to plot the average number of users taking the metro by day. As predicted, Saturday and Sunday have the lowest users. The users then shoot back up on Monday and continue for each weekday. Given that WTWY has limited resources, we excluded turnstile data from Saturday and Sunday. Below are the top 10 stations with the highest traffic during morning rush hour and afternoon rush hour (excluding Saturday and Sunday). While these averages are good, we also wanted to see how much the data varies. Do we have any strong outliers per station? Below are boxplots for the top 10 stations during morning and afternoon rush hours. Morning Rush Hour Afternoon Rush Hour So there isn’t much variabilitiy within our data, which is good. Although some data points have a lot of outliers. We also noticed that for morning and afternoon rush, the top 5 stations are the same (34 ST-PENN STA, GRD CNTRL-42 ST, 34 ST-HERALD SQ, TIMES SQ-42 ST, 23 ST). Those are the stations we should focus on. Below is a bar graph zoomed in on the 5 stations per weekday. If WTWY can get 4-10 volunteers for only 1 day, they should follow these recommendations - Assign 2 volunteers to each station in order of importance: GRD CNTRL-42 ST, 34 ST-PENN STA, 34 ST-HERALD SQ, TIMES SQ-42 ST, 23 ST - Take a 2 hour time slot between 4pm - 8pm - Choose either Tuesday, Wednesday, or Thursday Lead Machine Learning Engineer with experience in Technical Project Management and Data Science in NLP. Technical writer of TowardsDataScience, a popular Medium publication for data science and machine learning. I blog to aspiring data scientists and machine learning engineers on career advice and tutorials to get their feet wet in the field.
The changing epidemiology of COVID-19: a retrospective modeling study on two years of pandemic in Italy Background. The difficulty in identifying SARS-CoV-2 infections has been a major obstacle to control the COVID-19 pandemic, but also to quantify changes in the proportion of infections resulting in hospitalization, intensive care unit (ICU) admission or death. Methods. We developed a mathematical model of SARS-CoV-2 transmission and vaccination informed by epidemiological surveillance data to estimate the daily number of infections occurred in Italy between February 2020 and February 2022. Model outcomes are used to assess changes in the SARS-CoV-2 infection ascertainment ratio (IAR), infection hospitalization ratio (IHR), infection ICU ratio (IIR), and infection fatality ratio (IFR), in five different sub-periods associated with the dominance of the ancestral lineages, and Alpha, Delta, and Omicron BA.1 variants. Results. We estimate that, over the first two years of pandemic, the IAR ranged between 15 and 40% (range of 95%CI: 11-61%), with a peak value in the second half of 2020. The IHR, IIR and IFR consistently decreased throughout the pandemic with 22 to 44-fold reductions between the initial phase and the Omicron period. At the end of the study period, we estimate a IHR of 0.24% (95%CI: 0.17-0.36), a IIR of 0.015% (95%CI: 0.011-0.023) and a IFR of 0.05% (95%CI: 0.04-0.08). Conclusions. Since 2021, changes in the dominant SARS-CoV-2 variant, the rollout of vaccination, and the shift of infection to younger ages have reduced SARS-CoV-2 infection ascertainment. The same factors, combined with the improvement of patient management and care, contributed to a massive reduction in the severity and fatality of COVID-19. Joint work with Giorgio Guzzetta, Francesco Menegale, Chiara Sacco, Daniele Petrone, Alberto Mateo Urdiales, Martina Del Manso, Antonino Bella, Massimo Fabiani, Maria Fenicia Vescio, Flavia Riccardo, Piero Poletti, Mattia Manica, Agnese Zardini, Valeria d’Andrea, Filippo Trentini, Paola Stefanelli, Giovanni Rezza, Anna Teresa Palamara, Silvio Brusaferro, Marco Ajelli, Patrizio Pezzotti, and Stefano Merler.
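One plausible formalization of the four ratios reported above, with notation that is mine rather than the authors' and with all quantities referring to a given period p, is:

\[
\mathrm{IAR}_p = \frac{\text{ascertained cases}_p}{\text{infections}_p}, \qquad
\mathrm{IHR}_p = \frac{\text{hospitalizations}_p}{\text{infections}_p},
\]
\[
\mathrm{IIR}_p = \frac{\text{ICU admissions}_p}{\text{infections}_p}, \qquad
\mathrm{IFR}_p = \frac{\text{deaths}_p}{\text{infections}_p},
\]

where the denominator is the model-reconstructed number of SARS-CoV-2 infections occurring in period p.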
Chapter 456 What Happened Just Then?

An hour later, a grin crossed Lu Ze's face as he looked at the grassy plain tens of kilometers away. Found it! On that particular plain, 12 earth-colored rabbits could be seen. Among them, the strongest was a level three mortal evolution state range rabbit. Two of the rabbits were at level two of the mortal evolution state while the other nine were at level one of the mortal evolution state.

Lu Ze discovered that a small group of rabbits, as compared to a large pack, was typically stronger. In a relatively large group of rabbits, such as one with 12 members, a level three mortal evolution state wouldn't normally appear in their ranks. Still, this was beneficial for Lu Ze. At least, he could hunt level three mortal evolution state rabbits in peace now.

Thinking about this, Lu Ze's eyes slowly turned cold. A silver light flashed across and he disappeared from the spot. Thereafter, he appeared in front of that happily-grazing level three mortal evolution state rabbit. Purple-red lightning went to surround Lu Ze's entire body, burning the air and making it sizzle. Quickly, the lightning condensed and transformed into a lightning spear; a terrifying chi came along with it.

The range rabbit was rather alert while it grazed the land. The moment Lu Ze first appeared, the long ears on the back of its head stood up. The rabbit then turned to look at Lu Ze. The instant the lightning spear was formed, the rabbit's body flickered with an earth-colored light. Lu Ze, who had just condensed the lightning spear, suddenly felt the violent fluctuation under his feet. It was the earth spear of the range rabbit!

Once more, Lu Ze disappeared from his position as the light flashed in his eyes. Simultaneously, the lightning spear in front of him was launched toward the range rabbits. Sensing the threat of the incoming lightning spear, a thick earth barrier emerged to cover the entire pack of rabbits. Consequently, the lightning spear collided heavily against the barrier.

Rumble!!

The impact produced a deafening sound, and the purple-red sparks spread across all directions as incessant sizzling sounds filled the area. The sparks left a trail of burnt grass in their wake. At the same time, the violent spirit force ravaged the earth, sweeping past toward the distance. One rabbit that was at level two of the mortal evolution state whined as the residue of the lightning swept across the barrier, causing the latter to fluctuate. Without hesitation, the rabbit ran off into the distance. However, at this juncture, another lightning spear was launched across the air, instantly piercing through the runaway rabbit. Since the rest of the rabbits didn't possess God art, they were fried upon being hit by the lightning, quickly turning into charcoal and instantly dying!

At the center of the collision, the barrier erected by the level three mortal evolution state rabbit was shattered. Even though the lightning spear was greatly weakened, a piece of it still remained. The surviving spear struck heavily on the rabbit's body. The huge body of the rabbit was forcefully thrown away due to the impact. It plowed through dozens of kilometers of land before gradually stopping.

The body of the rabbit gave off a smell of burnt meat, which wafted across the air. Lu Ze couldn't help but swallow his saliva. This smelled really good! He really wanted to eat it, but he couldn't! However, after seeing the body of that rabbit turn to dust, Lu Ze smiled.
He managed to kill a level three mortal evolution state rabbit with God art! During his previous hunting experience, he had been confined to being chased around by those kinds of range rabbits.

Soon, Lu Ze realized something. The stronger the beasts were, the more vigilant they were. As soon as Lu Ze appeared, the rabbit reacted accordingly. It directly sent an earth spear toward Lu Ze, who, in turn, failed to form his lightning spear at its peak state. Lu Ze's ability to kill the rabbit could only be attributed to his much-improved power. However, it would be an entirely different outcome if the opponent was a level four mortal evolution state rabbit. It wouldn't work if he wanted to use space God art to get closer. It seemed that he probably couldn't ambush a level four mortal evolution state rabbit. Even so, Lu Ze still wanted to try and see whether he could fight a level four mortal evolution state boss.

While Lu Ze was having such thoughts, the corpse of the rabbit turned to ashes, and a group of orbs was revealed. Six red orbs and five purple orbs, as well as an earth God art orb. Lu Ze happily collected all his loot. Then, he flew off into the distance.

Two hours later, Lu Ze looked at a group of 23 rabbits passing by in the distance. His eyes exhibited a solemn expression. In a span of two hours, he had come across three more waves of rabbits and possums, but the strongest among those groups had only reached level three of the mortal evolution state. Although this loot could be considered good for Lu Ze, it was not sufficient to satisfy him. But this time… The leading rabbit of this pack was at level four of the mortal evolution state. It was over 20 meters tall. This rabbit possessed a powerful chi. It was a boss for sure!

Lu Ze grinned and put up his 1st shield. He reappeared above the head of the level four mortal evolution state boss that was more than 100 kilometers away. A lightning spear was formed. The rabbit suddenly stopped. Suddenly, Lu Ze felt a powerful ripple from the ground. Four of the twenty-three rabbits flashed with an earth-colored light. Other than the one rabbit that was at level four of the mortal evolution state, there was also one at level three and two at level two. The earth spears instantly flew out and shot toward Lu Ze. The powerful forces twisted the air.

In an instant, the lightning spear proceeded to face the strongest earth spear. At the same time, a breeze circulated around Lu Ze. Instantly, he disappeared from his spot. When both of the spears clashed with each other, a violent vibration was created. At this moment, Lu Ze appeared before the level four mortal evolution state rabbit. Right then, he let out a punch. Star crippling punch!

Rumble!

The black and gold fist force hit the earth barrier heavily. Even with a perfect level of mastery, the star crippling fist could only make a ripple on the barrier. Lu Ze frowned slightly as he planned to attack again. Suddenly, he felt another ripple from the ground. Once more, Lu Ze managed to dodge it. Multiple spears plunged toward his original position.

Lu Ze's battle against the level four mortal evolution state rabbit made the surrounding few thousand kilometers tremble. The center of their battle had turned into a hundred-kilometer-wide ditch. Only the rabbits with God art managed to survive from that group. The rest of them had all died. After another collision, Lu Ze frowned.
His full-powered lightning spear crushed the earth spear, but what remained of the lightning spear could only create a fluctuation on the barrier. In the end, he couldn't kill it. Its defenses were too strong. Lu Ze suspected that even if the earth spear didn't collide with his lightning spear, its power would only be enough to break the barrier. Its defense was near level eight of the mortal evolution state.

Lu Ze considered killing the level three rabbit first, but each time, his lightning spear would be weakened to the degree that it couldn't sufficiently kill level three rabbits. This was quite embarrassing. If this continued further, he felt that he would be fatigued to the point of death. Lu Ze's mouth twitched as he flashed with a green light, immediately flying off into the distance.

The two furious rabbits wanted to keep chasing, but they could only watch as Lu Ze went farther and farther away. They might be rabbits, but earth God art could only contribute a little boost to speed. It was impossible for them to catch Lu Ze. Lu Ze heard the furious roars behind him. He then raised the corners of his mouth slightly. Catch me if you can!

However, a light swept up Lu Ze's vision, and a painful sensation overtook his body. The next thing he knew, he was already back in his room. What happened just now?
I would really appreciate some help in troubleshooting/setting up my PX4 Flow Sensor Kit. I am following the instructions located at the following link: https://docs.px4.io/master/en/sensor/px4flow.html I am using the latest QGroundControl (Linux) and a custom drone with Pixhawk 4 and PX4 Flow Sensor Hardware, both running the latest firmware. (Yes, I have uninstalled the modem manager to let QGC work as intended on linux.) I have updated the firmware on the PX4Flow Sensor successfully but have trouble completing any of the following steps, namely - 1) Changing the parameters and 2) focusing the lens. Issues in Detail - I am able to connect to the PX4 flow sensor and see the parameters list. Nevertheless, it seems the most important parameters are missing from the list. Namely, SENS_EN_PX4FLOW (to enable the sensor) and SENS_FLOW_ROT (To set the rotation of the sensor relative to the drone.) Please look at screenshot below for more info. 2)To focus the lens, you need to be able to see the video feed of the camera. It seems there is a tab titled “PX4FLOW” in QGroundControl that should display the camera feed, but in my case nothing is displayed. Please look at the screenshot below. 1+2) I have followed the same steps on Linux and Mac with the same outcomes for both platforms. I would really appreciate any help. Thank you in advance. If this is using the latest Stable 4.0 release then can you move this information to a GitHub Issue so it doesn’t get lost and then I’ll take a look. If not move to the latest stable and try it again. Thank you for the prompt reply. I do not know exactly what release it is but it does seem to be the latest Stable version. When installing the PX4 Flow Sensor firmware on QGroundControl, there are only two decisions to make. - PX4 Pro or Ardupilot (I Selected PX4 Pro) - Standard Version (Stable) or Custom Firmware File (I Selected Standard Version) Where exactly should I post this as a Github issue? Additionally, if this is a problem with the latest firmware, how can I access/download an older firmware as a temporary solution? Thank you very much for the help. Nevermind. The fix for this will be available in a new Stable and Daily builds by tomorrow. Unfortunately, I just tried updating the PX4 Flow drivers via QGroundControl and I am still having the exact same issues, no change/improvements. (I tried on Mac and Linux as before.) Would you kindly provide a link to the updated driver? or a link to a previous driver that was functional? I would really appreciate any help you could offer to get this sensor functional. I cannot continue my product development without this sensor, or something similar. I look forward to hearing from you.Thank you very much for all of your help. The problem has nothing to do with drivers. It is a bug in QGC. You need the latest version of QGC. Thank you for the quick reply and incredible support. After updating QGC I can now see the footage of the PX4 Flow Sensor. Issue 2 has been solved but it seems Issue 1 is still a problem. I worry that the sensor will still not work in my application. It seems that the parameter SENS_FLOW_ROT is still unavailable to edit in QGroundControl. As shown in the screenshot below, it seems that parameter is important for setup. In our design, the x-axis of the sensor is facing towards the front of the drone, not the default y-axis, so it would seem that I need to change a parameter at some point in setup. Did this parameter change name? or perhaps it is not important anymore? I would appreciate any help. 
Thank you very much for the great support so far. SENS_FLOW_ROT is a PX4 Pro firmware parameter. It is not a parameter which is on the PX4 Flow itself.
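Since the parameter lives on the flight controller rather than on the PX4Flow board, it can also be set over MAVLink instead of through the QGC parameter editor. The following is only a hypothetical pymavlink sketch: the connection string and the rotation value are assumptions you would need to adapt (check the PX4 parameter reference for the enum value that matches your mounting).

# Hypothetical sketch: set the firmware-side SENS_FLOW_ROT parameter over MAVLink.
from pymavlink import mavutil

master = mavutil.mavlink_connection("udpin:0.0.0.0:14550")  # assumed link; could be e.g. "/dev/ttyACM0"
master.wait_heartbeat()

ROTATION_VALUE = 2  # placeholder: use the enum value for your sensor's yaw offset

master.mav.param_set_send(
    master.target_system,
    master.target_component,
    b"SENS_FLOW_ROT",
    ROTATION_VALUE,
    mavutil.mavlink.MAV_PARAM_TYPE_INT32,
)

# The autopilot echoes the new value back in a PARAM_VALUE message.
ack = master.recv_match(type="PARAM_VALUE", blocking=True, timeout=5)
if ack:
    print(ack.param_id, ack.param_value)
else:
    print("no PARAM_VALUE ack received")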
Online courses are popular nowadays because they let you learn at your own pace, outside the confines of a classroom. You can also enroll in courses for a limited time, so you don't have to endure the long-term commitment of a traditional college. Online courses are also cheaper, so students can save without worrying about the price. Imagine, for a moment, that you are a professional who needs to learn Appium. The problem is that you're not interested in just being a beginner; you want to become an Appium expert. Good news: you can do it today! You might find information in tutorials and YouTube videos, but let's face it, you won't learn things in a structured way, and you can easily get lost along the way. What do I recommend? Take a course that is well structured and that many people have previously reviewed and validated.
- 1 The best Appium course of 2021
- 2 The best complete Appium course of 2021
- 3 The best quick Appium course of 2021
- 4 One single course to master mobile automation testing for iOS and Android in 2021
- 5 The best practical Appium course of 2021
- 6 One single course to master mobile automation testing for iOS and Android in 2021
- 7 The best Appium course for beginners in 2021

The best Appium course of 2021
** Course last updated August 5th – Appium parallel execution on cloud servers ***
*** Top-rated, popular Appium course with lifetime Q&A support on Udemy and the highest student enrollment.
"Learn everything you need to know about mobile automation (Android + iOS), even if you have never programmed before. 50,000+ students already enrolled and 5-star reviews all the way. On course completion you will have mastered mobile automation testing on both Android and iOS and can apply it successfully in your workplace, or you will surely land a high-paying job."
Judge the quality of the course by watching the preview lectures and student reviews.
Among the main topics of the course, you will learn:
- ***Framework Learning Plan*** (Must Watch)
- In Depth understanding of Android Native Apps automation with Appium
- Appium Hybrid Framework design part -2 Building Utilities
- Reporting and CI/CD Integration for the Appium Framework – Part 3
- Understanding Version Control System GIT (Bonus Section)
- Hybrid App Automation with Appium to switch from Apps to Web browser
- Bonus!! Student Special -Must WATCH
- Automate IOS Advance Gestures functionalities with Appium
- Framework Part -3- Jenkins- Continuous Integration Tool
- Appium Framework- Part 1- Learn TestNG Basics

The best complete Appium course of 2021
Dec 2020: The course content is updated for the year 2021! Why this course?
Among the main topics of the course, you will learn:
- Native Apps Automation
- Important: When you need help
- TDD Framework Design [Appium + TestNG]
- Java Essentials
- Appium Setup on Mac – iOS
- Appium Driver Commands
- Complete CI/CD Implementation Step By Step
- Appium Setup On Mac – Android
- First Appium Project
- About the Course

The best quick Appium course of 2021

One single course to master mobile automation testing for iOS and Android in 2021
*********** FIRST COURSE TO COVER THE MOST TOPICS ON ANDROID AND iOS, WITH MOBILE AUTOMATION FRAMEWORKS IN DEPTH *******
Update: new lectures added based on the latest Appium 1.20 (2021) for iOS 14 with XCUITest and the Android 10.0 update.
Among the main topics of the course, you will learn:
- TESTNG FRAMEWORK – Parameterization
- BONUS LECTURE
- Locator Strategies and Simulating Android Hardware Keys
- NEW – Online Live Batch on Appium from Oct 2016
- IOS LECTURES STARTS HERE – Basic Installation – MAC OSX
- UPDATING TO LATEST CUCUMBER 6 VERSION
- TestNG Basics
- Appium Android – Testing Android Native Apps on Windows
- Basic Installation on MAC OSX – Latest Lectures from Appium 1.8.2 version
- Appium 1.17 – Touch Actions – Utility for Swipe / Scroll – Handling Gestures

The best practical Appium course of 2021

One single course to master mobile automation testing for iOS and Android in 2021
*********** FIRST COURSE TO COVER THE MOST TOPICS ON ANDROID AND iOS, WITH MOBILE AUTOMATION FRAMEWORKS IN DEPTH *******
Update: new lectures added based on the latest Appium 1.20 (2021) for iOS 14 with XCUITest and the Android 10.0 update.
Among the main topics of the course, you will learn:
- Environment installation
- Bonus lectures
- Introduction to Cucumber
- Jenkins Server
- Introduction to Appium

The best Appium course for beginners in 2021

All the topics of Appium (Android, iOS and hybrid apps), Selenium, BDD, Jenkins and basic Python are covered in this course. All of the code files for everything explained in the videos have been uploaded to Udemy, so you can download them and try everything yourself. Below are the Appium (Android and iOS) topics covered in the course. Among the main topics of the course, you will learn:
- Course Content – Must Watch
- Waits in Appium
- Install Android Demo APK
- Actions Chains Class
- Appium FrameWork Part – 2 (Page Object Model)
- Python BDD (Behavior Driven Development)
- Selenium Framework Part – 1
- Waits in Selenium
- Python Logging
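For readers trying to judge whether the Python-flavoured syllabi above match their needs, here is a minimal, hypothetical example of what an Appium test session looks like in Python (written in the older Appium-Python-Client 1.x style). The server URL, capabilities, and locator are placeholders, not material from any particular course.

# Minimal hypothetical Appium session; adjust caps and locator for your own app/device.
from appium import webdriver

desired_caps = {
    "platformName": "Android",
    "deviceName": "emulator-5554",
    "automationName": "UiAutomator2",
    "app": "/path/to/YourApp.apk",
}

driver = webdriver.Remote("http://127.0.0.1:4723/wd/hub", desired_caps)
try:
    # Locator strategies (id, accessibility id, XPath, ...) are one of the core topics above.
    element = driver.find_element_by_accessibility_id("login_button")
    element.click()
finally:
    driver.quit()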
Why I prefer the script exporter for exposing script metrics to Prometheus

Suppose that you have some scripts that you use to extract and generate Prometheus metrics for targets, and these scripts run on your Prometheus server. These metrics might be detailed SNTP metrics of (remote) NTP servers, IMAP and POP3 login performance metrics, and so on. You have at least three methods to expose these script metrics to Prometheus; you can run them from cron and publish through either node_exporter's textfile collector or Pushgateway, or you can use the third-party script_exporter to run your scripts in response to Prometheus scrape requests (and return their metrics). Having used all three methods to generate metrics, I've come to usually prefer using the script exporter except in one special case.

Conceptually, in all three methods you're getting metrics from some targets. In the cron-based methods, what targets you're getting what metrics from (and how frequently) is embedded in and controlled by scripts, cron.d files, and so on, not in your Prometheus configuration the way your other targets are. In the script exporter method, all of that knowledge of targets and timing is in your Prometheus configuration, just like your other targets. And just like other targets, you can configure additional labels on some of your script exporter scrapes, or have different timings, and so on, and it's all controlled in one place. If some targets need different checking options, you can set that in your Prometheus configuration as well.

You can do all of this with cron-based scripts, but you start littering your scripts and cron.d files and so on with special cases. If you push it far enough, you're basically building your own additional set of target configurations, per-target options, and so on. Prometheus already has all of that ready for you to use (and it's not that difficult to make it general with the usual tricks, or the label-based approach).

There are two additional benefits from directly scraping metrics. First, the metrics are always current instead of delayed somewhat by however long Prometheus takes to scrape Pushgateway or the host agent. Related to this, you get automatic handling of staleness if something goes wrong and scrapes start failing. Second, you have a directly exposed metric for whether the scrape worked or whether it failed for some reason, in the form of the relevant script_success metrics. With indirect scraping you have to construct additional things to generate the equivalents.

The one situation where this doesn't work well is when you want a relatively slow metric generation interval. Because you're scraping directly, you have the usual Prometheus limitation where it considers any metric more than five minutes old to be stale. If you want to do your checks and generate your metrics only once every four or five minutes or slower, you're basically stuck publishing them indirectly so that they won't regularly disappear as stale, and this means one of the cron-based methods.
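To make the setup concrete, here is a minimal sketch of the kind of check script any of the three publication methods would run. It is my own illustration, not from the article: the metric names and the target host are made up, and the only real requirement is the output format (Prometheus text exposition format on stdout). With the script exporter, the per-target scheduling and any extra labels then live in the Prometheus scrape configuration rather than in a cron.d file.

#!/usr/bin/env python3
# Minimal check script: measure how long an IMAP banner takes to arrive and
# print the result in Prometheus text exposition format on stdout.
# Metric names and the target host are illustrative placeholders.
import socket
import sys
import time

target = sys.argv[1] if len(sys.argv) > 1 else "imap.example.com"

start = time.monotonic()
try:
    with socket.create_connection((target, 143), timeout=10) as conn:
        conn.recv(1024)          # wait for the server greeting
    elapsed = time.monotonic() - start
    ok = 1
except OSError:
    elapsed = float("nan")
    ok = 0

# One success/failure metric and one duration metric, with the target as a label.
print(f'imap_banner_success{{target="{target}"}} {ok}')
print(f'imap_banner_duration_seconds{{target="{target}"}} {elapsed}')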
Splinter Cell Conviction: updating launcher

A scheduled task is added to Windows Task Scheduler in order to launch the program at various scheduled times (the schedule varies depending on the version). The software is designed to connect to the Internet and adds a Windows Firewall exception in order to do so without being interfered with. The software installer includes 50 files and is usually about 74.07 MB (77,663,633 bytes).

So I bought the game 3 months ago, and then started my case mod / watercooling build. I reopen it and it says "updating launcher" again, with the same ETA. If you are unable to do so, then save some money for it or obtain it another way at your own risk.

Blacklist introduces a new gameplay mechanic called Killing in Motion, allowing the player to highlight targets and take them out in quick succession while on the run. The pad would work if you forced your computer to install beta drivers from 2007. As of yet, there has been no official fix: a patch launched today seemed to work somewhat for me, yet many are still reporting the game incompatible with the latest drivers.

Crash fixes
- AAR screen
- Loading into co-op match
- Fix alt-tab crashes
- Fix alt-tab on Windows 8
- Fix alt-tab issues on machines with multiple GPUs
- Fix window minimization issues
- Fix texture crash on incorrect texture info
- Fix crash on thread storage release
- Fix active sound crash
- Fixed crash when resolution is less than 2x2
- Fixed device lost crash
- Fix crash in Paladin
- Fix asset load crash
- Fix crash when running and defending the teammate who had the intel
- Fix deadlock between UI and loading
- Fix loading hang
- Fix crash opening doorway to hallway leading to tunnel
- Fix host crash in Hadron Collider
- Fix crash in Voron Station
- Fix hang in AAR screen in multiplayer
- Fix getting stuck in Shadownet interface
- Fix crash after falling under elevator
- Fix crash sliding down ladder
- Fix crash on quit to desktop
- Fix crash with multithreaded physics
- Fix crash on checkpoint reload
- Fix crash in silo

Controls
- Fix hold not activating vision modes
- Don't allow WASD/mouse when controller is on
- Mercenaries no longer prioritize capturing over killing when using mouse & keyboard
- Allow back button when input is blocked for MP scoreboard
- Fix reset to default for gamepad
- Allow controller connection on SMI screen
- Mouse position no longer reset every tick
- Scrollbars can now be dragged by mouse
- Set correct scrollbar positions in friends party list and key bindings
- Fix for not being able to melee in certain locations
- Fix intel grab priority
- Fix for losing control of input when disconnecting controller during loading
- Fix for not being able to switch weapons during countdown
- Fix Windows key usage

UI
- Hide mouse cursor between transitions
- Select button removed when mouse and keyboard are default controllers
- Hook up mouse clicks correctly in co-op lobby
- Mouse movement no longer triggers skip prompt
- Quit to desktop option added for ADV/COOP modes
- Fixed flicker on selection wheel when using mouse/keyboard
- Prevent overlap in chat text
- Fix gadget UI
- Show clan tags in text chat
- Fix mouse interaction with gamma slider
- Fix missing button prompt
- Fix rogue agent messaging
- Fix back button alignment
- Fix loadout display visibility on controller disconnection
- Fix missing skip prompts
- Fix tutorial buttons
- Fix text in progress menu
- Fix disappearing UI when resetting to defaults
- Correctly localise video settings
- Update mode option on mission preview when cycling through modes
- Fix loadout menu
- Fix geotips getting clamped
- Fix pulsing icons

Graphics
- Fix erroneous camera movement when moving mouse and exiting SMI
- Fix slowdown when SSAO and MSAA are on
- Fix object flickering
- Fix white outlines on particles in MSAA mode
- Fix vision mode with occlusion system
- Fix MSAA causing some objects to have black squares in DX11
- Fix field AO flickering in DX11
- Fix bullet casing flickering in DX11
- Fix SSAO glow on characters in DX11
- Fix black textures in DX9
- Fix occlusion flickers
- Fix laser dot sight
- Fix refresh rate issues
- Fix character rendering on customization screen in MSAA modes
- Fix streamable texture issues
- Fix incorrect viewport sizing on first launch

Optimizations
- Low-spec CPU optimization
- Multi-GPU optimizations
- Fix framerate drop issues

Configuration
- Detect outdated drivers
- Correctly set resolution based on system validation

Misc
- Fix music sequencing in ADV modes
- Fix erroneous audio restart when alt-tabbing during video playback
- Fix achievement unlocks
- Fix assist scoring
- Fix ladder playlists
- Network replication fixes

Well, this is just the launch patch that Steam should download automatically, but something is wrong with the Uplay autopatcher. I had tried playing with the keyboard and mouse, but the sloppy mouse smoothing removed all accuracy. First impressions were good; my new PC could handle the game on nearly full graphical settings. This game should be smooth as butter on a new computer, but it seldom feels that way.
What is the fastest way to upcast std::shared_ptr<std::unique_ptr<T>>?

Suppose class D is derived from class B. What is the best way to convert a std::shared_ptr<std::unique_ptr<D>> to std::shared_ptr<std::unique_ptr<B>>? The solution should not increment / decrement the std::shared_ptr counter and should not make a copy of the std::unique_ptr. The naive approach does not work:

#include <cstdio>
#include <memory>

struct B {
    virtual void msg() { printf("BASE\n"); }
    virtual ~B() = default;
};

struct D : B {
    D() = default;
    void msg() override { printf("DERIVED\n"); }
};

int main() {
    std::shared_ptr<std::unique_ptr<D>> derivedPtr =
        std::make_shared<std::unique_ptr<D>>(std::make_unique<D>());
    std::shared_ptr<std::unique_ptr<B>> basePtr =
        static_cast<std::shared_ptr<std::unique_ptr<B>>>(std::move(derivedPtr));
    basePtr->get()->msg();
}

The compiler says that this static_cast is impossible.

Why do you have a shared pointer to a unique pointer in the first place? Do you know you can convert a unique pointer to a shared one?

Same as a D** cannot be converted to a B** ... see "Conversion of pointer-to-pointer between derived and base classes?"

"The solution should not increment / decrement the std::shared_ptr counter..." Why not? "...and should not make a copy of the std::unique_ptr." Why not?

I second both questions. Also note that a shared pointer managing a unique pointer is effectively a shared pointer managing a move-only object. What was your intention in writing something like this in the first place?

This is impossible (in any form) because std::unique_ptr<B> and std::unique_ptr<D> are not in any way related by inheritance. Conceptually this is wrong, and this is also the first time I have seen a shared pointer holding a unique pointer. That itself seems like a big red flag on the whole approach and makes it sound like an XY problem. Why does the shared pointer hold the unique pointer instead of holding the object that the unique pointer holds directly?

This answer is OK, but it doesn't explain anything. Look at the comment above if you want to understand the unfixable conceptual problem.

I agree with the others that the premise is unlikely, but granting the premise, let us assume you have:

std::shared_ptr<std::unique_ptr<D>> derivedPtr = /* ... */;

The reason this does not make conceptual sense is that the shared_ptr manages the lifetime of the unique_ptr, which manages the lifetime of D. The question becomes: do you need two null states? The shared_ptr can be null, or it can be non-null and manage a null unique_ptr. If you do need two null states, I would consider replacing it with std::shared_ptr<std::optional<D>> to be clearer. If you do not, and it is an artifact of the construction or of how the object is handed to you, it is straightforward to get a more useful type:

std::shared_ptr<D> betterDerivedPtr =
    std::shared_ptr<D>(derivedPtr, *derivedPtr ? derivedPtr->get() : nullptr);

This shares ownership with the original (D is not copied, it's the same D) but removes the likely useless double-null. This is the aliasing constructor. Then, you can use std::static_pointer_cast.

Despite having a code structure that doesn't make much sense, I'm going to attempt an answer that should at least compile. Smart pointers are used to control the lifetime of an object. If you just need a temporary copy of the pointer that won't affect the lifetime, dumb pointers are much more flexible.

int main() {
    std::shared_ptr<std::unique_ptr<D>> derivedPtr =
        std::make_shared<std::unique_ptr<D>>(std::make_unique<D>());
    B* basePtr = static_cast<B*>(derivedPtr->get());
    basePtr->msg();
}

Not sure, but I think Andreev was looking for

std::shared_ptr<std::unique_ptr<B>> basePtr =
    std::make_shared<std::unique_ptr<B>>(std::unique_ptr<B>((*derivedPtr).release()));

assuming the code is supposed to transfer ownership, which is what I intuited from the not-working example.
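Pulling the earlier answer's suggestion together, here is a complete sketch of the aliasing-constructor route; it is my own illustration, not code from the thread. Note that it does not satisfy the original "no reference count changes" constraint: the aliased and cast pointers share the original control block (so the use count goes up), but the unique_ptr is never copied and the D object is never moved or cloned.

#include <cstdio>
#include <memory>

struct B {
    virtual void msg() { printf("BASE\n"); }
    virtual ~B() = default;
};

struct D : B {
    void msg() override { printf("DERIVED\n"); }
};

int main() {
    auto derivedPtr =
        std::make_shared<std::unique_ptr<D>>(std::make_unique<D>());

    // Aliasing constructor: share the control block of derivedPtr, but point
    // at the D object owned by the inner unique_ptr (or nullptr if it's empty).
    std::shared_ptr<D> flat(derivedPtr,
                            *derivedPtr ? derivedPtr->get() : nullptr);

    // Now an ordinary upcast is possible.
    std::shared_ptr<B> base = std::static_pointer_cast<B>(flat);

    base->msg();  // prints DERIVED; the unique_ptr still owns the object
}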
Hello, my name is Yaujj75 and I am the admin of this wiki. Right now I am trying to improve the wiki and clean up the mess that was created in the first place. This may take a while, but I will prevail, as I have managed to single-handedly fix three wikis. Also, you can put ideas in my To-Do List; just leave your username and explain your reasons on my message wall. (If I am inactive and you need help, contact me.)

History with Max Payne and this Wiki

My interest in Max Payne began when my favourite gaming YouTuber, Pharaoh2091, mentioned the Max Payne game series in his stream of the game Virginia on 12th December 2017. I searched for the game series and was intrigued by it; I had heard about Max Payne in the past and had seen a Stickmen short about it. From there, I downloaded the games from Ocean of Games and played them myself. I gave them a try and they were fun to play, with the gameplay and the Max Payne quotes. I also checked the wiki, but it was a mess. However, I didn't edit it, because I wasn't editing at that time and was just reading. As I started playing Max Payne 3 and became an admin on three wikis, I decided I would do another turnaround on this wiki and make reforms. Now I am an admin of this wiki, making reforms and fixes around the wiki, although not as extensively as on my main wiki, Brothers in Arms.

- Remove as many redlinks in the wiki as possible
- Watch the terrible Max Payne movie to collect information
- Fix the complicated problems in the wiki itself

My Favorite Pages

- Alpha Protocol Wiki (Joined: 25th November 2018, Promoted: 4th December 2018)
- Brothers in Arms Wiki (Joined: 11th September 2018, Promoted: 28th March 2019)
- The Lyosacks Wiki (Joined: 24th April 2016, Promoted: 30th July 2019)
- Max Payne Wiki (Joined: 15th July 2019, Promoted: 2nd September 2019)
- Monster House Wiki (Joined: 4th July 2020, Promoted: 26th September 2020)
- Shaun the Sheep Wiki (Joined: 3rd January 2021, Promoted: 29th January 2021)
- Supa Strikas Wiki (Joined: 12th April 2019, Promoted: 4th May 2021)

Point of Interest Wikis

- Regular Show Wiki (Joined: 4th July 2015)
- Sniper Elite Wiki (Joined: 9th May 2016)
- Mount & Blade Wiki (Joined: 30th December 2017)
- Sift Heads Wiki (Joined: 14th April 2018)
- Saw Wiki (Joined: 2nd January 2019)
- Detective Grimoire Wiki (Joined: 24th April 2020)
- Hitman Wiki (Joined: 6th May 2020)
- W.I.T.C.H Wiki (Joined: 4th August 2020)
- Attack on Titan Wiki (Joined: 15th October 2020)
- Villains Wiki (Joined: 12th December 2020)
- Fanganronpa Wiki (Joined: 31st March 2021)
- Guardians of Ga'Hoole Wiki (Joined: 13th July 2021)
- Wolves of the Beyond Wiki (Joined: 7th September 2021)
- The Raid Wiki (Joined: 10th December 2019)
- Sniper Assassin Wiki (Joined: 6th April 2020)
- Inquisitor Wiki (Joined: 28th July 2020)
- Battlefleet Gothic Armada Wiki (Joined: 21st August 2020)
- Charlotte's Web Wiki (Joined: 30th May 2021)
I have just installed Chicken and it is always timing out. I am on dial-up, running Panther, connecting to a remote computer on DSL that is also running Panther. I don't know or understand why Chicken doesn't work for me. I am on OS X 10.3, and the remote machine is using OSXvnc and is running OS X 10.3.3. Both machines are using DSL. I just keep timing out. Any thoughts would be much appreciated.

Make sure that you can ping the server machine. It sounds like you've got a firewall in between you and the server.

I guess I'm the server machine, since I'm the one running Chicken 2.0.2. I don't know anything about double clicking on my Mac iBook. Also, when I start my machine it starts through Mac OS X 10.2.8, then it opens OS 9 and everything disappears; then my desktop shows up and I open IE 5.2 and go from there. Is there an instruction manual for this program "Chicken"? Because nothing was said about double clicking on OS X 10.2.8. I didn't know you had to install Chicken on their computer. All my computer says is "couldn't connect, timed out". I just wish someone would go over the installation with me step by step. ((((JASON)))...

Morticia, I'm pretty confused by your post. Chicken of the VNC is a VNC _client_ for Mac OS X. Chicken does not work with Mac OS 9. You use Chicken to connect to a VNC _server_ that can be running on any operating system. I'm not sure what you're referring to when you mention double-clicking. Here are the nuts-and-bolts instructions for connecting.

- Verify that the machine you wish to control is running a VNC server.
- Determine that machine's hostname (like google.com) OR IP address (four numbers separated by periods). You can find this information by asking whoever maintains the server machine.
- You must be able to communicate with the server machine in order for Chicken to work. This means that if the server machine is on a remote network and is behind a router, you probably cannot communicate with it. If it is firewalled, you must open port 5900 on the firewall.
- Determine the server machine's VNC display number (usually zero) and password (set by the server machine's administrator).
- Enter this information into Chicken of the VNC.

The most common reason for a connection to time out is that the server machine is unreachable - you cannot communicate with it. Resolving this is something you'd need to discuss with whoever maintains the server machine, or with your ISP.

The only time OS 9 appears is when I try to open something that requires an OS 9 program. After I'm done, it automatically goes back to Mac OS X. I read something here that said something about double clicking on the Chicken program, fast click vs slow click; it made a difference, they said. I don't know if they are running a VNC server; they are in Tennessee. They are on Prodigy.net, which goes directly to Yahoo; I think Yahoo bought out Prodigy. They have accelerated dial-up through Yahoo, but their e-mail address is at Prodigy. When you send a message to them it doesn't go to Prodigy, it goes straight to Yahoo. What is a VNC server? They have a PC that runs Windows XP. They take the machine to a business if there are any problems to maintain it. Aren't I the server machine? I'm trying to monitor them. They are not running on a remote network, I don't think; what is a remote network and router? I thought I was the server machine, and I put my information in the login window. They don't have an administrator. I tried to discuss this with my ISP when I thought I was the server machine, lol.
What would I put in the host area? Prodigy, which is in his e-mail address but goes straight to Yahoo? I have an SBC address which also goes straight to Yahoo. I guess I'll talk to my ISP provider. I know their IP address from e-mails from them. I thought I could remotely monitor them from my Mac with this program; I didn't know they had to have a VNC server. Instructions didn't come with this program, and I'm not real computer literate. When I get off this chemo, I plan on taking some courses though. Thanks for any input. I don't even know what Value means when a program asks for it, or Zone. I guess I should get a computer dictionary?

P.S. If you have any more input, go ahead and post it, or e-mail me.
From Model to Code: Event Modeling & Axon Framework

How do I design an application? What's the process like? Where do I even begin?! Well, the 'old me' would design an application "on the fly": create a repository on GitHub, open it in IntelliJ, and BAM! A few hours later you've got an application with some backend functionality, MongoDB, and a UI/UX designed with the great helping hands of Bootstrap and SemanticUI. Success? Sure… maybe for a simple application. But what about designing a large-scale application, or a modular application with individual parts that can scale and evolve easily? How should I start something like that?

When I first learned about Axon Framework, the concepts of DDD and Event Sourcing were very new to me. I attended several training sessions and read tirelessly about structural designs, messaging systems, microservices, and more. I read Eric Evans' blue book, Vijay Nair's Practical Domain-Driven Design in Enterprise Java, and many articles and Q&As on StackOverflow and the AxonIQ Google group (now discuss.axoniq.io). Unfortunately, the amount of information (and at times, the lack thereof) felt overwhelming and sometimes overly technical.

The help came from my colleague, Ivan Dugalic, who explained the concept of Event Modeling to me. At that time, I had heard about Alberto Brandolini's Event Storming and Adam Dymitruk's Event Modeling, which was derived from Event Storming, but I had not yet used either. I am a visual learner and have to do things myself to grasp how they work. Ivan showed me the Hotel Demo application, inspired by Adam's hotel application model shown in his blog. So, I decided to create my own small Music Lesson Scheduling application using Event Modeling.

As I mentioned before, in the past I had created applications on the fly, without much of an upfront design in mind. I would add components and classes without thinking about how these components would communicate with each other later on, or whether they were loosely coupled and cohesive. As a result, I did not thoroughly examine whether these components would satisfy the overall requirements effectively. I also was not really concerned about whether I could easily add more features in a day or a year from now, or whether the system could evolve over time. But as I am generally an organized person and like to plan things, it made perfect sense to plan carefully and in advance when creating an application. Additionally, as a parent, I am all about simplifying life, so Event Modeling seemed like the ideal choice.

A New Tool in the Toolbox

"Event Modeling uses 3 moving pieces and 4 patterns based on 2 ideas." Of course, based on your application, the moving parts can be more than 3 (as you will see below).

Commands
- In this section, the user is given the ability to affect/change the system. (blue sticky notes)

Events
- What events were stored in the system as we move forward in time? Notice that events are always in the past tense. (orange sticky notes)

Views (Read Models)
- The information needs to be available to the user (dates for a lesson are available on the calendar for a student to book).
- They can also be retrieved at a later date.

Wireframes
- This is the visual part of the story-telling, or the visual of the web page. This part goes at the top of the model.
- The swim lanes show the different people who are interacting with our system.
- At this point, we have enough information to work on the UI/UX part.
Aggregates (a tactical DDD pattern)
- Aggregates are another moving part of my design, but I will focus on them in more detail in the next blog.

The four patterns:

1. State Change - Commands to Events
- Given events (previous state)
- When a command (new intent)
- Then a new event is published (new state)

2. State View
- Informing the users about the state of the system happens here.
- Given event(s): A lesson time is available to be booked, except for the week of spring break when the school is closed.
- Then view(s): The calendar should show all the dates except for March 12th-19th.

Integration: Systems can receive and send information to other systems. These integrations do not have visible aspects and need the higher-level patterns, which are Translation and Automation.

3. Translation
- It is helpful to translate the information (Event) from another system into a format (Command) that is more familiar in our system. This integration component is sometimes implemented as a Saga, a regular event handler, a process manager, etc. Essentially, it is a simple translator that acts as an Anti-Corruption Layer.

4. Automation
- Queries are also part of the API (not only events and commands). For more complex integrations (for example, with some 3rd-party payment provider), one could use queries/projections as a starting point of this integration - for example, the idea of a to-do list/view.
- This integration pattern is different from Translation, and it can be considered for integration with systems that do not provide a messaging API (commands, events, queries). These systems provide REST endpoints, and you could query your own projection (for example, at some time frequency - batching/polling) and send an HTTP request to that other service/payment provider (the to-do list).

Exploration exercise in 7 steps - The Blueprint

Adam explains this process as a 'Workshop Format' in 7 simple steps:

1. Brainstorming
- Use orange Post-its
- Use the past tense
- Events (the first moving part) are described as something that happened in the past
- They are immutable
- Only state-changing events need to be specified

2. The Plot
- The storyline (Events)
- The concept of TIME is introduced in this step, and the events are carefully planned based on the timeline.

3. The Story Board
- The wireframe (the second moving part) is shown from the user's perspective on the system, representing the source and destination of the information.
- UI: Wireframes are usually put on top of the blueprint

4. Identify Input (Commands)
- A command (the third moving part) is the intent to change the state of a system.
- The transactions are both on the business and technical sides.

5. Identify Output (Views or Read-Models - the fourth moving part)
- Access to information or data is key
- We want to know if a payment went through in a certain pay period
- As stated above, views are passive, and they cannot change an event after it has been stored in the system

6. Organizing events into swimlanes
- Allows a system/app to exist as a set of autonomous parts owned and managed by different teams.
- In my case, the swimlanes are there to group the events by concepts/aggregates. So basically, this BIG stream of all events is divided into small event streams, each belonging to a specific aggregate.

7. Elaborate on scenarios
- Given-When-Then (or Given-Then) allows for rapid review by various representatives
- GIVEN events = current state
- WHEN new COMMANDS = new intent
- THEN new events are published

For example, in this application (a small test sketch illustrating this is included at the end of this post):
- GIVEN Lesson Added
- WHEN Book Lesson
- THEN Lesson Booked

Just be careful to make sure that each specification belongs to one command or one view.

In Axon Framework, we use aggregates to organize the commands and events belonging to a certain part of the business. This allows different parts of an application to grow independently. In the diagram above, each square with a yellow sticky note is an example of an aggregate. As you can see, the events, commands, views, and even "no events" are organized and can be identified with this simple visual diagram. Seeing the boundaries and components so clearly allows us to translate this model into code quickly without losing any information. For instance, as seen above, writing "acceptance" tests is very easily done here. I will talk more about aggregates in the next blog post… but in the meantime, you can listen to my podcast with Allard Buijze, "All about Aggregates."

In short, Event Modeling helps create a transparent system that lets every department in a business see how the system is going to work and what can easily be changed. It provides a simple solution for designing and evolving complex systems, and I highly recommend it. Once the design is completed, translating the model into code becomes easily manageable. In the next article, Ivan and I will discuss the different tools that Axon Framework and Axon Server provide to make the coding process of our application easier. Until then… happy coding!

Many thanks to my colleague Ivan Dugalic for his help with this project. The full Miro board for this project is available online. For more information on Event Modeling, please see Adam's article detailing his process and Vijay Nair's interview with him on InfoQ. I will also have a podcast on Exploring Axon coming up with Adam later this month.
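As a footnote to the Given-When-Then pattern above, here is a small framework-agnostic sketch in Python showing how a specification from the model maps directly onto an executable test. It is my own illustration, not from the article; the real project uses Axon Framework's Java test fixtures, and all class, command, and event names here are hypothetical.

from dataclasses import dataclass

# Hypothetical events and command taken from the lesson-scheduling model.
@dataclass
class LessonAdded:
    lesson_id: str

@dataclass
class LessonBooked:
    lesson_id: str
    student: str

@dataclass
class BookLesson:
    lesson_id: str
    student: str

class LessonAggregate:
    """Rebuilds state from past events and decides which new events to publish."""
    def __init__(self, history):
        self.available = {e.lesson_id for e in history if isinstance(e, LessonAdded)}
        self.booked = {e.lesson_id for e in history if isinstance(e, LessonBooked)}

    def handle(self, command: BookLesson):
        if command.lesson_id in self.available and command.lesson_id not in self.booked:
            return [LessonBooked(command.lesson_id, command.student)]
        return []  # nothing to publish; lesson unknown or already booked

def test_book_lesson():
    # GIVEN Lesson Added, WHEN Book Lesson, THEN Lesson Booked
    given = [LessonAdded("monday-10am")]
    when = BookLesson("monday-10am", "alice")
    then = LessonAggregate(given).handle(when)
    assert then == [LessonBooked("monday-10am", "alice")]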
Abstraction for Database access

I need to iterate through a list of an abstract class's instances, each of which implements a method to load its data from a database. For simplicity, let's say the abstract class contains only that method:

public abstract class AManagerBase
{
    public abstract Response InitiateManagerResources(IDataManager dbManager);
}

When a class derived from this base class is written, the method is implemented by calling the IDataManager's ExecuteCommand method, which accepts an SQL command to retrieve data from that class's table in the DB. For example:

var tableData = dataManager.ExecuteCommand($"SELECT DC.* FROM {CLASS_TABLE_NAME} DC");

I want to use some sort of abstraction for the DB access instead of sending SQL commands. I read about the Repository Pattern, but the problem in my case is that each class written in the future will have its own table in the database with its own data, and I can't add entities to the shared module each time a new class is written. The bottom line is that I want to be able to replace the DB in the future (currently an Access DB) with some other store (maybe an XML file) without changing all the libraries already written. Some more examples:

// Independent library A.dll
// A.dll has a table named A_Table in the DB
public class ClassA : AManagerBase
{
    public override Response InitiateManagerResources(IDataManager dbManager)
    {
        var loadedData = dbManager.ExecuteCommand("Select ...");
        // create my own defined entities from the returned data
    }
}

// Independent library B.dll
// B.dll has two tables named B_Table1 and B_Table2 in the DB
public class ClassB : AManagerBase
{
    public override Response InitiateManagerResources(IDataManager dbManager)
    {
        var loadedData = dbManager.ExecuteCommand("Select ...");
        // create my own defined entities from the returned data
    }
}

Thanks.

Why don't you use some ORM and just switch to another DB provider later?

Wouldn't Entity Framework handle this for you?

@kamo can you please show a simple example, so I will understand the idea?

@auburg thanks, but Entity Framework doesn't work with an MS Access DB, and if in the future I want to change the DB to, say, an XML file, that is also not supported. Plus, the tables are added to the DB manually every time a new derived class is written.

To begin with, in your dbManager ExecuteCommand method I would remove the SQL-like SELECT statement and have that done internally, rather than passed in by the client classes in their InitiateManagerResources calls. I would have a configuration file that maps each dll to whatever table(s) it accesses, i.e.

A=A_Table
B=B_Table1,B_Table2
...

When each class from that dll is instantiated, either pass in the table list in the constructor or have each class read it in itself. Then pass this table list as a string to the dbManager.ExecuteCommand method, i.e. dbManager.ExecuteCommand("A_Table"). Hopefully the db manager then has enough information to execute the query and return results. If / when you move to XML, the only thing that changes is the format of the data in your mapping file (depending on the layout of your XML file). You could have another implementation of IDataManager that understands how to parse this XML file (using LINQ to XML perhaps?), but your client classes wouldn't change, because they're just passing in data from your mapping file. Hope this helps...

Sorry for the delay in accepting your answer. I've tried to find a more convenient solution, but apparently I can't find one. I've ended up using part of your answer.
Each dll class's init method will call the ExecuteCommand method with "section" names, and the data manager will use those names to pull the corresponding data from the database (and, in the future, from the XML file). I've skipped the mapping-file part for now. If we decide to move to XML files, we'll keep the same structure as the existing tables. Thank you.
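For illustration, here is a small sketch of what that section-name-based abstraction could look like. This is my own reading of the approach, not code from the thread: the interface member, the DataTable return type, and the AccessDataManager / XmlDataManager names are all assumptions.

using System.Data;
using System.Xml.Linq;

// The client libraries only ever ask for a named "section"; they never build SQL.
public interface IDataManager
{
    DataTable LoadSection(string sectionName);
}

// One implementation maps a section name onto an Access table behind the scenes.
public class AccessDataManager : IDataManager
{
    public DataTable LoadSection(string sectionName)
    {
        // Assumption: section name == table name; real code would run an
        // OleDb query here and fill the DataTable from the Access database.
        var table = new DataTable(sectionName);
        // ... execute "SELECT * FROM <sectionName>" and fill `table` ...
        return table;
    }
}

// A later implementation can read the same "sections" from an XML file instead,
// and no client library has to change.
public class XmlDataManager : IDataManager
{
    private readonly XDocument _doc;
    public XmlDataManager(string path) => _doc = XDocument.Load(path);

    public DataTable LoadSection(string sectionName)
    {
        var table = new DataTable(sectionName);
        table.Columns.Add("Value");
        foreach (var element in _doc.Descendants(sectionName))
            table.Rows.Add(element.Value);
        return table;
    }
}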
Disentangle Modflow objects as much from the MetaSWAP objects as possible

At present we are in a state where the MetaSWAP objects contain Modflow objects. I decided to do this for conceptual reasons: the relevant Modflow well data is contained in the MetaSWAP sprinkling object. I didn't think this through enough, as it creates all kinds of headaches: it means these packages also have to be regridded, dumped, and clipped in some way. Also, if we want to regrid Modflow packages from MetaSWAP objects, grid-agnostic wells have to be assigned, meaning wells have to be assigned to layer, row, column twice: once for Modflow, once for MetaSWAP. It is a lot easier to only ask for Modflow data when it's really needed: upon calling .write.

See my earlier comment:

Doing some more thinking on this: it is probably better to refactor the MetaSWAP code base in such a way that the MODFLOW objects are only necessary upon writing, instead of already upon initialization. For example, for the Sprinkling package:

Current state:

class Sprinkling:
    def __init__(
        self,
        max_abstraction_groundwater: xr.DataArray,
        max_abstraction_surfacewater: xr.DataArray,
        modflow_wel: WellDisStructured,
    ):
        self.well = modflow_wel
        ...

    def write(
        self,
        directory: Union[str, Path],
        index: np.ndarray,
        svat: xr.DataArray,
    ):
        ...

Initial idea:

class Sprinkling:
    def __init__(
        self,
        max_abstraction_groundwater: xr.DataArray,
        max_abstraction_surfacewater: xr.DataArray,
        modflow_wel: Well,
    ):
        self.well = modflow_wel
        ...

    def write(
        self,
        directory: Union[str, Path],
        index: np.ndarray,
        svat: xr.DataArray,
    ):
        ...

Proposed:

class Sprinkling:
    def __init__(
        self,
        max_abstraction_groundwater: xr.DataArray,
        max_abstraction_surfacewater: xr.DataArray,
        mf6_wellname: str,
    ):
        self.mf6_wellname = mf6_wellname
        ...

    def write(
        self,
        directory: Union[str, Path],
        index: np.ndarray,
        svat: xr.DataArray,
        modflow_wel: Mf6Wel,
    ):
        ...

This proposed approach has the following advantages over the initial idea:

- Modflow objects are only used when really necessary. This has the advantage that we can directly use the more low-level Mf6Wel object instead of the grid-agnostic Well and LayeredWell packages, as the latter still have to be assigned to cells. We avoid having to do this twice.
- Mutations of Modflow data, for example regrid_like, do not have to be done twice. In the initial idea, regridding would have to be called once for the MetaSWAP model and once for the Modflow6Simulation, as we do not check whether the Modflow package assigned to the two different models is a copy or the same package for each model.
- The proposed approach also means no calls to Modflow6Package.regrid_like are done outside the mf6 module, reducing clutter a bit.

The same approach can be taken for the CouplerMapping object in iMOD Python, and the NodeSvatMapping, RechargeSvatMapping, and WellSvatMapping objects in primod. The MetaMod.write method would be the place to fetch the Modflow packages and pass them on through to MetaSwapModel.write.

Originally posted by @JoerivanEngelen in #728

Just a somewhat related frustration: right now 5 files are required for coupling Modflow models to MetaSWAP: 2 for MetaSWAP, 3 for iMOD Coupler. In an ideal world, the 2 files for MetaSWAP wouldn't be necessary, and the software would receive the cellids to couple to directly from iMOD Coupler.