Why’s my new outlet tripping the switch in the panel? So I bought new outlets to replace the old ones in my house and everything was going fine until I reached the kitchen. The panel switch for this particular plug is split between two (A/B) (if that makes any difference). When I put the new plug in, I can’t turn the switch in the panel on as it trips instantly. As soon as I put the old one back, it had no problem. Why is this happening? I don’t think there’s any difference between the two of them other than the fact that this one’s newer. Any ideas? Try reading this. Is that thing called a "plug" in Canada? I would have called it a "receptacle", "outlet" or "socket", but a "plug" means the (male) counterpart to me. We’d refer to it loosely as a “wall plug”, but I was writing quickly and missed that in the title. As you can see at the start of my question, I referred to it as an outlet. The usual term is receptacle, but outlet is OK - especially if you pick up last year's model at...the outlet store :-) Off-topic, but that loop in the ground wire looks too tight. Should OP uncoil that before reassembly? @Freiheit that’s an optical illusion through the photo. There are no tight loops. I'd be concerned about the bare wire, which would be far better covered with insulation. @Tim you mean the ground wire, which is always bare? Or the wire wrapped around the screws? Which I cover with electrical tape when done? @ACanadianCoder - I do. In the UK at least, that must have sheathing - green/yellow. As the socket gets pushed back into the pattress, it could easily contact or short the other terminals. Maybe not regs in Canada? Can't think why not. Insulating tape will work - sheathing is better. @Tim In the US/Canada, they keep ground/earth wires bare. I know, it's confusing at first. Would there be any problem with sheathing them? I'd do it anyway. 
belt, braces - piece of string TL;DR Remove the tab on the hot (red/black) side. "A/B" plus the symptoms sound like you have a Multi Wire Branch Circuit, or MWBC. With an MWBC, you can have the top receptacle's hot on one part of the circuit and the bottom receptacle's hot on the other part of the circuit. Each receptacle is then 120V hot-to-neutral, but the two hots are 240V apart. With the tab removed on the hot (red/black) side, the two circuits are separate on hot but share neutral. With the tab in place (factory default), you have a short circuit between the two hots - 240V at maximum possible current, which quickly (and correctly) trips the breaker. Remove the tab, only on the hot side, and everything should work. While you're at it, check the "A/B". There should be a common shutoff - i.e., either they are part of a double-breaker or have a "handle tie" between the two parts. If you are not sure, upload a picture of the breaker panel. GFCI Because this is in the kitchen, it should be (must be, on many new installations, depending on location) GFCI-protected. With non-MWBC circuits, this can be done at the receptacle or breaker. With MWBC, this can be done (practically speaking) only at the breaker. If your breaker has a "TEST" button, then you are probably fine. If not, upload a picture to get some advice. Ah, this makes sense. Thanks for the thorough answer. Also, there’s a handle tie between the two parts in my panel. In some jurisdictions within Canada, kitchen outlets need to be GFCI-protected only within a certain distance, about 4 ft of a sink (different provinces have settled on different precise numbers of cm); outlets on other counters may be split 15A or single-circuit 20A T-slots at the homeowner's choice. The OP seems to be in Canada (where split circuits seem to have been rather more common). Given that this pair of breakers have tripped on some number of dead shorts, it might be a good idea to replace it with a new one. 
"With the table in place" should say "tab". It's only a 2-character change, so I can't edit it myself. I can't believe the number of MWBC posts I've seen recently. A decade of never seeing it and then wham. I wonder if this was due to construction cost reductions or code changes that now, say 15 years after being built, everything is being redone. @J.Hirsch Probably more to do with Covid-19. A lot of people are remodeling/renovating as they have time to spare because they can't go anywhere due to lockdown and money to spare because they can't go on holiday. DIY stores do brisk business these days. @CCTO "Given that this pair of breakers have tripped on some number of dead shorts" got a reference for that? I was not aware, and I now worry about the number of times I may have tripped mine. @Jeffrey Good question; as you may suspect, this is community wisdom that's worth tracing back to sources. Here's one: (http://www.ncwhomeinspections.com/Circuit+Breaker+Replacement) "If it a maximum rated short has caused the breaker to trip more than once it is most likely time to replace the breaker." But there is more analysis in that article, well worth reading. You need to remove the tab between the brass screws on the outlet. This will separate the two hot feeds. But only between the brass (hot) screws. Don't remove the tab between the silver (neutral) screws.
STACK_EXCHANGE
Solved: Latest version of Qt can't compile anything in .c with MinGW (Win 10 x64) I have a problem with my fresh installation of Qt 5.6.0 with the latest MinGW listed in the Qt installer. It does not compile my files if I am set to the debugging mode of a project (the computer icon above the run icon, in the bottom left corner of Qt Creator). But if I switch it to the deployment mode, it suddenly does work, but then the debugger doesn't do anything in that case. When I try to compile & run a new, just-written project in debugging/development mode (I am not sure of the exact name, since it is still shown in my language - I am not using it in English) it gives me the following: error: INTERNAL: readdir: No such file or directory. Stop. When I open the error, I see this:

20:37:48: Configuration unchanged, skipping qmake step.
20:37:48: Starting: "C:\Qt\Tools\mingw492_32\bin\mingw32-make.exe"
mingw32-make: *** INTERNAL: readdir: No such file or directory. Stop.
20:37:48: The process "C:\Qt\Tools\mingw492_32\bin\mingw32-make.exe" exited with code 2.
Error while building/deploying project Matice_pointer_na_pointer (kit: Desktop Qt 5.6.0 MinGW 32bit)
When executing step "Make"
20:37:48: Elapsed time: 00:00.

I did some digging on this forum, but sadly, I didn't find any solution yet. No help came from Google, either. I checked my PATH: there is no MinGW or anything like that in there (in case you were wondering). Hi and welcome to devnet. Can you share your .pro file?

TEMPLATE = app
CONFIG += console
CONFIG -= app_bundle
CONFIG -= qt
SOURCES += main.c

Here it is. @SGaist Any ideas? Just tested it with a hello world main.c both in debug and release mode and no problem at all. Do you have your project in a path with spaces? @SGaist Apparently, having the project in a folder with an "&" symbol in its file path was the issue. I moved all my projects into a folder without spaces and special characters, and it works. Thank you, I don't think I'd have figured it out without your help. 
:) Paths can be tricky on Windows. You're welcome! Since you have it working now, please mark the thread as solved using the "Topic Tool" button so that other forum users may know a solution has been found :) asthana.ujjwal last edited by @SGaist I am getting the same error. This is my project path: As you can see, I don't have any spaces or special characters in my path, except for the colon of the C drive, which, to my knowledge, cannot be removed. Can you please help me out? Thanks in advance. Really appreciate the help. P.S. I am a newbie here. So, please try to help me in layman's terms. Hi @asthana-ujjwal and welcome to devnet, Can you show your .pro file ?
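Since both failures in this thread came down to characters in the project path, a quick pre-flight check can save some head-scratching. A minimal shell sketch (the paths are hypothetical, and it only screens for the spaces and "&" discussed above; other shell metacharacters could in principle also upset mingw32-make):

```shell
# Flag project paths containing characters known to confuse mingw32-make.
check_path() {
  case "$1" in
    *" "*|*"&"*) echo "unsafe" ;;
    *)           echo "safe" ;;
  esac
}

check_path "C:/Users/me/Projects & Demos/Matice"   # -> unsafe
check_path "C:/Users/me/Projects/Matice"           # -> safe
```

If the check says "unsafe", move the project (as the poster did) rather than trying to quote your way around the build tool.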
OPCFW_CODE
NuGet 3.0 Preview On November 12, 2014, as part of the Visual Studio 2015 Preview release, we released NuGet 3.0 Preview. This is a big release for us (albeit a preview), and we’re excited to start getting feedback on our changes. Visual Studio 2012+ This NuGet 3.0 Preview is included in Visual Studio 2015 Preview. We are working to get preview drops out for Visual Studio 2012 and Visual Studio 2013 very soon. We previously shared our intent to discontinue updates for Visual Studio 2010, and we did make that difficult decision. Brand New UI The first thing you’ll notice about NuGet 3.0 Preview is our brand new UI. It’s no longer a modal dialog; it’s now a full Visual Studio document window. This allows you to open the UI for multiple projects (and/or the solution) at once, tear the window off to another monitor, dock it however you’d like, etc. Beyond the usability differences because of abandoning the modal dialog, we also have lots of new features in the new UI. Perhaps the most requested UI feature is to allow version selection for package installation and update–this is now available. Whether you are installing or updating a package, the version dropdown allows you to see all of the versions available for the package, with some notable versions promoted to the top of the list for easy selection. You no longer need to use the PowerShell Console to get specific versions that are not the latest. Combined Installed/Online/Updates Workflows Our previous UI had 3 tabs for Installed, Online, and Updates. The packages listed were specific to those workflows and the actions available were specific to the workflows as well. While that seemed logical, we heard that many of you would often get tripped up by this separation. We now have a combined experience, where you can install, update, or uninstall a package regardless of how you got the package selected. 
To assist with the specific workflows, we now have a Filter dropdown that lets you filter the packages visible, while the actions available for the package stay consistent. By using the “Installed” filter, you can easily see your installed packages and which ones have updates available, and you can then either uninstall or update a package by changing the version selection to change the action available. It’s common to have the same package installed into multiple projects within your solution. Sometimes the versions installed into each project can drift apart and it is necessary to consolidate the versions in use. NuGet 3.0 Preview introduces a new feature for just this scenario. The solution-level package management window can be accessed by right-clicking on the solution and choosing Manage NuGet Packages for Solution. From there, if you select a package that is installed into multiple projects, but with different versions in use, a new “Consolidate” action becomes available. In the screen shot below, Newtonsoft.Json was installed into the SamplesClassLibrary with version 6.0.4 and installed into SamplesConsoleApp with a different version. Here’s the workflow for consolidating onto a single version.
- Select the Newtonsoft.Json package in the list
- Use the Version dropdown to select the version to be consolidated onto
- Check the boxes for the projects that should be consolidated onto that version (note that projects already on the selected version will be greyed out)
- Click the Consolidate button to perform the consolidation
Regardless of which operation you’re performing (install/update/uninstall), the new UI now offers a way to preview the changes that will be made to your project. This preview will show any new packages that will be installed, packages that will be updated, and packages that will be uninstalled, along with packages that will be unchanged during the operation. 
In the example below, we can see that installing Microsoft.AspNet.SignalR will result in quite a few changes to the project. Using the PowerShell Console, you’ve had control over a couple of notable installation options. We’ve now brought those features into the UI as well. You can now control the dependency resolution behavior for how versions of the dependencies are selected. You can also specify the action to take when content files from packages conflict with files already in your project. We used to get quite a bit of feedback on our UI having both the scrolling and paging paradigms when listing packages. It was pretty common to have to scroll to the bottom of the short list, click the next page number, and then scroll again. With the new UI, we’ve implemented infinite scrolling in the package list so that you only need to scroll–no more paging. Make it Work, Make it Fast, Make it Pretty We are excited to get this new UI out for you to try out. During this Preview milestone, we’ve been following the good old adage of “Make it work, make it fast, make it pretty.” In this preview, we’ve accomplished most of that first goal–it works. We know it’s not quite fast yet, and we know it’s not quite pretty yet. Trust that we’ll be working on those goals between now and the RC release. In the meantime, we would love to hear your feedback about the usability of the new UI–the workflows, operations, and how it feels to use the new UI. There are a couple of functions that we’ve removed when compared to the old UI. One of these was intentional, and the other one just didn’t get done in time. Searching “All” Package Sources The old UI allowed you to perform a package search against all of your package sources. We’ve removed that feature in the UI and we won’t be bringing it back. This feature used to perform search operations against all of your package sources, weave the results together, and attempt to order the results based on your sorting selection. 
We found that search relevance is really hard to weave together. Could you imagine performing a search against Google and Bing and weaving the results together? Additionally, this feature was slow, easy to accidentally use, and we believe it was rarely actually useful. Because of the problems the feature introduced, we received a number of bug reports on it that could never really be fixed. We used to have an “Update All” button in the old UI that isn’t there in the new UI yet. We will resurrect this feature for our RC release. New Client/Server API In addition to all of the new features in our new package management UI, we’ve also been working on some implementation details for NuGet’s client/server protocol. The work we’ve done is to create “API v3” for NuGet, which is designed around high availability for critical scenarios such as package restore and installing packages. The new API is based on REST and Hypermedia and we’ve selected JSON-LD as our resource format. In the NuGet 3.0 Preview bits, you’ll see a new package source called “preview.nuget.org” in the package source dropdown. If you select that package source, we’ll use our new API to connect to nuget.org. We’ve made the preview source available in the UI while we continue to test, revise, and improve the new API. In NuGet 3.0 RC, this new API v3-based package source will replace the v2-based “nuget.org” package source. Despite the investment we’re putting into API v3, we’ve made all of these new features also work with our existing API v2 protocol, which means they will work with existing package sources other than nuget.org as well. New Features Coming Between now and 3.0 RTM, we are also working on some fundamental new NuGet features, beyond what you’ll see in the UI. Here’s a short list of salient investment areas: - We’re partnering with the Visual Studio and MSBuild teams to get NuGet deeper into the platform. 
- We’re working to abandon installation-time package conventions and instead apply those conventions at packaging time by introducing a new “authoritative” package manifest. - We’re working to refactor the NuGet codebase to make the client and server components reusable in different domains beyond package management in Visual Studio. - We’re investigating the notion of “private dependencies” where a package can indicate that it has dependencies on other packages for implementation details only, and those dependencies shouldn’t be surfaced as top-level dependencies. Please keep an eye on our blog for more progress and announcements for NuGet 3.0!
OPCFW_CODE
Every now and then, I like to intersperse my regular writing on startups, business, strategy and other serious stuff with something a little more light-hearted. This is one of those times. I’ve been watching the world of Web 2.0 and many of the Internet startups with both excitement and angst. Excitement, because it’s great to see all the entrepreneurial activity and people finally coming out from under their desks and starting cool companies again. Angst, because I still have this nagging feeling in the back of my brain that a lot of what is starting to happen harkens back to the age of the dotcom (or the dot-bomb). I can’t help but do some pattern matching to see whether the current crop of web startups is not just a semi-clever remaking of the dot-coms of the past bubble. Here are some random (and intended to be semi-humorous) thoughts on some things that have changed, and others that haven’t. Disclaimer: I’m involved in a few Internet startups as either an early investor or advisor, so I’m drinking the Kool-Aid a bit this time as well. (Last time around, I was running a profitable software company, which was the easiest way to not get invited to the Web 1.0 party.) So, enough of the preamble… Is Web 2.0 Really DotCom 2.0? Then: Many startups were focused on “eyeballs”. Now: Many startups are focused on the acquisition and monetization of Internet traffic. (hint: this sounds awfully similar). Then: Startups threw big launch parties to celebrate a launch. Now: Startup founders eat lunch away from their desks to celebrate a launch. (this is much better). Then: It’s all about building a user base, we’ll worry about revenues later. Now: It’s all about creating a critical mass so we can leverage the network effects and social dynamics. (hint: this sounds awfully similar). Then: Startups spent big money on advertising (remember the Super Bowl ads?). Now: Startups get TechCrunched and get initial visibility for free. (this is good). 
Then: Every big success (defined as a company that raised top-tier VC) had 20+ copycats. Now: Every big success (defined as a company that sold for millions) has 20+ copycats. (a little better). Then: VCs invested in guys wearing black turtlenecks and every third person you met was head of “biz-dev”. Now: VCs invest in geeks that actually do real work. (this is a lot better). [Note: Business geeks do real work too]. Then: We had things like the Pets.com sock puppet. Now: We have shiny and rounded logos, company names with missing vowels and fant.abulo.us domain names. (frankly, I liked the sock puppet and my brain still has to pause every time I try to type one of those fancy domain names). This time, I’ve bought front row tickets and am having the time of my life. It’s going to be an interesting couple of years…
OPCFW_CODE
Hi Till and Katharina. Please find attached .CSV files for the outside data, and the WTB biome. Data is from midnight 07th September 2018 to midnight 08th September 2018 at 8-minute intervals. Hope this helps. Thank you so much for the data! We parsed it with SuperCollider (our sound synthesis programming language of choice) and created a selection of sonifications. There are many ways of turning the data into sound, and for now we decided to use relatively simple approaches. Each of the sonifications we made is a “parameter mapping sonification”, i.e., parameters of a sound synthesis engine are controlled by the data points. To hear actual changes over time, we sped up the playback from the actual recording time by a factor of 10,000. This means that the 24 hours of data you provided to us turn into 8.64 seconds of sound. To hear the periodicity of the (circadian) rhythm, we play the data 4 times, i.e. the complete sonifications are about 35 seconds long. We hope you’ll get something out of this, if only a smile 🙂 Till & Katharina This is the mix of all the below sonifications. Since each of them emphasises different aspects of the data, here you can hear them all in one go. We strongly recommend the use of headphones or a good loudspeaker setup for listening to the sounds. The most straightforward sonification type is a frequency mapping of all dimensions. This means that the variation in the data collected by one sensor (e.g. hl_temp_F) results in the change of the pitch of one oscillator. There are 24 oscillators, one for each sensor/actor. A variation of the above, using a bandpass filter on noise sources. The sound is much easier on the ears, and artifacts from overlapping periodic waveforms are minimised. Perhaps the second-most straightforward sonification type is an amplitude mapping of all dimensions. This means that the variation in the data collected by one sensor (e.g. hl_temp_F) results in the change of the amplitude of one oscillator. 
There are 24 oscillators, one for each sensor/actor. Each oscillator has a fixed frequency and position in the stereo field. This sonification is a variation of the previous one: it adds a little “bump” in the amplitude each time an update of the data arrives; this helps to understand the granularity of the data collection and marks possible artefacts emerging from the sampling rate of the data itself. Both amplitude sonifications have a slight reverb added, emphasising phase shifting, which should be noticeable when listening to them with headphones. If you are now curious about data sonification, you might want to look into this (free) sonification handbook. It has lots of information on how to approach the theme of data sonification and helps to interpret the data. Each sonification type has its own synthesis engine. Here is their definition: For the frequency mapping sonification, we set up a data structure that contains information on the range of frequencies onto which each data dimension will be mapped. Last but not least, there is the player, a Routine that iterates through the rows of data, adjusting parameters for the synthesis engine accordingly.
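For readers without SuperCollider, the frequency-mapping idea translates directly: each sensor's value range is mapped linearly onto a pitch range for one oscillator. A minimal Python sketch of just that mapping (the 200-2000 Hz range is an assumption for illustration, not the range used in the actual pieces):

```python
import numpy as np

def map_to_freq(values, lo=200.0, hi=2000.0):
    """Linearly map one sensor's readings onto a frequency range in Hz."""
    v = np.asarray(values, dtype=float)
    vmin, vmax = v.min(), v.max()
    if vmax == vmin:                          # flat signal: park it mid-range
        return np.full(v.shape, (lo + hi) / 2.0)
    return lo + (v - vmin) / (vmax - vmin) * (hi - lo)

# The time compression described above: 24 h of data at a 10,000x
# speed-up gives 8.64 s of sound per repetition.
readings_per_day = 24 * 60 // 8      # 8-minute intervals -> 180 readings
audio_seconds = 24 * 3600 / 10_000   # 8.64
```

In the SuperCollider version, one such mapping drives each of the 24 oscillators.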
OPCFW_CODE
Real troubleshooters know there comes a time when you really need to dig in to the details of what is really going on with a system. Wireshark, Process Monitor and TCPView are just a few tools that come to mind. Then there are massive numbers of logs with which we have to contend. It’s not uncommon to have your precious screen real estate taken up by window after window of troubleshooting tools. I’ve spent many late nights switching back and forth trying to correlate a packet in Wireshark with an event log entry while referring to a log file. Wouldn’t it be nice to have a single pane of glass to manage all these sources of information? Look no further, Microsoft Message Analyzer is here! More than a Replacement for Network Monitor You probably remember some years ago Microsoft had their Network Monitor tool to perform packet captures. It was a confusing mess and generally not very good. Then along came Ethereal, a very powerful packet capture and analysis tool. You know it today as Wireshark, the go-to solution for network capture and analysis. I’ve used Wireshark with success for years. So when I saw the announcement of Microsoft Message Analyzer, I didn’t give it much mind. I dismissed it due to a combination of it being billed as the “Successor to Microsoft Network Monitor” and the fact that I was already comfortable with Wireshark. Little did I know this tool can do so much more. Message Analyzer can not only capture traffic and read captures (including Wireshark’s .pcapng format), it can analyze information from a whole host of other sources. These sources include: Windows event logs, *.log files (such as netlogon.log), PowerShell, SQL and Azure. Looking Around the Interface The foundation of data in Message Analyzer is the “message.” A message can be anything from a captured packet or frame to an event from Event Viewer. Messages can be combined or “stacked” into sessions and conversations. 
For example, here’s a message stack from a TCP packet: Now, look at what happens when all the messages in the stack are expanded: Check out the “Module” column. You are taken through the entire communication hierarchy. You have a message for the TCP conversation, the IP packets, the Ethernet frames, even the binary. You can also set up other “views” to get more detail for every message in the stack. You can review the entire message stack in order, the details of each message, and data for each field. Things get even better when there is a back and forth between two hosts. The blue icon indicates there is a two-way conversation. The sub message stacks drill all the way down to binary data. The number of views and layouts is too great to cover here. You can reach out to us if you have questions about this. Starting with Windows 8.1/2012 R2, the capture driver for Message Analyzer is baked into the Windows OS. This means that, with proper WinRM configuration, you can capture from a remote machine. Even better, you can capture from multiple machines simultaneously! When starting a new live trace, you can edit the target computers and enter the name or IP of the computer(s) you wish to capture from. In today’s world, encryption is king. Nearly everything is encrypted. This makes our job harder, as we’re trying to see what no one wants to be seen. Under Tools > Options, you can import a server-side SSL certificate and decrypt all the data. Or, you can just select another scenario for your capture. Options include capturing at the Windows Firewall level before local IPsec encryption or at the application before being encrypted by HTTPS. Built-In Intelligence and Multiple Scenarios Catching IPsec and HTTP traffic before encryption is just the beginning. There are traces/captures you can do to troubleshoot SMB directly, USB and Bluetooth. During trace setup, just select the desired trace scenario (there are currently 21) and you’re off. 
Additionally, Message Analyzer intelligently checks each message for errors, warnings or anomalies. You can select views to provide you with the information you want to see. There are views that allow you to see the timing of the packets, the response, even the process name and kernel module. Putting it All Together Finally, you can make Message Analyzer your “one pane of glass” for troubleshooting with multiple data sources. You can open data sources in separate analysis grids: You can place them side by side: There are so many possibilities for this tool that it is essential for your IT bag of tricks. For more information: - Microsoft Message Analyzer on TechNet - Microsoft Message Analyzer Operating Guide on TechNet - Message Analyzer tutorials on YouTube
OPCFW_CODE
How to find out if a ListView has scrolled to the topmost position? I have a ListView. First it's scrolled down; now when we scroll up, it reaches the topmost point. I want to detect that. Is there any way? I am developing an application with API level 8. Please help. edit See comments below as to why, especially on different API versions (esp. later ones), this isn't a foolproof way to see if your list is at the top (padding etc.). However, it does give you a start for a solution on devices below API 14:

private boolean listIsAtTop() {
    if (listView.getChildCount() == 0) return true;
    return listView.getChildAt(0).getTop() == 0;
}

As far as my implementation years ago - this worked perfectly at the time. This doesn't always return 0 when at the top of a list for me (you can add ...getTop() > 0 || ...getTop() < getChildAt(0).getHeight() or similar if you need to check if the top item is visible). This doesn't really work for me; while scrolling it will randomly return 0 for getTop of the 0-index child even if the list isn't at the top. It will return the top of the first visible child, I think. I'm sorry, but this solution is not really correct because it doesn't take into account any padding that the first element can have. This only tells if the first list element is visible, but such a first element might still not be totally scrolled up. I know this question is old, but it shows up top in Google search results. There is a new method introduced in API level 14 that gives exactly what we needed: http://developer.android.com/reference/android/view/View.html#canScrollVertically%28int%29 For older platforms one can use similar static methods of ViewCompat in the v4 support library. See edit below. Unlike Graeme's method, this method is immune to problems caused by the internal view reuse of ListView and/or header offset. Edit: final solution. I've found a method in the source code of SwipeRefreshLayout that handles this. 
It can be rewritten as:

public boolean canScrollUp(View view) {
    if (android.os.Build.VERSION.SDK_INT < 14) {
        if (view instanceof AbsListView) {
            final AbsListView absListView = (AbsListView) view;
            return absListView.getChildCount() > 0
                    && (absListView.getFirstVisiblePosition() > 0
                        || absListView.getChildAt(0).getTop() < absListView.getPaddingTop());
        } else {
            return view.getScrollY() > 0;
        }
    } else {
        return ViewCompat.canScrollVertically(view, -1);
    }
}

You may need to add custom logic if the passed-in view is a custom view. Be aware that the ViewCompat methods for canScrollVertically and canScrollHorizontally always return false on platform levels below ICS. The implementation sets those methods to return false unless overridden, and only the impl for ICS overrides the methods. @lillicoder You're right. We can't use the implementations on older platforms. I've edited the answer to make it work for all platforms. My friends, combining Graeme's answer with the onScroll method...

listView.setOnScrollListener(new AbsListView.OnScrollListener() {
    @Override
    public void onScrollStateChanged(AbsListView view, int scrollState) {
    }

    @Override
    public void onScroll(AbsListView view, int firstVisibleItem, int visibleItemCount, int totalItemCount) {
        if (firstVisibleItem == 0 && listIsAtTop()) {
            swipeRefreshLayout.setEnabled(true);
        } else {
            swipeRefreshLayout.setEnabled(false);
        }
    }
});

private boolean listIsAtTop() {
    if (listView.getChildCount() == 0) return true;
    return listView.getChildAt(0).getTop() == 0;
}

Thumbs up for mentioning swipeRefreshLayout; specifically what I needed. You will need to check what the first visible position is, then apply Graeme's solution to see if the first visible ListView item is at the top position. Something like lv.getFirstVisiblePosition() == 0 && (lv.getChildCount() == 0 || lv.getChildAt(0).getTop() == 0). You can use an OnScrollListener to be notified that position 0 is now visible. Use the onScroll method. 
My list item is really big, so I can't compare to see if firstVisibleItem is 0, as the item covers half the screen. If the item is only half visible (at the top), then firstVisibleItem is also 0, which is not the topmost point of the ListView. The asker wants to know about the topmost point, not when the top item is reached. This will still trigger if on the first item but not at the topmost point. This question is old, but I have a solution that works perfectly, and it may work for someone looking for a solution.

int limitRowsBDshow = 10; // size limit of your list
listViewMessages.setOnScrollListener(new AbsListView.OnScrollListener() {
    int counter = 1;
    int currentScrollState;
    int currentFirstVisibleItem;
    int currentVisibleItemCount;
    int currentTotalItemCount;

    @Override
    public void onScrollStateChanged(AbsListView view, int scrollState) {
        this.currentScrollState = scrollState;
        this.isScrollCompleted();
    }

    @Override
    public void onScroll(AbsListView view, int firstVisibleItem, int visibleItemCount, int totalItemCount) {
        this.currentFirstVisibleItem = firstVisibleItem;
        this.currentVisibleItemCount = visibleItemCount;
        this.currentTotalItemCount = totalItemCount;
    }

    private void isScrollCompleted() {
        if (this.currentVisibleItemCount > 0 && this.currentScrollState == SCROLL_STATE_IDLE) {
            /*** detect if there's been a scroll which has completed ***/
            counter++;
            if (currentFirstVisibleItem == 0 && currentTotalItemCount > limitRowsBDshow - 1) {
                linearLay20msgMas.setVisibility(View.VISIBLE);
            }
        }
    }
});

I found this code a while ago (here on Stack Overflow), but I can't find the original to credit it. Graeme's answer is close but is missing something that user2036198 added: a check for getFirstVisiblePosition(). getChildAt(0) doesn't return the very first view for the very first item in the list. AbsListView implementations don't make a single view for every position and keep them all in memory. Instead, view recycling takes effect to limit the number of views instantiated at any one time. 
The fix is pretty simple:

public boolean canScrollVertically(AbsListView view) {
    boolean canScroll = false;
    if (view != null && view.getChildCount() > 0) {
        // First item can be partially visible; top must be 0 for the item
        canScroll = view.getFirstVisiblePosition() != 0 || view.getChildAt(0).getTop() != 0;
    }
    return canScroll;
}

For best results on ICS or higher, always use ViewCompat from the v4 support library or View.canScrollVertically(). Use the above method on lower API levels, as ViewCompat always returns false for canScrollVertically() and canScrollHorizontally() below ICS.

If you can extend ListView directly, then you can use the protected method computeVerticalScrollOffset() inside the overridden method onScrollChanged(). When that protected method returns 0, your ListView has reached the top.

Code snippet:

listView = new ListView(this) {
    @Override
    protected void onScrollChanged(int l, int t, int oldl, int oldt) {
        super.onScrollChanged(l, t, oldl, oldt);
        if (computeVerticalScrollOffset() == 0) {
            // Reached top
        }
    }
};

Too late, but try this one; it works well with RecyclerView. -1 checks whether it can scroll to the top, while 1 checks whether it can scroll to the bottom:

if (listView.canScrollVertically(-1))
    listView.smoothScrollToPosition(0);
else
    onBackPressed();

lstView.setOnScrollListener(new AbsListView.OnScrollListener() {
    @Override
    public void onScrollStateChanged(AbsListView view, int scrollState) {
    }

    @Override
    public void onScroll(AbsListView view, int firstVisibleItem, int visibleItemCount, int totalItemCount) {
        if (0 == firstVisibleItem) {
            Toast.makeText(MyActivity.this, "Scroll to Top", Toast.LENGTH_SHORT).show();
        }
    }
});

The asker wants to know about the topmost point, not when the top item is reached. This will still trigger if on the first item but not at the topmost point.
How to do Yeo-Johnson feature normalization on test data?

I have training and test data as part of cross validation. As I normalize the training data using the Yeo-Johnson transform, to prevent data leakage I plan to save the lambda from the training-data normalization and use it for the test-data normalization. I wrote a small snippet to test this, as below:

import seaborn as sns
from scipy import stats
import matplotlib.pyplot as plt
import numpy as np

fig = plt.figure()
# fig = plt.figure(figsize=(10, 10), dpi=600)

ax1 = fig.add_subplot(421)
xTr = stats.loggamma.rvs(5, size=500) + 5
prob = stats.probplot(xTr, dist=stats.norm, plot=ax1)
ax1.set_xlabel('')
ax1.set_title('Probplot:Train')

ax2 = fig.add_subplot(422)
sns.distplot(xTr, color="skyblue")
ax2.set_title('Distribution of Training Data')

ax3 = fig.add_subplot(423)
xt_scipy, lmbda = stats.yeojohnson(xTr)
prob = stats.probplot(xt_scipy, dist=stats.norm, plot=ax3)
ax3.set_title('Probplot:Yeo-Johnson:Scipy on train')

ax4 = fig.add_subplot(424)
sns.distplot(xt_scipy, color="skyblue")
ax4.set_title('Distribution of Transformed Train Data')

ax5 = fig.add_subplot(425)
xTst = stats.loggamma.rvs(10, size=500) + 5
# xTst = stats.loglaplace.rvs(7, size=500)
prob = stats.probplot(xTst, dist=stats.norm, plot=ax5)
ax5.set_xlabel('')
ax5.set_title('Probplot:Test')

ax6 = fig.add_subplot(426)
sns.distplot(xTst, color="skyblue")
ax6.set_title('Distribution of Test Data')

ax7 = fig.add_subplot(427)
xtst_scipy = stats.yeojohnson(xTst, lmbda=lmbda)
prob = stats.probplot(xtst_scipy, dist=stats.norm, plot=ax7)
ax7.set_title('Probplot:Yeo-Johnson:Scipy on Test')

ax8 = fig.add_subplot(428)
sns.distplot(xtst_scipy, color="skyblue")
ax8.set_title('Distribution of Transformed Test Data')

plt.tight_layout(h_pad=0.9, w_pad=0.9)
plt.show()

This gives the following plots. I have the following questions:

Is the normalization step for the test data done correctly using SciPy, as shown in my code?
How can this be done in scikit-learn, using the previously computed lambda from the training data? The reason I ask is that scikit-learn's PowerTransformer and fit_transform for Yeo-Johnson do not allow passing a precomputed lambda.

Thank you, sedy

I think that you are misunderstanding how transformers work. fit_transform() is performed on the training set and computes the lambdas and the scaling function. Once they are computed, you can use the transform() function to apply the same transformation to the test set.

Concerning your first question, it is wise to use a scikit-learn transformer instead of a SciPy transformation, as they are standard and can be added to a pipeline. For the second question, you can use PowerTransformer without fitting it by setting the lambdas manually, as follows:

from sklearn.preprocessing import PowerTransformer

pt = PowerTransformer(method='yeo-johnson', standardize=False)
pt.lambdas_ = [1, 2]
pt.transform([[10, 20]])
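If you want to see exactly what "reusing a saved lambda" means without depending on either library, here is a minimal pure-Python sketch of the Yeo-Johnson formula applied with a lambda carried over from the training step. The function name yeo_johnson and the example lambda value are mine, not part of the SciPy or scikit-learn APIs.

```python
import math

def yeo_johnson(x, lmbda):
    """Yeo-Johnson transform of a single value x with a fixed lambda.

    This mirrors what stats.yeojohnson(x, lmbda=lmbda) computes per
    element: the train step estimates lambda, the test step reuses it
    unchanged, which is what prevents leakage.
    """
    if x >= 0:
        if lmbda != 0:
            return ((x + 1) ** lmbda - 1) / lmbda
        return math.log(x + 1)
    else:
        if lmbda != 2:
            return -(((-x + 1) ** (2 - lmbda) - 1) / (2 - lmbda))
        return -math.log(-x + 1)

# "Train": a lambda value (in practice estimated by maximum likelihood).
saved_lmbda = 0.5

# "Test": reuse the saved lambda on unseen values -- no re-fitting.
transformed = [yeo_johnson(v, saved_lmbda) for v in [-2.0, 0.0, 3.0]]
```

This is the same workflow as fitting PowerTransformer on the training set and calling transform() on the test set, just spelled out element by element.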
The Lua programming language has been around for a while, but this book by Roberto Ierusalimschy will be a landmark in its history. The book managed to surpass every expectation I had for it, and I was eager! From someone with no Lua knowledge to those with Lua klocs behind them, this book will be a great companion on an enjoyable trip down the Lua 5.0 lane. The book begins with the basic Lua elements and structures and then advances through control structures, functions, iterators and coroutines. Iterators and coroutines are among those language features that may confuse first-timers, but the author manages to show the concepts and the interrelations between them in a way that clarified the issues even for a seasoned Lua programmer. But make no mistake: the whole first part is totally worthwhile for non-beginners. The second part of the book shows one of Lua's biggest assets: tables and metatables. I've seen people sneer at Lua at first glance and then convert themselves into Lua evangelists simply for the features of tables and metatables. The author does his magic and makes a whole set of apparently complex concepts flow by the reader as fluidly and logically as they can. By the way, fluency is arguably one of the major benefits of this book. The reader is taken from substrate to substrate of the Lua way of life without even taking notice. The end of every chapter left me with the satisfaction of having been presented with one more facet of Lua and with the tranquility that everything was falling into place at the right time. After tables and metatables, the book presents the concepts of Packages and Object Orientation in Lua. If you ever had any doubt that Lua was able to sustain "real" Modular/OO programming, be prepared to replace your dogmas. The book not only clarifies how to do it in Lua but also shows how easy and clear the coding gets. The author ends the second part of the book with a great chapter on Weak Tables.
I have to admit that I was somewhat resistant to Weak Tables before I read this book, but after this single chapter I was converted. Don't let the name "weak" influence your judgment of those Weak Tables. They are great, and the book showed more about them than I was expecting. The third part of the book focuses on the standard libraries. Those would be the Table, String, I/O, Operating System and Debug libraries. Instead of repeating the contents of the Lua reference manual, the author manages to show lots of new information about the libraries through examples and clear explanations. There are some points in Lua that can indeed be quite idiosyncratic at a glance, but this book is more than enough to clarify every one of them. The fourth and last part of the book brings us the Lua C API. The beginner Lua programmer will probably skip this part, but for the average programmer, and most of all for the hardcore Lua explorer, this part will be pure delight. C programming is not for the faint of heart, but having a Lua interface for your C library is akin to the jackpot of embedded languages in my opinion. This part of the book shows that the task of wrapping C code for Lua is not only feasible, but easily done once you grasp the fundamentals. Keep one thing in mind: this was no small task for the author. Describing such a plethora of resources and how to use them in six chapters demands a clear yet straight-to-the-point approach, and once again the book shines through. Step by step the author shows how to deal with the Stack, to get arguments from and return values back to Lua, to handle tables (even the big ones), to call Lua functions from C code, to call C functions from Lua code, to handle strings, to handle state (using the registry, references and upvalues), and last but not least to use userdata types and metatables in C.
The last chapter of the book brings two examples of the use of the C API: one offers a directory iterator, and the other is a really nice example of binding an existing library (Expat) for Lua use. Lots of my questions on the C API were resolved by those two examples. I should also reserve some praise for the book's index. Not only did I find it complete, but it is easy to understand some details of the Lua structure just by glancing at the index pages. Being one of the first readers of this book was not only a great honor but also a great surprise. As a Lua old timer, I wasn't expecting to be presented with so many novelties, subtleties and jewels of programming in almost every chapter. Was I wrong... If you have not seen Lua until now, this book is THE starting point. If you are acquainted with other versions of Lua but have not studied version 5.0, this book is a great shortcut for your new endeavors. Finally, if you think Lua is your native language and no book could teach you something worthwhile, think again. I was grateful I didn't skip even one paragraph. We've got the language. We've got the book. Let the revolution begin... :o)
Why do arrival and departure cards exist? When I clear passport control in every country I've visited so far, one of the first things the officer does is scan my passport and pull up my info on his computer. He then collects an arrival card from me that contains exactly the same info as what's in my passport, plus some info about my trip (duration of stay, address, etc.). When I go to leave the country, I am (usually) required to then submit a departure card, which – unlike the arrival card – is 100% redundant with what's in my passport. These pieces of paper are considered so important that travelers are not permitted to enter/leave the country without them. What is their purpose, if most (all?) of the information on them can be collected from the passport? Do you have some example countries where this is used? (I'm from Germany, and can travel in most of EU without a passport, and actually without border controls.) I've been only in the US other than this, and they only have an entry form for customs purposes (though this wasn't even used at my last entry). No "departure card" or anything. @PaŭloEbermann From memory, I recall I had to deal with this when entering/leaving Chile, Argentina, Indonesia, Thailand, Hong Kong and the Philippines. Just to add, I had arrival and departure cards on my trip to Japan recently. Quite often it's for efficiency. Let's take the Australian arrivals card as an example, as it has quite a few questions compared with some. Sample arrivals card for Australia Some of the questions: Do you intend to live in Australia for the next 12 months? Do you have criminal convictions? Are you bringing in [food/drugs/medicine/money] into the country? Now some of these are going to be simple 'no' answers for the most part, for most passengers. This saves customs people asking the same questions over and over again. Instead, when they see a 'yes', then they can ask. 
This speeds up the process for all, and more importantly, lets the customs people deal with the 'special cases' - the ones that actually matter and are what they are there for. Do you intend to live in Australia? No = tourist. Yes = time to ask about how they support themselves, where they'll be working, etc. Are you bringing in drugs? This one I occasionally answer yes to, due to the wording, as I have prescription drugs. I've sometimes been questioned on it, but usually the word 'prescription' is enough. Now you might wonder what idiot with illegal drugs would say 'yes'. Fair enough. However, sometimes the person doesn't realise their medicine from Timbuktu (random example) might be an illegal or controlled substance in Australia. So it's necessary to ask to clarify, and when the person says yes, they might have a doctor prescribe a controlled substitute which is allowed in the country. Furthermore, in the event of a post-customs search of their bag, there's an additional legal benefit of asking these questions - it's made all passengers legally consider the question. And when you find drugs/food/etc. in their bag, they can't say 'I didn't realise I had to declare it' - they literally just signed a form saying they didn't have it. Now not every country has this. Some just want a place of address. I always used to be annoyed by this, especially after seeing someone use the address from Pretty Woman. Until I left my luggage at the airport (took the wrong bag), and they were able to contact me through this. While chatting to them about it (the address was the one on my bag), we talked about the other questions, and the arrival card can be used in case of emergency too. If, given the current Ebola scare, for example, a passenger on your flight was found to have it, this gives them a means to attempt to contact you to check for early symptoms and limit exposure.
So yes, they're tedious, but they do still serve a variety of purposes - record-keeping, efficiency and legal documentation. Since you mention the USA briefly, I think it's worth mentioning that they have recently reduced their paperwork redundancy for visitors on the visa waiver programme. All VWP visitors used to have to fill in a paper form (I-94) that completely duplicated what was on their ESTA; now, as long as they arrive by air, they only have to fill in the customs declaration, which even American citizens have to fill in. (VWP visitors arriving overland still have to fill in an I-94.) This is a great answer; if it contained information on departure cards as well as arrival cards, I would like to mark it as accepted. @DavidRicherby "VWP visitors arriving overland still have to fill in an I-94" I realise this was written years ago, but at most major crossings ESTA holders are not required to fill in the form - either it isn't issued at all, or it's printed out pre-filled. Entry/exit cards are partly legacy and partly legal. Border control is a massive bureaucracy; it is very difficult to get them to change a process unless the responsible minister is an aggressive, autocratic modernizer with Darth Vader's style of office management. Yes, it would be possible to join the departure card process to the passport scan, but they keep the cards - they can't keep your passport. The joining process would cost a lot, benefit a small number of people (mostly non-tax-paying visitors) and eventually result in all the card-processing people being laid off. The other part is legal - notice you signed the card, making it a binding legal document. The 'obvious' questions that no one would answer "yes" to, like "are you carrying any prohibited items", are to cement the case against you if you do have any prohibited items.
You can arrive in most countries with goods that are illegal there (but legal in your departure country) and declare them with no consequences other than possibly losing the items. If you say you don't have anything, they can do you for smuggling. Scanning your passport at a self-service kiosk isn't the same. Canada has an entry card; if you have a Canadian passport you put it straight into a scanner yourself, thus automating about half the process. The computer makes the simple decisions and then rolls the dice for random inspection. But you do have to sign it first. Canada gets all its exit data from the airlines (they don't care much when you leave; have a nice flight, eh). "Yes, it would be possible to join the departure card process to the passport scan" There's the issue of passport information not always matching: people can replace passports while in the country, and people with multiple nationality can have multiple passports.
using System;

namespace Nullspace
{
    internal class ResourceAbLoader : ResourceLoader
    {
        protected BundleManager mBundleManager;

        public ResourceAbLoader(string abDir, string manifestBundleName)
        {
            Initialize(abDir, manifestBundleName);
        }

        private string FormatAbName(string path)
        {
            return path.Replace("/", "_").ToLower();
        }

        protected void Initialize(string abDir, string manifestBundleName)
        {
            mBundleManager = new BundleManager();
            mBundleManager.Initialize(abDir, manifestBundleName);
        }

        internal override T LoadAsset<T>(string path, string name)
        {
            // Convert the path into an asset-bundle (AB) name
            Bundle bundle = mBundleManager.LoadBundleSync(FormatAbName(path));
            T asset = bundle.LoadAsset<T>(name);
            return asset;
        }

        internal override void LoadAssetAsync<T>(string path, string name, Action<T> callback)
        {
            Action<Bundle> load = (bundle) =>
            {
                T asset = bundle.LoadAsset<T>(name);
                callback(asset);
            };
            mBundleManager.LoadBundleAsync(FormatAbName(path), load);
        }

        internal override void LoadAssetAsync<T, U>(string path, string name, Action<T, U> callback, U u)
        {
            Action<Bundle> load = (bundle) =>
            {
                T asset = bundle.LoadAsset<T>(name);
                callback(asset, u);
            };
            mBundleManager.LoadBundleAsync(FormatAbName(path), load);
        }

        internal override void LoadAssetAsync<T, U, V>(string path, string name, Action<T, U, V> callback, U u, V v)
        {
            Action<Bundle> load = (bundle) =>
            {
                T asset = bundle.LoadAsset<T>(name);
                callback(asset, u, v);
            };
            mBundleManager.LoadBundleAsync(FormatAbName(path), load);
        }

        internal override void LoadAssetAsync<T, U, V, W>(string path, string name, Action<T, U, V, W> callback, U u, V v, W w)
        {
            Action<Bundle> load = (bundle) =>
            {
                T asset = bundle.LoadAsset<T>(name);
                callback(asset, u, v, w);
            };
            mBundleManager.LoadBundleAsync(FormatAbName(path), load);
        }

        internal override void UnloadBundle(string path, bool unloadLoadedAssets)
        {
            mBundleManager.UnloadBundle(FormatAbName(path), unloadLoadedAssets);
        }
    }
}
I am currently doing some work on cyberinfrastructure/e-science for the humanities that will hopefully turn into an article relatively soon. I am interested in conceptual cyberinfrastructure as well as actual implementations and critical perspectives on the discourse of cyberinfrastructure and e-science for the Humanities. There are some interesting tensions here: models based in the sciences and engineering (seemingly part of a 'new' wave of infrastructure discourse in funding), the epistemic commitments of some of the models being put forward (e.g. a library- and collection-centric model), sometimes uncritical matching of computing and visualization resources with grand visions/hopes for considerable impact in the humanities, and a downplaying of existing /cyber/infrastructure in the Humanities (or pointing to the simple 'digitalization' of existing resources). One of the good things about the cyberinfrastructure and e-science discourse is a broader sense of what is included in infrastructure - more context if you want (middleware, people etc.). Here are some resources:

Revolutionizing Science and Engineering through Cyberinfrastructure. NSF. 2003. Atkins et al. Pdf available here.
Our Cultural Commonwealth: The Report of the American Council of Learned Societies Commission on Cyberinfrastructure for the Humanities and Social Sciences. 2006. Pdf available here.
The Future of Scholarly Communication: Building the Infrastructure for Cyberscholarship. Workshop report. NSF, JISC. William Y. Arms and Ronald L. Larsen. 2007. Pdf available here.
Cyberinfrastructure For Us All: An Introduction to Cyberinfrastructure and the Liberal Arts. David Green. Academic Commons. 2007. Part of a special issue on the topic. Available here.
"Changing the Center of Gravity: Transforming Classical Studies Through Cyberinfrastructure". Digital Humanities Quarterly issue. Available here.
"The Institutional Challenges of Cyberinfrastructure and E-Research". Clifford Lynch, Educause. 2008. Available here.
Exploring E-science: An Introduction. Nicholas W. Jankowski. Journal of Computer-Mediated Communication, 12(2), 2007. Special issue theme. Available here.
Needs of the 3D Visualization Community. Anna Bentkowska-Kafel. 2007. Pdf available here.
Digital Humanities Centers as Cyberinfrastructure. John Unsworth. 2007. Available here.
Scholarship in the Digital Age: Information, Infrastructure and the Internet. Christine L. Borgman. 2007. MIT Press.
Scientific Collaboration on the Internet (MIT Press, 2008). Olson et al.

Of course, I am very interested in actual implementations as well, and in the article I use HUMlab as a case study. I did quite a bit of work on cyberinfrastructure a couple of years ago. Here is a talk at UCSD from 2006 (Cyberinfrastructure Institute), for instance: "Bringing Cyberinfrastructures together: Studio spaces, multiplex visualization and creative interaction" (stream and slides). Also, the current expansion of HUMlab is very relevant in this context, not least actual use. Being away right now, I missed the Independent game evening event. Reports are very welcome, as are photos! Also, if anyone knows about additional useful cyberinfrastructure/e-science resources for the Humanities (or more generally), feel free to comment/contact me.
import {CurrencyType} from "@/ig-template/features/wallet/CurrencyType";
import {Wallet} from "@/ig-template/features/wallet/Wallet";
import {Booster} from "@/ig-template/tools/boosters/Booster";
import {BoosterTier} from "@/ig-template/tools/boosters/BoosterTier";
import {Currency} from "@/ig-template/features/wallet/Currency";
import {ImpossibleRequirement} from "@/ig-template/tools/requirements/ImpossibleRequirement";

describe('Booster', () => {
    const wallet = new Wallet([CurrencyType.Money]);
    let booster: Booster;

    beforeEach(() => {
        wallet.money = 10000;
        booster = new Booster("Example", [
            new BoosterTier([new Currency(10, CurrencyType.Money)], 1.5),
            new BoosterTier([new Currency(100, CurrencyType.Money)], 2, "2x"),
            new BoosterTier([new Currency(1000, CurrencyType.Money)], 3, "3x"),
        ], wallet, 1);
    });

    test('Normal usage', () => {
        booster.selectTier(2);
        const bonus = booster.perform(3);

        expect(wallet.money).toBe(7000);
        expect(booster.currentTierIndex).toBe(2);
        expect(bonus).toBe(3);
        expect(booster.bonus).toBe(3);
    });

    test('Normal usage, no money', () => {
        wallet.money = 0;
        booster.selectTier(2);
        booster.perform(1);
        booster.perform(1);
        booster.perform(1);

        expect(booster.currentTierIndex).toBe(-1);
        expect(booster.bonus).toBe(1);
    });

    test('Requirement', () => {
        const booster = new Booster("Example", [
            new BoosterTier([new Currency(10, CurrencyType.Money)], 1.5, "1.5x"),
            new BoosterTier([new Currency(100, CurrencyType.Money)], 2, "2x", new ImpossibleRequirement()),
            new BoosterTier([new Currency(1000, CurrencyType.Money)], 3, "3x"),
        ], wallet, 1);

        booster.selectTier(1);
        expect(booster.currentTierIndex).toBe(-1);
    });

    test('No wallet throws error', () => {
        const booster = new Booster("", [], null as unknown as Wallet, 1);
        expect(() => {
            booster.perform(1);
        }).toThrow();
    });
});
SQL*Loader-00465: string directive expects number arguments, number found. Check NLSRTL installation.

From a dBforums thread (Database Server Software > Oracle): Getting an error when using sqlldr.

When a multiple-table direct load is interrupted, it is possible that a different number of records were loaded into each table.

Cause: An error was encountered because a required option was not found or was invalid.

SQL*Loader-00268: UNRECOVERABLE keyword may be used only in direct path.

Because all further rows will be rejected, the load is discontinued. (If the error were data dependent, then other rows might succeed.) Action: See the errors below this one in the log file.

SQL*Loader-628: Character set conversion error. Cause: A character set conversion error occurred.

SQL*Loader-00627: Character set conversion graph not available.

SQL*Loader-00252: Sort data sets are not used by SQL*Loader. Cause: The SQL*Loader control file contains a SORTNUM statement. It could be misspelled, or another argument (not identified by a keyword) could be in its place.

Action: See surrounding messages for more information.

Action: Verify that the SDF and LOBFILE clauses in the SQL*Loader control file name the correct fields.

SQL*Loader-350: Syntax error at line 2. Illegal combination of non-alphanumeric characters. Action: Check the operating system messages following this message in the log file.
SQL*Loader-350: Syntax error at line 1. Illegal combination of non-alphanumeric characters.

SQL*Loader-00106: Invalid discard file name on command line. Cause: The discard file name specified on the command line was not recognized.

SQL*Loader-350: Syntax error at line 1. Expecting keyword LOAD.

Action: Remove the PIECED keyword or use the direct path load.

Action: Edit the SQL*Loader control file to check that all multi-byte character data is valid.

Accepted solution by MikeOM_DBA (2003-09-08): You are using the incorrect version of SQL*Loader: SQL*Loader: Release 184.108.40.206.0 -

Cause: The secondary datafile clause for the field identified another field that does not exist in the table definition for the SQL*Loader control file.

Action: Use a conventional path load for this configuration.

Error on table string. Cause: A non-empty table is being loaded with the INSERT option.

SQL*Loader-00514: Error getting elapsed time. Cause: SQL*Loader could not get the elapsed time from the system.

SQL*Loader-350: Syntax error at line number. Expecting OR, found end of file.

The size of the conversion buffer is limited by the maximum size of a VARCHAR2 column. The length of each variable-length field is embedded in the field, so SQL*Loader knows that more data should have been present.

Action: Check the errors below this message in the log file for more information. Also, if fixed-length records are in use, verify that no record exceeds the platform-specific length for a single record.

Action: Make a note of the message and the number, then contact customer support.
Expecting OR, found number. Action: Verify that the data for the sequenced column is numeric.

Action: Check the control file's specifications against the log file to ensure that the field location was specified correctly.

SQL*Loader-00124: Specified value for readsize(number) less than bindsize(number). Cause: The command line argument specified for READSIZE was less than the value of BINDSIZE.

Forum reply: It might work better if you do it like the following: $ sqlldr control=control

Thread: Can anybody tell me what's wrong with my SQL*Loader ctrl file?

SQL*Loader-00640: Variable length field was truncated.

Check the SQL string used for this column. SQL*Loader ignores this clause.

Action: Check that the table exists, its name is spelled properly, and that the necessary privileges on it have been granted.

Cause: An error occurred that is independent of the data.

Action: If the missing fields should be loaded as null, use the TRAILING NULLCOLS clause.

Cause: More than one argument was specified for an OID clause.

Table-level OPTIONS statement ignored.

Action: If CONTINUE_LOAD is necessary, specify a direct load and put the number of records to skip in each INTO TABLE statement.

SQL*Loader-00600: Bind size of number bytes increased to number bytes to hold 1 row.

Action: Verify that you have specified the correct option for TERMINATED BY and verify that the TERMINATED BY option is specified for the correct fields.

Action: Check the operating system messages following this message for information on why the open failed.
Removing the obsolete keywords will eliminate the message without changing the way in which the datafile is processed.

SQL*Loader-522: lfiopn failed for file (name). Cause: LFI failed to open the file.

SQL*Loader-00266: Unable to locate character set handle for string.

Action: Move the count field to be before the collection data in the data file.

The directive specifies a fixed number of arguments, but the SQL*Loader control file contains a different number of arguments.

Could someone please help tell me how to fix it?

Action: Use CONCATENATE or CONTINUEIF.

SQL*Loader-00308: Optional SQL string of column string must be in double quotes.

These options cannot be specified for filler fields.
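Since SQL*Loader-350 "Expecting keyword LOAD" usually means the control file is malformed before the LOAD DATA keyword (a stray character, a missing keyword, or a file that isn't a control file at all), a minimal well-formed control file is useful for comparison. The table, file and column names below are illustrative only:

```
LOAD DATA
INFILE 'emp.dat'
APPEND
INTO TABLE emp
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(
  empno,
  ename,
  sal
)
```

If sqlldr reports a syntax error at line 1 against a file shaped like this, check for invisible characters (a byte-order mark or Windows line endings) at the top of the file before suspecting the syntax itself.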
function moverPeca($scope, peca) {
    let x1 = $scope.selecionada.x;
    let y1 = $scope.selecionada.y;
    let x2 = peca.pos.x;
    let y2 = peca.pos.y;
    let campoPeca = $scope.tabuleiro[x1][y1];
    let campoDesocupado = $scope.tabuleiro[x2][y2];

    let podeMover = podeMoverPeca($scope, x1, x2, y1, y2, campoPeca.jogador);
    if (!podeMover) {
        return;
    } else if (podeMover === 2) {
        let isMovimentoDuplo = movimentoDuplo($scope, x1, x2, y1, y2, campoPeca.jogador);
        if (!isMovimentoDuplo) return;
    }

    console.log(x1, y1, ' para ', x2, y2);
    campoPeca.pos = {x: x2, y: y2};
    campoDesocupado.pos = {x: x1, y: y1};
    $scope.tabuleiro[x1][y1] = campoDesocupado;
    $scope.tabuleiro[x2][y2] = campoPeca;
}

function removerPeca($scope, x, y) {
    $scope.tabuleiro[x][y] = addCampoSemPeca(idAtual++);
    $scope.tabuleiro[x][y].pos = {x: x, y: y};
}

function movimentoDuplo($scope, x1, x2, y1, y2, jogador) {
    console.log('movimentoDuplo', `${x1}x${y1} para ${x2}x${y2}`);
    if (jogador === 2 && x1 - x2 === 2) {
        if (y1 > 0 && y1 - y2 === 2 && $scope.tabuleiro[x1 - 1][y1 - 1].jogador === 1) {
            console.log('capturar peça jogador 1');
            removerPeca($scope, x1 - 1, y1 - 1);
            return true;
        }
        if (y2 < 8 && y2 - y1 === 2 && $scope.tabuleiro[x1 - 1][y1 + 1].jogador === 1) {
            console.log('capturar peça jogador 1');
            removerPeca($scope, x1 - 1, y1 + 1);
            return true;
        }
    }
    if (jogador === 1 && x2 - x1 === 2 && (y1 - y2 === 2 || y2 - y1 === 2)) {
        if (y1 > 0 && $scope.tabuleiro[x2 - 1][y1 - 1].jogador === 2) {
            console.log('capturar peça jogador 2');
            removerPeca($scope, x2 - 1, y1 - 1);
            return true;
        }
        if (y2 < 8 && $scope.tabuleiro[x2 - 1][y1 + 1].jogador === 2) {
            console.log('capturar peça jogador 2');
            removerPeca($scope, x2 - 1, y1 + 1);
            return true;
        }
    }
    console.log('não é movimentoDuplo');
    return false;
}

function podeMoverPeca($scope, x1, x2, y1, y2, jogador) {
    if (y1 === y2) {
        return false;
    } else if ((y2 - y1 === 1 || y1 - y2 === 1)
            && ((jogador === 1 && x2 - x1 === 1) || (jogador === 2 && x1 - x2 === 1))) {
        console.log('podeMoverPeca', `${x1}x${y1} para ${x2}x${y2}`);
        return true;
    } else if ((jogador === 1 && x2 - x1 === 2) || (jogador === 2 && x1 - x2 === 2)) {
        return 2;
    }
    return false;
}
http://www.geocities.com/SiliconValley/Way/2686/ - Webpage.

Welcome once again. Firstly, I'd just like to stress what a useful program Nico's Commander actually is; with so much dross on the web, this is a commendable attempt to emulate my favourite DOS editor. Nico's Commander uses a serial number as the basis for its protection; although I haven't delved overly into the maths behind the scheme, I suspect there may only be one universal code. The first part of this tutorial ought to present no problem to most visitors to my site; you can easily locate the following from a disassembly listing (just search the String References).

:004203DA MOVSX ESI, BYTE PTR [EAX+ECX-01] <-- Serial
:004203DF IMUL ESI,EAX <-- Multiply by position.
:004203E2 IMUL ESI, DWORD PTR [EBP-20] <-- EBP-20 starts at 00600937.
:004203E6 ADD DWORD PTR [EBP-1C], ESI <-- Evidently the store.
:004203E9 MOV ESI, DWORD PTR [EBP-20]
:004203EC IMUL ESI, 00600937
:004203F2 INC EAX <-- Loop.
:004203F3 MOV DWORD PTR [EBP-20], ESI
:004203F6 CMP EAX,EDX <-- Check loop.
:004203F8 JLE 004203DA <-- Loop a bit more.
:004203FA MOV ESI, DWORD PTR [EBP-28] <-- Set EBP-28's value.
:004203FD XOR EDI,EDI <-- Clear EDI.
:004203FF CMP DWORD PTR [EBP-1C], A2289ADC <-- Compare EBP-1C with the default.
:00420406 JNZ 00420421 <-- Jump away bad guy.

Well, this code evidently isn't rocket science, and many readers can probably think of at least 3 ways in which to patch this. Finding the good code probably isn't a possibility unless you code a program that simulates the scheme; this of course throws open the possibility that maybe a combination of numbers and letters generates the desired result. It may also be possible to quickly narrow down the range of acceptable values using SoftICE and the g 004203FF command (a game of higher/lower, breaking initially with Hmemcpy).
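Reading the loop above, the scheme can be sketched in Python. This is a rough reconstruction only: I am assuming the position counter (EAX) starts at 1 and that all arithmetic wraps at 32 bits, as it would in the x86 registers; the multiplier 00600937 and the target A2289ADC are taken straight from the listing.

```python
MULT = 0x00600937
TARGET = 0xA2289ADC  # the default value compared at 004203FF


def checksum(serial):
    """Accumulate ord(char) * position * m, wrapping at 32 bits."""
    total = 0
    m = MULT  # [EBP-20] starts at 00600937
    for pos, ch in enumerate(serial, start=1):
        total = (total + ord(ch) * pos * m) & 0xFFFFFFFF
        m = (m * MULT) & 0xFFFFFFFF
    return total


def is_valid(serial):
    return checksum(serial) == TARGET


def brute(lo, hi):
    """Brute-force a numeric range, as the quick-and-dirty key generator does."""
    for n in range(lo, hi):
        if is_valid(str(n)):
            return n
    return None
```

Treat this only as a model of the scheme for experimenting with ranges; it has not been verified against the real binary.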
You'll find in my key generators pack some quick and dirty code I produced to test all of the decimal numbers between 100,000 and 999,999 for validity. I recommend you assemble and link this just to see how it works; it takes my PC just over 5 seconds to test all of the numbers (none of which is correct, unfortunately). This code is also easily modified to test higher numbers, although the execution time increases (between 10,000,000 and 99,999,999 took my PC some 15 minutes). The next thing I thought was to simply patch the result of the compare to a result which I knew, say BF6831FF (1234567890). I applied the patch, and when I tried to reload Nico's Commander something evidently snapped and the program refused to start; something didn't like my intrusion. When this happens you can usually attribute it to one of two things: either a type of CRC check or a parity routine. CRC basically involves adding up bytes or a portion of bytes and checking the value against a checksum, whereas parity routines correct themselves on the fly, thus negating the effect of any patch. Both methods have their drawbacks: a full-scale CRC check represents a phenomenal waste of CPU time and is vulnerable to attack because the entire file must be copied into memory for the compare. Parity routines, on the other hand, are trickier to code and may be easier to pinpoint because of their correction capabilities. Needless to say, we can try and locate the code using several methods. The first method I used was to search for API calls to either unload a program from memory or unregister a Windows class. As it turned out nothing panned out, so I decided to use the Symbol Loader and observe the loading process of the good file; you won't need to trace for very long before you can identify the CALL responsible for loading the program and, more importantly, displaying the opening welcome screen.

:0041D0FD CALL 00430980 <-- Checks the program is O.K. and gets called a lot of times.
:0041D102 CMP DWORD PTR [ESP+08], ESI <-- This compare should be 0.
:0041D108 POP ECX
:0041D109 POP ECX <-- Stack pops.
:0041D10A JZ 0041D110 <-- This is a good jump.

Although patching here is evidently at a higher level than might be desirable, it is effective because the critical flag 00471318 is only checked at this location, although I'd guess it's referenced with an address relative to the BP or SP. If you look and see how many times 00430980 is called you'll realise why I considered it a risk to try and patch below this level. In order to beat this check I recommend forcing the JZ into a JMP (remember, of course, to be elegant and that 00471318 is at [ESP+08]). You can of course now patch the result of the CMP identified at the outset to your desired serial #. As an aside, interestingly, once we apply our bogus patch and enter our bogus code, a clean unpatched copy of Nico's Commander will also function as a full version - a little something to investigate perhaps.

Greetings. Just surfed by your webpage, and I must say I enjoyed the visit. It's the first time I have seen mention of Nico's Commander on a reverse engineering site. I have been running the program for a week now and just wanted to mention to you that I got rid of the nag and time limit by changing the name. The version I have is 4.03. I was working away on it one night, changed the nc.exe to 1.exe, which is what I usually call a work in progress, and the next thing I knew, the program worked perfectly. But add a second character to the name and bam, back into limitations. Easiest crack ever. Must be a call from a dll for that original name or something, but I haven't looked into it further. Anyway, just thought I'd mention it to you since you had a splash about it on your page. (Another interesting aside -- CrackZ).
Are Visual C++ 2013 binaries compatible with Visual C++ 2017 binaries?

In one of our C++ solutions, we use 3rd party libraries. These libraries are compatible with VS 2013. Now we are migrating our solution to VS 2017 and found that some of the 3rd party libraries do not have VS 2017 compatible versions. So we tried to use some of the VS 2013 compatible libraries in VS 2017, and the API calls we tried work fine. Can I assume that the libraries work with a VS 2017 executable without any issues?

The answer is no for C++ libraries. C libraries are likely to work. I'll wait for somebody to prove me wrong.

The APIs are exported as functions and not C++ classes

Name mangling happens even for non-member functions.

It all depends how thoroughly the programmer avoided taking a dependency on the runtime library in his API. Not so easy to do in C++; throwing an std::exception or using, say, std::string as an argument or return value ruins it. Talk to him about it.

If they export a C-style API then they are compatible. Note that "C-style API" means not just "APIs are exported as functions and not C++ classes", but all functions being "extern C", using only C types as arguments / return values, and not throwing.

@VTT - The functions are marked with extern C. I will check the API parameters and return types in detail

@VTT no, in general they are not. They have different runtime DLLs to begin with, and if you create something simple like a FILE* in your app compiled with VC++ 2017 and pass it down to your old library, there is no guarantee it will work.

@SeverinPappadeux I guess I should've written "built-in" C types. But it is too late to edit that comment...
@VTT the biggest change in 2015 wrt 2013 is this, IMHO: "Refactored binaries: The CRT Library has been refactored into two different binaries, a Universal CRT (ucrtbase), which contains most of the standard functionality, and a VC Runtime Library (vcruntime), which contains the compiler-related functionality, such as exception handling and intrinsics." I, frankly, cannot contemplate shipping a product depending on two incompatible runtimes and crossing fingers for it to work. See the link in my answer for details

Just to add to some of the other comments: watch out for memory management. You can't allocate using one runtime's support and deallocate in the other.

@SeverinPappadeux Actually it is not really possible to avoid depending on multiple runtimes. Even if your own executable and libraries are built with the same toolset and settings, there will also be the Windows NT CRT present. And more sophisticated applications could use dozens of other runtimes from completely different compilers. But there will be no problem as long as they are properly isolated and interop only through a C-style API.

@VTT: I'm not sure what exactly you mean by "NT CRT". The CRT part that ships as part of the Windows API is the Universal CRT. The CRT which was part of the Windows NT implementation (but not its API) was a fork of the MSVC6 CRT, and that fork evolved as Windows evolved. And in fact they can also interop via other mechanisms besides a C-style API; the other common choice is a COM-style API.

@MSalters "Windows NT CRT" is a library description for the C runtime dll internally used by Windows (Windows\System32\msvcrt.dll), which gets loaded into every process and can exist peacefully along with application-specific runtime libraries. Also a COM-style API is a variation of a C-style API.

In general - no. AFAIK, VC++2015 (aka toolset v140) and VC++2017 (aka toolset v141) are stated to be binary compatible.
No such statement was made wrt VC++2013, and I believe there are breaking changes (like sizeof(list) etc). It might work, but could lead to hard-to-debug problems.

Microsoft statement: "A more-severe kind of change, the breaking change can affect binary compatibility, but these kinds of binary compatibility breaks only occur between major versions of Visual Studio. For example, between Visual Studio 2013 and Visual Studio 2015." See https://learn.microsoft.com/en-us/cpp/porting/visual-cpp-change-history-2003-2015

Nothing is guaranteed, but binary compatibility of Visual C++ compilers is generally better than officially announced. Just make sure you do not create/destroy objects across different runtimes, do not propagate exceptions across them, and do not pass STL-related objects as parameters. If the third-party libraries expose C-style interfaces and they are compiled as DLLs, the task is even easier. So you should review those interfaces and verify how much they vary from the general interoperability guidelines.
"""This module provides indexes of noise data fot the GroupDataset.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function from numpy.random import choice def _uniform(amount, size): return choice(amount, size, replace=False) class IndexGenerator(object): """Responsible for generating indexes. The class generates indexes for 'cls_amount' classes. Amount of noise indexes is depends on 'noise_quantity'. When keep_order is 'True' then generated indexes will be the same. That is, get_indexes will be determenistic function. """ def __init__( self, noise_quantity, cls_amount, keep_order=False, distribution=_uniform, noise_index_order=None ): """Construct a new Index Generator. Args: noise_quantity(list) : should be array of size 'cls_amount'. Amount of noise data that one sample should have per class. cls_amount(int) : amount of classes the dataset should have. keep_order(Boolean) : if 'True' then order will be the same across all samples, if 'False' then indexes will be generated by using 'disrtibution' function(default to False). distribution(int, int => list) : function of distribution to sample indexes where noise data should be placed at. Only used when keep_order is false (default uniform) noise_index_order(list) : when keep_order= True, then 'noise_index_order; is used as indexes where noise data should be placed at. Raises: ValueError: If the `noise_quantity` is not one-dimensional array or if no noise_quantity was passed to the contructor. 
""" if(noise_quantity is None): raise ValueError( 'noise_quantity \ should be passed to the constructor' ) if( not isinstance(noise_quantity, int) and (hasattr(noise_quantity, "__len__") and len(noise_quantity) != cls_amount) ): raise ValueError( 'noise_quantity should be either one-dimensional \ or equal to amount of classes' ) self.noise_quantity = [noise_quantity] * cls_amount if isinstance( noise_quantity, int) else noise_quantity self.cls_amount = cls_amount if(keep_order): self._build_generator_with_order(noise_index_order) else: self._build_generator(distribution) def _build_generator(self, distribution): self.distribution = distribution def generateOneSample(): return [ distribution(self.cls_amount + 1, noise) for noise in self.noise_quantity ] def generate(size): return [generateOneSample() for i in range(size)] self._generate = generate def generate_for_class(class_number): return distribution( self.cls_amount + 1, self.noise_quantity[class_number] ) self.__generate_for_class = generate_for_class def _build_generator_with_order(self, noise_index_order): self._order = noise_index_order self._check_order() def generate(): return self._order def generate_for_class(class_number): return self._order[class_number] self._generate = generate self.__generate_for_class = generate_for_class def _check_order(self): for i, order in enumerate(self._order): if(len(order) == self.noise_quantity[i]): raise ValueError( 'The shape of the indexes object does not match with \ shape of noise_quantity' ) def get_indexes(self, size=1): """Generate indexes for noise data of the size=sise for all classes. Args: size(int): amount of samples to generate. """ return self._generate(size) def get_indexes_for_class(self, cls_number, size=1): """Generate indexes for noise data of the size with respect to the class. Args: cls_number(int): indexes is generated only for this class. Can't be bigger than 'cls_amount' size(int): amount of samples to generate. 
""" if(cls_number >= self.cls_amount): raise ValueError( 'Index is out of range. \ cls parameter can not be higher then cls_amount: %i %i' % ( cls_number, self.cls_amount ) ) return [self.__generate_for_class(cls_number) for i in range(size)]
""" Code for Mass Conserving LSTMs. """ __author__ = "Frederik Kratzert, Pieter-Jan Hoedt" from .ablations import RLSTMModel, NoNormModel, NoNormSum, AlmostMCRLSTMModel, LinearRLSTMModel, NoMCOutModel from .nalu import RecurrentNAU, RecurrentNALU from .baselines import LSTM, LayerNormalisedLSTM, UnitaryRNN from .mclstm import MCModel, MCWrappedModel, MCProd, MCSum from .continuous_prediction import CMCModel, CLSTM, CMCOut def get_model(cfg: dict): if cfg['model'] == 'mclstm': return MCModel(cfg) elif cfg['model'] == 'sum_mclstm': return MCSum(cfg) elif cfg['model'] == 'wrap_mclstm': return MCWrappedModel(cfg) elif cfg['model'] == 'prod_mclstm': return MCProd(cfg) elif cfg['model'] == 'lstm': return LSTM(cfg) elif cfg['model'] == 'lnlstm': return LayerNormalisedLSTM(cfg) elif cfg['model'] == 'urnn': return UnitaryRNN(cfg) elif cfg['model'] == 'rlstm': return RLSTMModel(cfg) elif cfg['model'] == 'nonormmclstm': return NoNormModel(cfg) elif cfg['model'] == 'sum_nonormmclstm': return NoNormSum(cfg) elif cfg['model'] == 'linrlstm': return LinearRLSTMModel(cfg) elif cfg['model'] == 'amcrlstm': return AlmostMCRLSTMModel(cfg) elif cfg['model'] == 'nomcoutlstm': return NoMCOutModel(cfg) elif cfg['model'] == 'nau': return RecurrentNAU(cfg) elif cfg['model'] == 'nalu': return RecurrentNALU(cfg) elif cfg['model'] == "continuousmclstm": return CMCModel(cfg) elif cfg['model'] == "continuouslstm": return CLSTM(cfg) elif cfg['model'] == "continuousdirectmclstm": return CMCOut(cfg) else: raise NotImplementedError(f"model not implemented: '{cfg['model']}'")
How to divide a number into three continuous parts such that the third part is the sum of the other two?

I am trying to write a python program to determine if the digits of a number can be divided into three continuous parts such that the third part is the sum of the other two. e.g. 9999198 can be divided because 99 + 99 = 198. The sum will always be the least significant part. I am unable to come up with an approach; please help.

I am trying to treat the number of digits as a single number, like in the above case a 7. Then I try to create all splits of three numbers which add to 7, and use these splits to find the right one. Like 7 = 2,2,3, so my answer is 99,99,198. My problem is how we can efficiently split the number into a set of 3 parts.

Did you have a question? This isn't a code writing service. Did you try to do it? If yes, please let us know where you are stuck.

Sorry, yes I have a question. I am stuck on how to approach this problem.

That's not really a question, and certainly not on topic here. See [ask].

@SanchitKumar, I am unable to come up with an approach.

@jonrsharpe Please go through my edit. I rectified my mistake.

That still isn't a question. See e.g. https://meta.stackoverflow.com/q/284236/3001761. You should at least attempt your own homework before posting here.

@jonrsharpe I did, and if I can't come up with any approach, how am I able to post my solution here?

The point is that if you can't come up with any approach then you don't have a valid question here.

@jonrsharpe I added my approach, now please leave me alone.

But you don't have any code. This site is for programming Q&A, not general problem solving advice. I'd recommend taking the [tour] and spending some time in the [help].

Here's my solution; it checks all possible ways of splitting the given number into 3 parts and checks whether the sum of the first two components equals the third one.
def correct_number(x):
    str_nmbr = str(x)
    for result_split in range(len(str_nmbr) - 2):
        part_3 = int(str_nmbr[-result_split - 1:])
        for components_split in range(len(str_nmbr) - 2 - result_split):
            part_2 = int(str_nmbr[1 + components_split: -result_split - 1])
            part_1 = int(str_nmbr[:components_split + 1])
            if part_1 + part_2 == part_3:
                return True
    return False

print(correct_number(9999198))  # True

As the author requested, here's a visual explanation of how determining the parts works, given the number "1234567":

1 2 3 4 5 6 7:

First loop chooses the second separator:
1 2 3 4 5 6|7

Second loop chooses the first one:
1 2|3 4 5 6|7
1 2 3|4 5 6|7
1 2 3 4|5 6|7
1 2 3 4 5|6|7
. . .

Then we move the second separator 1 step back:
1 2 3 4 5|6 7

And we continue moving the first separator:
1|2 3 4 5|6 7
1 2|3 4 5|6 7
1 2 3|4 5|6 7
. . .

@Looioe Thanks for the solution. Can you explain a little how to print all possible 3-number combinations? Like 7 into 2,2,3 and 1,3,3 and so on. I guess you use the same approach. Also FYI What is a help vampire?

@user7390332 The first loop determines the place where we split the number into a components side and a result side; for example, with a number of 7 digits we'd have to split it from 2 to the end (2 because we need at least 2 digits for the first components). The second loop finds the separator for the components; we want it to run from 1 up to the place where our result number starts, minus 1. It would look like this:

1 2 3 4 5 6 7: First loop chooses the second separator
1|2 3 4 5 6|7
1 2|3 4 5 6|7
1 2 3|4 5 6|7
. . .
I am using DHCPv6 in my local LAN to configure IPv6 on my client machines (Windows) and the gateway via router, by setting the M bit to 1 in the RA. Everything is fine up to that point. Whenever I run a script to send a rogue RA into my LAN, the client machines configure IPv6 addresses according to that RA but remove the IPv6 address taken from DHCPv6. I am a bit surprised why the client (Windows 7) removes the DHCPv6-assigned IPv6 address from its stack. It should keep both IPs, as I can see both gateways on the client. Secondly, I am now running another script to kill that forged RA by sending an RA with 0 lifetime for that prefix (with the same source). In that case the RA has been killed, as the interface unassigned that gateway, but the client machine still doesn't put the IPv6 address provided by DHCPv6 on its interface. This shows the client machine prefers RA over DHCPv6.

What you are experiencing is exactly how IPv6 Address Autoconfiguration is designed to operate... which is not how IPv4 operates... and which a lot of folks are going to be really puzzled and concerned about. IPv6 clients listen for RAs, and anytime they hear/see an RA, they act on how the flags are set, regardless of how they (the clients) are currently operating. The 4 primary flags of configuration concern (there are other variables too, lifetime timers, etc... see RFCs 4861, 4862 & 3315):

A on - use the IPv6 prefix in the RA to config a SLAAC addr (network prefix + client-derived host portion); off - no IPv6 prefix advertised in the RA means no SLAAC
L on - means the router is on-link; off - means the router may not be on-link (Win7 assumes L on regardless of this flag, Mac OS Lion needs L on for DHCPv6)
M on - use DHCPv6; off - don't use DHCPv6
O on - use other DHCPv6 config parms like DNS; off - don't use DHCPv6 for other parms (but if M is on, O doesn't really matter [RFC def])

When client Ethernet interfaces first initialize, they send up to 3 RS (Mcast to FF02::2) - not waiting for an RA to come around. If they hear an RA, they act on its config.
If they don't hear an RA they will simply configure a Link-Local address. Routers will send RAs periodically... it is a min/max setting in each router config. When a client sees an RA with M on, it will send a DHCPv6 Solicit (Mcast to FF02::1:2) looking for DHCPv6 servers. If the client has a DHCPv6-derived address and receives an RA that has M off, the client will release that DHCPv6-derived address (just like you saw). If the client later receives an RA with M on, it sends the DHCPv6 Solicit, etc, etc, etc.

For Stateful (DHCPv6) you want A=off, L=on, M=on (O on or off doesn't really matter since M is on). The client will get its def g/w from the RA, and its IPv6 addr from DHCPv6. btw, in Win7, even if the config is for DHCPv6, it will not send the DHCPv6 Solicit until it has received an RA with M set to on. Again, this is not how DHCPv4 operates.

I am presenting on this exact topic at the 2012 North American IPv6 Summit in Denver next week. http://www.rmv6tf.org/IPv6Summit.htm I also recently finished the chapter of the Guide to TCP/IP 4th edition that is all about this topic. The book will be available late summer 2012. This 4th edition update grew the 50 pages of IPv6 content in the 3rd edition to over 400 pages; a lot of new content!! - Proposed as answer by MGro Friday, June 15, 2012 7:00 PM

I will send off for your book; does it cover Server 2012 and IPAM? Does a server send an RS too, even if it has a static IPv6 address and a gateway and DNS servers configured manually? Is there really no way to get rid of the FE80 link-local addresses?

Replies after your Q's:
Q1 - your book, does it cover Server 2012 and IPAM?
A1 - no, but mostly W2K8-R2 and W2K12 operate about the same for IPv6
Q2 - Does a server send an RS too even if it has a static IPv6 address and a gateway and DNS servers configured manually?
A2 - no, but Windows server can be configured to be an IPv6 router; I wouldn't recommend doing it, very little "tweaking" available.
Q3 - Is there really no way to get rid of the FE80 Local Link addresses?
A3 - No, that would break the foundation of IPv6 operations/standards.

Yes, that certainly helps. I just don't like the FE80s. My servers keep getting DHCP addresses as well as the fixed ones I have given them. On one I could get rid of it with ipconfig /release6, but on another I can't. It seems to me that it would be nice if DCs with DNS installed didn't automatically go and get another IP address and publish it. I suppose it keeps us all employed :)
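The M and O behaviour discussed in this thread comes down to two bits in the RA's flags octet (per RFC 4861: M is the most significant bit, O the next one down). A minimal sketch of decoding them, with a hypothetical helper name:

```python
# RFC 4861, section 4.2: in the Router Advertisement "flags" octet,
# bit 7 (0x80) is M (managed address configuration) and
# bit 6 (0x40) is O (other configuration).
def ra_flags(flags_octet):
    return {
        "M": bool(flags_octet & 0x80),  # use DHCPv6 for addresses
        "O": bool(flags_octet & 0x40),  # use DHCPv6 for other parms (DNS, ...)
    }

# Stateful DHCPv6 as described above: M on.
stateful = ra_flags(0x80)
```

A rogue-RA tool flips exactly these bits (plus the prefix options), which is why a single spoofed RA can change how clients configure themselves.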
So I am readying a Mac OS X Server (10.5.6) to serve out our web pages, VLE, support desk and email. But first I have a few questions.

1. I have currently set up the sites so that the DBs are on a different server from the web front-ends. So each website (Joomla, Moodle, support desk) has to connect back to another machine. Is this a good idea? The two servers won't be in the same location.

2. If the answer to the first is yes, it's a good idea, then is it a good idea to use non-standard port numbers? For example moving MySQL's port number from 3306 to something else. Moving IMAP port numbers to something else? And SMTP? Is it worth the extra hassle of configuring?

3. MySQL permissions for DB access. When configuring Joomla, Moodle etc. you create a user to connect to the DB. What permissions should this user be given? Do they need all of them or only a select few?

4. The final one (I think). I have numerous sites hosted on the one server and all of them use a log-in facility. To protect the passwords I am using SSL. Now the problem is Apache complains about using SSL in conjunction with Virtual Hosts. This is something I need to research. But if this is the case, how can I use more than one SSL site on one server?

Hope you can help. I think that's everything... for now. Thanks in advance.

So SSL requires individual IP addresses for each site. Would this then allow me to use as many self-signed certs as I like without errors appearing in the logs? The sites still work but I am unsure as to what issues/vulnerabilities could arise from keeping it this way. Thanks for the advice so far though guys. Most appreciated.
You can have 1 SSL address like ssl.webserver.local and then have subfolders, or you can have multiple SSL addresses; the second would require 3 (static) IPs.

mysql> GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES ON joomla.* TO 'yourusername'@'localhost' IDENTIFIED BY 'yourpassword';

Some packages are more specific; for example, phpMyAdmin requires a user with very specific rights on the main mysql database, and has different rights on each table. Probably because it is the main mysql database, and if you leave it too loose you could compromise the whole server.

Have you tried configuring different certificates for each location using <Directory> in httpd.conf? (Never tried it, but can't see why it wouldn't work.) You could also use a wildcard certificate which would cover *.blah.com; that makes things easier as well as cheaper, assuming all your virtual hosts are of the format [host].blah.com.

The only way to use differing certificates based on some condition is to have different IP addresses (since these are lower in the OSI stack than the application layer).

Last edited by powdarrmonkey; 26th March 2009 at 03:21 PM. Reason: speeling

Didn't know that, but it explains a restriction we have with a reverse proxy solution that serves multiple sites. I assumed the host address was presented to the server in the initial handshake. (Lesson for today: never assume.) https://bombsrus.com/ for example.)

I just thought, you could also use one IP address with virtual hosts on multiple ports and therefore multiple SSL sites.

Sorry I haven't replied to any of these posts, but I never got a mail saying there had been replies.
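The one-IP-per-certificate setup discussed above might look like this in an Apache config. This is only a sketch: the addresses (from the documentation range 192.0.2.0/24), hostnames and certificate paths are all hypothetical placeholders, not taken from the thread.

```apache
# Pre-SNI Apache: one dedicated IP per SSL virtual host,
# because the certificate must be chosen before the Host header is seen.
Listen 443

<VirtualHost 192.0.2.10:443>
    ServerName site1.example.com
    SSLEngine on
    SSLCertificateFile /etc/apache2/ssl/site1.crt
    SSLCertificateKeyFile /etc/apache2/ssl/site1.key
</VirtualHost>

<VirtualHost 192.0.2.11:443>
    ServerName site2.example.com
    SSLEngine on
    SSLCertificateFile /etc/apache2/ssl/site2.crt
    SSLCertificateKeyFile /etc/apache2/ssl/site2.key
</VirtualHost>
```

On modern Apache with SNI-capable clients, the same two vhosts could share a single IP, which removes the restriction the thread is working around.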
You must have seen these two names quite a lot while looking for an Operating System for your web server. After this, you won't need to be confused again about which of the two to pick. We are going to compare both of them and let you know which one to choose for your specific case. Uses vary from person to person; there is a chance that CentOS does the job smoothly for you instead of Ubuntu, or vice versa. We are going to do a detailed analysis so you don't need to get confused ever again.

You always go for the best, so why would you sacrifice on the OS part? Whenever you buy a laptop or even a smartphone, first you decide on the OS you want. In smartphones, you might look for Android or iOS. In a laptop or computer, you might look for Windows or Mac. According to your choice and preference, you select the OS first and then look at the devices available. The same is the scenario with the OS on a web server.

Before we begin to compare both of these OSes, you need to have a look at an overview of both. This should give you an initial idea of how widely used each OS is and how relevant it is.

A Quick Look

First, you need to take a quick look at both of these Operating Systems. We are sharing some of the main details and information about both, just to make things clear in your mind about the two. After this, you get to see the detailed analysis of them both.

Talking about Ubuntu, it is one of the most widely used Linux distributions. It is based on Debian Linux. Many developers and programmers prefer Ubuntu as their primary Linux distribution. It is an open source project. Ubuntu gets quick updates and frequent bug fixes. There are multiple tasks that Ubuntu can handle in a bare minimum time span. It has a lot of apps available that provide many more features to the OS. Additionally, you can be relaxed about security while using Ubuntu.
You can customize this Operating System as much as you want. It is definitely one of the easiest-to-use Linux distributions available so far.

CentOS is also an open source project, although it is based on Red Hat Enterprise Linux (RHEL). It was released in 2004. It is a fact that RHEL is the most widely used Linux distribution in corporate IT, and CentOS is based on the same. Consequently, CentOS follows in its footsteps and is a good Linux distribution for the corporate IT sector. As it closely tracks RHEL, this clearly makes it a distinctly different OS from Ubuntu. Talking about features, it is also highly customizable. This OS is secure and stable too, which makes it a good choice for some. The security measures follow RHEL, which makes it secure enough to take a lead over Ubuntu specifically.

Comparing CentOS with Ubuntu

Key points to note:

Ubuntu:
- It is based on the Debian architecture.
- Uses .DEB packages for installations
- apt-get package manager
- Frequent updates (within 6 months)
- User base is HUGE
- Most widely used amongst beginners
- Easier to learn through tutorials and guides
- Cannot use cPanel and some specific software

CentOS:
- It is based on the RHEL architecture (highly secure)
- Uses .RPM packages for installations
- yum package manager
- Infrequent updates (but supported longer-term)
- Smaller number of users (quality/advanced users)
- Most widely used by professionals
- Harder to learn, as guides for even basic things are unavailable
- cPanel is available, and some other exclusive software too

Now comes the part where we compare and contrast the features of CentOS and Ubuntu. You are going to see a lot of different features that matter the most, and together they will help you choose the right one. As we already mentioned, everything depends on your use case. Both Operating Systems are good in their respective areas. All you have to see is which things matter most to you.
So, check all of the features that you want and see which one is the clear winner.

Ubuntu is based on the Debian architecture, while CentOS is based upon Red Hat Enterprise Linux (RHEL). Both are good in their area of speciality. They use different package managers, which may affect the workload or apps that you would like to use. Ubuntu uses the apt package manager and accepts installations from .deb packages only, whereas CentOS uses the yum package manager and accepts installations from .rpm packages. It is important to note that you cannot install an .rpm package on Ubuntu or a .deb package on CentOS. Clearly, this means you should check which packages are available for the OS you are going to select.

Security & Stability

If this is what matters to you, we've got you covered. Just to let you know, Ubuntu provides frequent updates as compared to CentOS. You get to see a new version released every six months from Ubuntu, and various software becomes available with every new update. There are a lot of things that Ubuntu takes care of when it comes to updates. But with new and frequent updates, your existing software might end up in a mess, as its configuration might need changes, even while the regular updates offer the latest technology and features.

CentOS, by contrast, does not provide frequent updates. Rather, it takes more time to include features in a release and update it, as the developer team behind CentOS is smaller than Ubuntu's. Although you see fewer updates on CentOS, they are supported for 10 years from the date of release, instead of 5 years on Ubuntu.

Software and Apps

When you talk about the amount of software available for both of these, Ubuntu is the clear winner. There is definitely more software available for Ubuntu as compared to CentOS. You might need different functionality on your web server, and software is what you will require for that.
You might have heard of cPanel; it is written for Red Hat-based systems. If quality of software is what you judge by, CentOS might win: several widely used applications are built for Red Hat systems, or specifically for CentOS. So you definitely need to check which software you are going to run on your web server.

Support and Forums

If you mess something up, you need someone who can help. Support comes in handy whenever you face an issue or find a bug. Ubuntu has a large number of active users on its forums, and they will answer your query if it is an easy or common one. You might not get a solution to every problem through the forums, though, and for some issues you will need official support. Ubuntu offers premium support, which means you pay for help with such issues. Because Ubuntu ships updates so frequently, it is generally not practical for the project to resolve every query raised directly by consumers, testers, beta users, and so on.

Easy to Use

Everyone looks for tools that are easy to use, and when it comes to an operating system, why take a chance? Several things make an OS easy to use: the experience should be pleasant enough that the user wants to keep using it, the OS should feel familiar, the user should not have to dig through documentation or search the web for small things, and all the important functions should be easily accessible.

Ubuntu is an operating system that cares about user experience. There are far more guides and tutorials available for Ubuntu than for CentOS, which is a valid point in this comparison. Where CentOS assumes the user already knows the basic commands of an OS, such as sudo, Ubuntu does not. If you are an advanced user, prefer CentOS.
But if you don't know even the most basic and common commands, prefer Ubuntu; it will be easier for you to understand things on Ubuntu than on CentOS.

Comparison Based on Use

If you are a beginner

You need a good supply of tutorials and guides in order to drive the OS from the command line. You benefit from frequent updates that give you features to test and deploy on your servers, and you need people on the forums who understand your questions. Ubuntu is popular among developers, and you will not find it difficult even on your first try. With its update cadence, easy-to-use interface, tutorials, and guides, Ubuntu is what you want. CentOS is not a bad choice for a beginner, but it may take you some time to learn the ropes. At this stage there is also less reason to prioritize stability: early on, you can put up with some bugs in exchange for frequent updates and more features.

If you are an Advanced User / Business

What matters to you is stability and the features that already exist for your work, and you can easily verify what an operating system ships with before choosing it. You need to make sure your data is secure and the system is stable; even without frequent updates, you rely on what is already there. CentOS does deliver substantial updates, just not too frequently. One of the most important things to know is that CentOS is based on RHEL, and much useful software is available on CentOS only. For example, cPanel, which you may have heard of, is made for Red Hat-oriented operating systems. CentOS is among the most stable and secure operating systems (which does not mean Ubuntu is insecure or unstable). All you have to check is whether the software you need for your business is available.
Otherwise, CentOS offers more stability, and fewer updates can actually be a plus for user experience.

The final call is yours. What we can do is tell you what matters about each of these; you have to examine your use case and see which one suits you. If you don't know your use case yet and are trying Linux for the first time, go with Ubuntu: you will get more features and more available software than on CentOS. If you are concerned about the data on your server and want to give your audience a good experience, prefer CentOS.

Everything comes down to you and your use case. Both operating systems are good enough to perform over the long run, and you can rely on either for your web server. There are more things to consider in an operating system beyond what we have covered here, and you can always dig into the features of each. Make a checklist of what matters most as a priority, then look for those items in each OS. When you conclude that one of them covers more of your priorities, go for that one. If you still have doubts or queries, share them with us in the comments.

That's all, have a great day!
require 'creeper'
require 'celluloid'

module Creeper
  ##
  # The Fetcher blocks on Redis, waiting for a message to process
  # from the queues. It gets the message and hands it to the Manager
  # to assign to a ready Processor.
  class Fetcher
    include Celluloid
    include Creeper::Util

    TIMEOUT = 1

    def initialize(mgr, queues, strict)
      @mgr = mgr
      @strictly_ordered_queues = strict
      @queues = queues.map { |q| "queue:#{q}" }
      @unique_queues = @queues.uniq
    end

    # Fetching is straightforward: the Manager makes a fetch
    # request for each idle processor when Sidekiq starts and
    # then issues a new fetch request every time a Processor
    # finishes a message.
    #
    # Because we have to shut down cleanly, we can't block
    # forever and we can't loop forever. Instead we reschedule
    # a new fetch if the current fetch turned up nothing.
    def fetch
      watchdog('Fetcher#fetch died') do
        return if Creeper::Fetcher.done?

        begin
          queue = nil
          msg = nil
          job = nil
          conn = nil

          conn = Creeper::BeanstalkConnection.create
          begin
            job = conn.reserve(TIMEOUT)
            queue, msg = Creeper.load_json(job.body)
          rescue Beanstalk::TimedOut
            logger.debug("No message fetched after #{TIMEOUT} seconds") if $DEBUG
            job.release rescue nil
            conn.close rescue nil
            sleep(TIMEOUT)
            return after(0) { fetch }
          end

          if msg
            @mgr.assign!(msg, queue.gsub(/.*queue:/, ''), job, conn)
          else
            after(0) { fetch }
          end
        rescue => ex
          logger.error("Error fetching message: #{ex}")
          logger.error(ex.backtrace.first)
          job.release rescue nil
          conn.close rescue nil
          sleep(TIMEOUT)
          after(0) { fetch }
        end
      end
    end

    # Ugh. Say hello to a bloody hack.
    # Can't find a clean way to get the fetcher to just stop processing
    # its mailbox when shutdown starts.
    def self.done!
      @done = true
    end

    def self.done?
      @done
    end

    private

    # Creating the Redis#blpop command takes into account any
    # configured queue weights. By default Redis#blpop returns
    # data from the first queue that has pending elements. We
    # recreate the queue command each time we invoke Redis#blpop
    # to honor weights and avoid queue starvation.
    def queues_cmd
      return @unique_queues.dup << TIMEOUT if @strictly_ordered_queues
      queues = @queues.sample(@unique_queues.size).uniq
      queues.concat(@unique_queues - queues)
      queues << TIMEOUT
    end
  end
end
I haven’t had a lot of response to my question about a New RPG Blog Planet, but what I have had has ranged from “sounds like a good idea” (with suggestions for alternatives) to “SQUEEE!” (which I interpret as fairly positive). I’ve had a bit more time to think about it, and am picturing the following features and guidelines. - Blog owners can register one or more RSS feeds. At this point I think I want to moderate additions because there are several things I want to do: - Ensure the blog is in fact RPG-related. I really want to avoid spam and other crap. I have a fairly liberal interpretation of ‘RPG-related’. I considered inviting Subterranean Design simply because the pictures can be awesomely evocative scene-setters (decided not to because there is no real excerpt — the only thing I would generally be able to show is the actual picture, and I want to avoid post-stealing). - Confirm that the registrant in fact has authority to register the blog (probably by simply adding a post saying they’re signing up, I’m not aiming to be too hardass about it). Because I plan to store as much information as I can about the posts (including the full post body, for indexing and search reasons) I want to be as friendly about it as possible. - Confirm feed quality. My personal aggregator includes a few blogs that do not provide excerpts in their feeds, or have only limited body text. That is entirely up to the feed owner, but it limits the utility of this service if I’ve got nothing I can display or search. - All posts will be stored locally for indexing and search reasons. However, body text will not be displayed in full as body text, with the following exceptions: - If no explicit excerpt is provided, an excerpt will be taken from the start of the body text. - If the post is short enough and no explicit excerpt is provided, the generated excerpt may end up being the entire body text. 
- When a search is run, I expect to show match context (text around the matched search criteria) in the search results.
- Posts will be
  - Browseable by author (as identified by email address in the feed).
  - Browseable by date. I find that contrary to normal blog behavior, when browsing my personal aggregator by day (that is, reading each day in turn) it is much more useful to have each day’s posts presented in normal chronological order rather than reverse. However, the archive will likely still work in reverse chronological order.
  - Browseable by blog (source feed).
  - Browseable by keyword (tags and/or categories, as provided by source feed).
  - Searchable by text. Stretch feature: advanced searches, including author, feed, date ranges, and tags. “feat design tag:dnd-4e” sort of thing.
- Feeds will be categorized (definitely on registration, possibly afterward either by the feed owner or indirectly on request from the feed owner), so blogs may be browsed by category. “Show posts from OSR blogs” (or D&D 4e blogs, or Shadowrun blogs, or product review blogs, or whatever). When advanced searches become available, “dragon blogtype:osr” might be a valid option.
- Subscribers (registered users) might have
  - Multiple formats for viewing post lists. I can easily see minimalist entries of just a line or two on the screen, larger entries with some meta-information about the post and/or blog (possibly including blog avatar, author avatar, and/or post featured image), larger yet with the excerpt displayed (possibly with ‘related articles’), and so on. I don’t want to get too fancy yet, but I can see moving in this direction.
  - Post marking (for later reading, or whatever).
  - Perhaps allow logging of (personal) notes against feeds or posts, to make things more useful.
  - Perhaps allow ‘following’; I can imagine a facility where I generally like what someone writes, so making it easy to share personal lists of favorite posts or the like might be worthwhile. Definitely a stretch feature.
- Perhaps have personalized RSS feeds — aggregate the limited (excerpt-only) posts from a selected set of feeds into one. The subscriber would still need to follow links back to the original site to read the body text, though. A second stage might allow more dynamic and powerful personalized feeds, perhaps based on queries. “I’m interested in Carcosa, so give me a feed that tells me about posts mentioning Carcosa”… not sure about this one, it could be a bit of a challenge.

This is still pretty open to adjustment; I’m still brainstorming to a fair extent. I have a few overarching guidelines, though.

First, Wheaton’s Law: Don’t be a dick. Specifically, don’t just repackage other people’s stuff and publish it. Excerpts and indexing and searching are all cool; copying other people’s posts isn’t.

Second: As useful and usable as possible.

Third: Don’t overpromise. Start small and build up over time if there is demand. I don’t want this to take months to hit the streets; I’m quite happy to get something relatively rough up first, then build up from there.

So, anyone (still) interested? Want to register feeds with this site, or even just read it? Any other feature ideas I may have missed? This is a pretty high-level list and I’m open to suggestions.
How many Great and Bountiful Human Empires have existed?

Christopher Eccleston's Doctor refers a number of times to the "Fourth Great and Bountiful Human Empire". In total, how many so-called "Great and Bountiful Human Empires" have existed?

At least 3 others, presumably.

@Richard: That much is true.

I seem to remember something that may be relevant. I don't remember when, or which Doctor, but at some point he is questioned about how humans/Earth can be destroyed if they have seen future Earth and there are the Empires. All of these great and bountiful empires are in our future, and they have been to at least one. His response is that it can all change. I don't recall exactly, but the eventual point was that there could be many or none. Go forward from now and there would be a 4th, but go back and change something and there may be only 2, or none. Subject to change at any point, but 4 are recorded.

First Human Empire = Approx. 2500 A.D.

During the K9 episode "The Korven", there's a mention of hostilities between the Earth Empire and the titular Korven, a race of aliens who invaded the Earth in the 2400s. It's not stated whether this Empire was specifically "Great and Bountiful".

Second Great and Bountiful Human Empire = 4126 A.D.

DOCTOR: Ah, got it! The Ood-Sphere, I've been to this solar system before, years ago, ages! Close to the planet Sense-Sphere. Let's widen out. (he does it) The year 4126. That is the Second Great and Bountiful Human Empire.
DONNA: 4126. It's 4126. I'm in 4126.
DOCTOR: It's good, isn't it?
DONNA: What's the Earth like now?
DOCTOR: Bit full. But you see, the Empire stretches out across three galaxies.
Planet of the Ood

Third Great and Bountiful Human Empire = 7704 A.D.

As seen in the Doctor Who comic serial A Fairytale Life #01.

Fourth Great and Bountiful Human Empire = 200,000(ish) A.D.

DOCTOR: So, it's two hundred thousand, and it's a spaceship. No, wait a minute, space station, and er, go and try that gate over there. Off you go.
DOCTOR: The Fourth great and bountiful Human Empire. And there it is, planet Earth at its height. Covered with mega-cities, five moons, population ninety six billion. The hub of a galactic domain stretching across a million planets, a million species, with mankind right in the middle.
The Long Game

Subsequent Empires

Beyond the year 500,000, Earth appears to have abandoned Empires for some considerable time, choosing instead to become part of a wider Galactic Federation. There's a very brief reference to the New Earth Empire (founded after the destruction of Earth in the year 5,000,000) in the comic serial Agent Provocateur, but that's about it.

@praxis - There is no evidence of a "Fifth Great and Bountiful Empire" (or indeed any higher number) in the show.

Gratefully accepted. Wow. Nice finds! I wasn't expecting there to be much of a canon answer to this one.

@randal'thor - The wonderful thing about a show that's run for 50+ years is that there's an answer for pretty much everything.

Great answer! Just wanted to add that the idea of a human "Empire" existing in the 2500s had already been established in a number of Doctor Who stories prior to that K9 episode, such as the Third Doctor stories Colony In Space and The Mutants; see the Earth Empire article on the TARDIS wikia.

@Hypnosifl - I was looking for a reference to the actual words "Human Empire", not just "Empire", which is why I plumped for K9.

Well, doing a search of the transcripts on chakoteya.net, I see that the Empire had been referred to as "Earth's Empire" in The Mutants and Frontier in Space, which seems like it's probably equivalent in meaning to "human empire". Incidentally it was followed by the Galactic Federation in the 3000s, which may have been a less specifically human-centric empire (so not really relevant to the question, but it's interesting to see that as much as Doctor Who shuns detailed continuity, in broad strokes there is a somewhat consistent future history).
Why is the HTTP Location header only set for POST requests/201 (Created) responses?

Ignoring 3xx responses for a moment, I wonder why the HTTP Location header is only used in conjunction with POST requests/201 (Created) responses. From the RFC 2616 spec:

For 201 (Created) responses, the Location is that of the new resource which was created by the request.

This is a widely supported behavior, but why shouldn't it be used with other HTTP methods?

Take the JSON API spec as an example: it defines a self-referencing link for the current resource inside the JSON payload (not uncommon for RESTful APIs). This link is included in every payload. The spec says that you MUST include an HTTP Location header if you create a new document via POST, and that its value is the same as the self-referencing link in the payload - but this is ONLY required for POST. Why bother with a custom format for a self-referencing link if you could just use the HTTP Location header?

Note: This isn't specific to JSON API. It's the same for HAL, JSON Hyper-Schema, and other standards.

Note 2: It isn't even specific to the HTTP Location header, as the same applies to the HTTP Link header. As you can see, JSON API, HAL, and JSON Hyper-Schema not only define conventions for self-referencing links, but also ways to express information about related resources or possible actions for a resource. But it seems that they all could just use the HTTP Link header. (They could even put the self-referencing link into the HTTP Link header, if they don't want to use the HTTP Location header.)

I don't want to rant; it just seems to be some sort of "reinventing the wheel". It also seems very limiting: if you just used the HTTP Location/Link headers, it wouldn't matter whether you ask for JSON, XML, or whatever in your HTTP Accept header, and you would get useful meta-information about your resource on a HEAD request - which wouldn't contain the links if you used JSON API, HAL, or JSON Hyper-Schema.
The semantics of the Location header aren't those of a self-referencing link, but of a link the user agent should follow in order to complete the request. That makes sense in redirects, and when you create a new resource that will be at a new location you should go to. If your request is already completed, meaning you already have a full representation of the resource you wanted, it doesn't make sense to return a Location.

The Link header may be considered semantically equivalent to a hypertext link, but it should be used to reference metadata related to the given resource when the media type is not hypermedia-aware, so it doesn't replace the functionality of a link to related resources in a RESTful API.

The need for a custom link format in the resource representation is inherent to the need to decouple the resource from the underlying implementation and protocol. REST is not coupled to HTTP, and any protocol for which there's a valid URI scheme can be used. If you decided to use the Link header for all links, you'd be coupling yourself to HTTP. Let's say you present an FTP link for clients to follow. Where would the Link header be in that case?

"REST is not coupled to HTTP, and any protocol for which there's a valid URI scheme can be used. [...] If you decided to use the Link header for all links, you're coupling to HTTP." I think this point is the one that is the most coherent for me.

I'm sorry, after thinking about it again I think I don't get it... "The Link header should be used to reference metadata related to the given resource." That sounds exactly the same as "the functionality of a link to related resources in a RESTful API"? "REST is not coupled to HTTP"... maybe. But if this were a problem, wouldn't I need to expose my HTTP methods in my payload too? E.g. /users/123/delete or /users/123/put? My DELETE and PUT depend on HTTP... why shouldn't my links?
But as another counter-argument: should I treat my JSON documents for machines the same as my HTML documents for humans...? I wouldn't ask whether I should move the links and buttons from my HTML into the HTTP header...

Yes, in that sense you may say the Link header is semantically equivalent to the links in the payload, and some people defend implementing HATEOAS with the Link header alone. I think it should be reserved only for media types that are not hypermedia-aware. Just remember that REST is an architectural style based on the successful design decisions made for the web itself. A REST API should be navigable and discoverable in the same way a website is. Do you usually check for links in your HTTP headers when you're browsing a webpage? If you're not treating your JSON documents in the same way as your HTML for humans, meaning they're not hypermedia-aware, then you're not using REST.

You should use the Link header when you need to provide links for a media type that's not hypermedia-aware. For instance, say you want to reference the author of an image: you can't embed a link in the image itself.

Recommended reading: http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven

Regarding methods in your hypermedia documents, this discussion popped up recently. You might want to check that group too. https://groups.google.com/forum/#!topic/api-craft/v0GNu7ksO3s

Thank you very much for your effort. I'll look into your last link. (I know the article from the first link, but it is very theoretical. It says to "do" hypertext, but not exactly "how".)

There's no mystery about the how: it's done the way any website does it. It's usually not so obvious because most APIs use a format like JSON, with no native syntax for hyperlinks. Check hal+json, it should give you ideas. http://stateless.co/hal_specification.html

I think it "is" a little mystery. If it were obvious, we wouldn't have so many specs doing it all differently :) hal+json is just "one" way to do it.
Thanks again for your reply.

The semantics of the Location header depend on the status code. For 201, it links to the newly created resource, but in 3xx responses it can have multiple (although similar) meanings. I think that is why it is generally avoided for other usages.

The alternative is the Content-Location header, which always has a consistent meaning: it tells the client the canonical URL of the resource it requested. It is purely informative (in contrast to Location, which is expected to be processed by the client). So the Content-Location header more closely resembles a self-referencing link. However, Content-Location also has no defined behavior for PUT and POST, and it seems to be quite rarely used.

The blog post Location vs Content-Location is a nice comparison. Here is a quote:

Finally, neither header is meant for general-purpose linking.

In sum, requiring a standardized self link in the body seems to be a good idea. It avoids a lot of confusion on the client side.
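To make the 201 convention concrete, here is a small hypothetical Python sketch of a server-side handler that sets the Location header to the same URL it embeds as the payload's self link. All names, the id, and the base URL are made up for illustration; this is not from any particular framework:

```python
import json

def create_user(payload, base_url="https://api.example.com"):
    """Return (status, headers, body) for a hypothetical POST /users."""
    new_id = 123  # pretend the data store assigned this id
    self_url = f"{base_url}/users/{new_id}"
    body = {"data": payload, "links": {"self": self_url}}
    headers = {
        # Per RFC 2616 (and its successor RFC 7231), Location on a
        # 201 response points at the newly created resource.
        "Location": self_url,
        "Content-Type": "application/vnd.api+json",
    }
    return 201, headers, json.dumps(body)

status, headers, body = create_user({"name": "Ada"})
# The header and the in-body self link carry the same URL, which is
# exactly the duplication the question above is complaining about.
assert headers["Location"] == json.loads(body)["links"]["self"]
```

The sketch shows why the question arises: for the POST/201 case the header and the payload link are redundant, yet only the payload link survives for GET, PUT, and friends.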
I first heard about “kanban” years ago, when a former boss and agile coach started talking about it more and more. At the time, I let it slide as a fad, but it stayed in the back of my mind as I went through life. It wasn’t until last year, when I started looking for a replacement for my failing to-do lists, reminder programs, and GTD systems, that I returned to the idea. While looking for that replacement, I remembered how well I did in an agile environment and how effective the task board always was for me, and started planning a personal agile system in my head. Then I remembered this “kanban” idea and started looking into it. What I found intrigued me; I read everything I could find, and eventually came to Personal Kanban.

A kanban is a method of increasing transparency, usually found in manufacturing environments. In short:

The first official use of kanban can be traced to Taiichi Ohno’s work at Toyota. He needed a way to quickly communicate to all workers how much work was being done, in what state it was, and how the work was being done. His goal was to make work processes transparent – meaning he wanted everyone, not just managers, to know what was “really” going on. The goal was to empower line workers to improve how Toyota worked. Everyone had a hand in making Toyota better.

Personal Kanban is similar, but is more focused on knowledge workers’ tasks. Overall, it really boils down to 2 rules:

- Limit your work in progress
- Visualize your work

That’s it! This was perfect for me: a system that gave me structure, yet still allowed me to be flexible in how I applied it. My use of personal kanban has been pretty steady since then, and my flow has evolved. I started with the common:

Ready > Doing > Done

Currently, it is a little more complex:

Backlog > Investigate(3) > Ready > Doing(3) > Done

This means that items start in my backlog.
I then pull up to 3 items into my Investigate column, where I make sure I know what each task really means. This is where I break the task up, get better estimates, and identify any potential questions or blockers that I need clarification on. Once I investigate, I move an item to the Ready state, and then, when I’m ready, I pull it into the Doing state, which is where I actually do the work. I don’t move another item into the Doing state until I move one out of it (see rule number 1).

I track all of this on a kanban board, which is like a scrum board (above), but with more columns. I use color to associate items with projects, and at times I’ll include information about estimates or bug numbers on the post-it notes. This works for me, but it still evolves; I change the board when I have new needs, or when the board isn’t properly giving me feedback about what is going on. My goal is to have it get out of the way as much as possible, and eventually to move it to a Moleskine notebook. That will give me a level of mobility for my board, without requiring network connections or carrying around a whiteboard that could get erased.

I’ve found kanban to be a useful tool for me, and I’m always learning more about it and how to use it better. I love to hear other perspectives though, so if you have one, feel free to chime in!
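The two rules boil down to something small enough to model in a few lines. This is just a toy Python sketch of the Backlog > Investigate(3) > Ready > Doing(3) > Done flow described above (the task names are made up, and no real kanban tool works this way):

```python
# Columns visualize the work (rule 2); WIP_LIMITS caps Investigate
# and Doing at 3 items each (rule 1), matching the flow above.
WIP_LIMITS = {"Investigate": 3, "Doing": 3}

board = {
    "Backlog": ["task-a", "task-b", "task-c", "task-d"],
    "Investigate": [], "Ready": [], "Doing": [], "Done": [],
}

def pull(board, item, src, dst):
    """Move an item between columns, refusing to exceed a WIP limit."""
    limit = WIP_LIMITS.get(dst)
    if limit is not None and len(board[dst]) >= limit:
        raise RuntimeError(f"{dst} is at its WIP limit of {limit}")
    board[src].remove(item)
    board[dst].append(item)

pull(board, "task-a", "Backlog", "Investigate")
```

The useful part is the refusal: a fourth pull into Investigate raises instead of silently piling work up, which is exactly what a physical board with only three slots per column does.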
// Package ansicolor colors strings using ANSI escape sequences.
package ansicolor

import (
	"strconv"
)

// Color functions
var (
	Clear         = newFunc(0)
	Reset         = Clear
	Bold          = newFunc(1)
	Dark          = newFunc(2)
	Italic        = newFunc(3)
	Underline     = newFunc(4)
	Blink         = newFunc(5)
	RapidBlink    = newFunc(6)
	Negative      = newFunc(7)
	Concealed     = newFunc(8)
	StrikeThrough = newFunc(9)

	Black   = newFunc(30)
	Red     = newFunc(31)
	Green   = newFunc(32)
	Yellow  = newFunc(33)
	Blue    = newFunc(34)
	Magenta = newFunc(35)
	Cyan    = newFunc(36)
	White   = newFunc(37)

	OnBlack   = newFunc(40)
	OnRed     = newFunc(41)
	OnGreen   = newFunc(42)
	OnYellow  = newFunc(43)
	OnBlue    = newFunc(44)
	OnMagenta = newFunc(45)
	OnCyan    = newFunc(46)
	OnWhite   = newFunc(47)

	IntenseBlack   = newFunc(90)
	IntenseRed     = newFunc(91)
	IntenseGreen   = newFunc(92)
	IntenseYellow  = newFunc(93)
	IntenseBlue    = newFunc(94)
	IntenseMagenta = newFunc(95)
	IntenseCyan    = newFunc(96)
	IntenseWhite   = newFunc(97)

	OnIntenseBlack   = newFunc(100)
	OnIntenseRed     = newFunc(101)
	OnIntenseGreen   = newFunc(102)
	OnIntenseYellow  = newFunc(103)
	OnIntenseBlue    = newFunc(104)
	OnIntenseMagenta = newFunc(105)
	OnIntenseCyan    = newFunc(106)
	OnIntenseWhite   = newFunc(107)
)

// newFunc returns a function that wraps text in the SGR escape
// sequence for colorCode and resets attributes afterward (ESC[0m).
func newFunc(colorCode int) func(string) string {
	return func(text string) string {
		return "\x1b[" + strconv.Itoa(colorCode) + "m" + text + "\x1b[0m"
	}
}
Why is X mirror not working on a mesh?

Does X-Axis Mirror work on a mesh? I applied X-Axis Mirror to a set of bones and it worked, but not on the mesh. I tried to mirror the mesh using the X-Axis Mirror option but it doesn't mirror anything. Here's the .blend.

Could you explain your question in a bit more detail?

I tried to mirror the mesh using the X-Axis Mirror option but it doesn't mirror anything.

The mesh isn't exactly symmetrical. This behavior is intentional: meshes which are almost (but not quite) symmetrical won't detect vertices as mirrored. This is needed to avoid problems with high-poly meshes where vertices may be very close together. To resolve the problem, you can use the Snap to Symmetry tool. See: Mesh -> Snap to Symmetry. This has options to select a distance threshold and choose which side of the axis to use.

This is because your mesh is not symmetrical. X mirror looks for a corresponding vertex on the opposite side of the object's origin: grabbing the vertex at 1 will look for a vertex at 2. If your mesh is not symmetrical, chances are there is no vertex at 2. If there's no corresponding vertex, then X mirror can't do anything.

What if I deleted half of the mesh? Would it work?

@ChristopherMonday Yes, if you mirrored the remaining half back over to the other side, in order to provide X mirror with something to work off of.

@gandalf3 It didn't work https://drive.google.com/file/d/0BwnCnOOEcD5qbGJGZXphQU9LR1E/view?usp=sharing

@ChristopherMonday It's still asymmetrical. You need a symmetrical mesh with symmetrical topology for X mirror to work properly.

Looks symmetrical now... https://drive.google.com/file/d/0BwnCnOOEcD5qek1CVUxVWlZCeHc/view?usp=sharing

Looks like one side is still missing, so it's still not symmetrical. Try mirroring the existing side to the other side (either with a mirror modifier or a destructive method, see http://blender.stackexchange.com/q/14634/599)

What about now?
https://drive.google.com/file/d/0BwnCnOOEcD5qYlhJbzhpdkR3ZGM/view?usp=sharing

You can also get burned by this if your mesh has been rotated at some point and you haven't yet applied the rotation.
Python order of evaluation for a logging instantiation. Why do imported modules get evaluated first?

Hi, I have kind of an advanced question. I'm basically having a hard time understanding why a downstream class's class-attribute log object is being instantiated first. I have a Python application with about 20 classes across 3 different modules, and the application does some logging. I want to make the logging path configurable from the CLI, but at first I just hard-coded the logging path (just app.log) and instantiated a logging object. That logging instance was then passed to all other parts of the application with the hard-coded path.

Now I'm trying to allow the user to set the path of the logging. So in my Log class, I have a singleton-esque method set_handler that sets the handler, which is a class variable. It follows a singleton pattern in that it can only be set once. The user passes in a logging path for the "first" handler. If the handler is already set, later log instantiations don't configure a new handler; the same handler is used, and just a new log object is returned. So if the user passes in xyz.log, it will be set forever, fine.

But I can't seem to nail down when it is actually called for the first time. The entry point is much different from where the first logging object gets instantiated. I'm trying to force the log instantiation, but Python keeps making the first instantiation happen somewhere else. It's actually in a class that is imported by an imported class, which is imported by the class I actually want to execute first. And that class (which the code doesn't reach until much later) has a log object that is being instantiated first, before the user-provided log path has been passed in, which defeats my intent. So basically I'm not able to force Python to evaluate the proper log instantiation first.
This may not answer your question directly, but something that is not obvious, at least it wasn't to me, is that the first module that does import logging gets to call the shots. That might be in a module of yours, or it might be in something else that your module imports (any number of levels down). To be sure of being in control of the log instantiation, do it in a very early import by your main program, even if the linter says that import should be lower down. Oh, that's great to know, and thanks for adding that. The class that is doing the instantiation first is actually just importing os, subprocess and three other first-party classes including the Log wrapper class. It seems only the class variables get evaluated first. If I add a singleton pattern to those as well, then it continues on. But it's a crap design. The logging package will be imported the first time the Python interpreter encounters the statement import logging. At that point it will not be "configured", so nothing will get logged. It is only when you call one of the logging package's configuration functions that logging begins. You say that the user "passes in" the path of the log file (I think that's what you mean) - does this happen on the command line? In that case, you can't configure logging until your code parses the command line (obviously), so if you need to log something in the meantime, you have to send it somewhere else. Yeah, I was collecting a list of messages while the user input was being parsed. Actually what happened is described here: https://chase-seibert.github.io/blog/2012/01/20/python-class-attributes-are-evaluated-on-declaration.html. I was storing the logger as a class attribute, and when I imported the class, that attribute was evaluated. So the logging object was instantiated as soon as the class holding the class attribute was imported (when Class2 was imported, its class attribute log was evaluated and thus instantiated).
This guy wrote a blog post about it - https://chase-seibert.github.io/blog/2012/01/20/python-class-attributes-are-evaluated-on-declaration.html. So to lay it out in more detail, I had one file like:

    import other_stuff
    from module2 import Class2

    class Class1:
        def __init__(self):
            ...  # do stuff
        def func1(self):
            ...  # do stuff with Class2

And then module 2 looked like:

    import os
    import subprocess
    import other_stuff

    class Class2:
        log = LoggerClass(log_path=__name__)
        def __init__(self):
            ...
        def func2(self):
            ...  # do stuff

And the application entry point was like so. This was the part that I wanted to evaluate first... but it did not evaluate first:

    import click
    import other_stuff
    from module1 import Class1

    @click.command("start")
    @click.option("--logpath")
    def start(logpath):
        log = LoggerClass(log_path=__name__)

I was thinking that the log object in the start function in the entry point would be the first one instantiated, but it was actually the log object that was a class attribute of Class2 that was instantiated first. This is because a class's class attributes are evaluated when the module defining it is imported: importing a class executes its class body, including all class attribute assignments. So I changed the code in Class2 to instantiate the logger lazily (note the attribute is renamed to _log so the classmethod doesn't overwrite it - with both named log, the method definition would shadow the attribute and the None check would never succeed):

    import os
    import subprocess
    import other_stuff

    class Class2:
        _log = None
        def __init__(self):
            ...
        def func2(self):
            ...  # do stuff
        @classmethod
        def log(cls):
            if cls._log is None:
                cls._log = LoggerClass(log_name=__name__)
            return cls._log

And then of course I changed the statements that do the logging from Class2.log.logstuff() to Class2.log().logstuff(). The calls that log stuff no longer reference the class attribute directly; they go through the class method log(), which creates the logger on first use.
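To make the timing concrete, here is a minimal, self-contained sketch of both patterns. The Logger, Eager and Lazy names are hypothetical stand-ins for the classes in the application above, not the real code:

```python
created = []  # records the order in which loggers are built

class Logger:
    """Hypothetical stand-in for the LoggerClass wrapper above."""
    def __init__(self, name):
        self.name = name
        created.append(name)

class Eager:
    # Evaluated as soon as the class body runs, i.e. when the
    # defining module is imported -- before any code calls Eager.
    log = Logger("eager")

class Lazy:
    _log = None  # the real attribute lives under a different name

    @classmethod
    def log(cls):
        # Built on first call instead of at import time.
        if cls._log is None:
            cls._log = Logger("lazy")
        return cls._log

# At this point only "eager" exists; "lazy" appears on first use.
```

Immediately after the class definitions, created is ["eager"]; "lazy" is appended only once Lazy.log() is first called. This also shows why the attribute must be renamed to _log: if the classmethod were also called log, the method definition would overwrite the attribute.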
STACK_EXCHANGE
Which Sitemap to keep - Http or https (or both) ashishb01 last edited by Just finished upgrading my site to the SSL version (like so many other webmasters now that it may be a ranking factor). Fixed all links, CDN links are now secure, etc., and 301-redirected all pages from http to https. Changed the property in Google Analytics from http to https and added the https version in Webmaster Tools. So far, so good. Now the question is: should I add the https version of the sitemap for the new HTTPS site in Webmasters, or retain the existing http one? Ideally, switching over completely to the https version by adding a new sitemap would make more sense, as the http version of the sitemap would now be redirected to HTTPS anyway. But the last thing I want is to get penalized for duplicate content. Could you please advise, as I am still a rookie in this department. If I should add the https sitemap version to the new site, should I delete the old http one, or is there no harm in retaining it? RangeMarketing last edited by Add the new version and delete the old. Retaining the old sitemap shouldn't flag you for duplicate content; however, it doesn't make sense to tell Google to crawl those pages if they are just going to redirect to new pages that are in the new sitemap. Extra work for GoogleBot. Hope this helps! I am about to submit a sitemap for one of my clients via Webmaster Tools. The issue is that I have way too many URLs that I don't want indexed by Google, such as testing pages and auto-generated pages. Is there a way to remove certain URLs from the XML sitemap, or is this impossible? If impossible, is the only way to control these URLs to "noindex" all the pages that I don't want the search engine to see? Thanks Mozzers,Technical SEO Issues | | Ideas-Money-Art0 Followed Google's instructions for using the Change of Address Tool in GWT to move rethinkisrael.org to www.fromthegrapevine.com.
I'm getting this message, "We tried to reconfirm ownership of your old site (rethinkisrael.org) and failed. Make sure your verification token is present, and try again." Even though the site is verified, we undid the DNS change and checked the meta verification tag. The tag is correct. And, since the site is ALREADY verified, there was NO way to 'verify' in GWT again. The message in GWT says "verification successful." We redid the DNS change, tried again to do the address change, and get the same error message. Any ideas?Technical SEO Issues | | Aggie0 If Google knows about our sitemaps and they're being crawled on a daily basis, why should we use the http ping and/or list the index files in our robots.txt? Is there a benefit (i.e. improving indexability) to using both ping and listing index files in robots? Is there any benefit to listing the index sitemaps in robots if we're pinging? If we provide a decent <lastmod> date, is there going to be any difference in indexing rates between ping and the normal crawl that they do today? Do we need to do all of these to cover our bases? MarikaTechnical SEO Issues | | marika-1786190 I have a videos portal. I created a video sitemap.xml and submitted it to GWT, but after 20 days it has not been indexed. I have verified in Bing Webmaster as well. All videos are dynamically fetched from the server. All my static pages have been indexed, but not the videos. Please help me figure out where I am making the mistake. There are no separate pages for single videos; all the content comes dynamically from the server. Your answers will be much appreciated. ThanksTechnical SEO Issues | | docbeans0 Hope everyone is enjoying the new year! I was wondering if converting your desktop website to a mobile one, for example via http://my.dudamobile.com/, has any negative effects on SEO. Did it affect your site? Do you recommend doing it? Does it affect links?
When people link to your desktop URL, does that authority carry to the mobile site, or would it be better if they linked to the mobile (m.website.com) URL? Is http://my.dudamobile.com/ a good choice? Any feedback, as always, is greatly appreciated! JimmyTechnical SEO Issues | | jimmy02250 I'm in the process of migrating a whole site, which has excellent rankings built through ongoing SEO over the years, from http to https. What is the safest way of doing this while maintaining rankings? I'm assuming a 301 redirect of every page from http to https? Thanks!Technical SEO Issues | | A_Q1 Over the last month we've included all images, videos, etc. in our sitemap and now its loading time is rather high. (http://www.troteclaser.com/sitemap.xml) Is there any maximum sitemap size that is recommended by Google?Technical SEO Issues | | Troteclaser0
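For the sitemap side of an http-to-https migration like the ones discussed above, one low-effort approach is to regenerate the sitemap with the new scheme rather than hand-editing it. A minimal Python sketch, assuming a standard sitemaps.org urlset file (the filenames are hypothetical):

```python
# Rewrite every <loc> in an existing sitemap from http:// to https://
# so the https version can be submitted to Webmaster Tools.
import xml.etree.ElementTree as ET

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
ET.register_namespace("", NS)  # keep the default sitemap namespace on output

def https_sitemap(src, dst):
    tree = ET.parse(src)
    for loc in tree.getroot().iter(f"{{{NS}}}loc"):
        if loc.text and loc.text.startswith("http://"):
            loc.text = "https://" + loc.text[len("http://"):]
    tree.write(dst, xml_declaration=True, encoding="UTF-8")
```

The rewritten file can be submitted as the https property's sitemap; the old http sitemap can then be deleted, in line with the advice above.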
OPCFW_CODE
C++26 adds hazard pointers The ISO C++ standards committee chair gives an initial insight into C++26. This is a standard for the C++ programming language that will be issued in 2026. A new standard for the C++ programming language is issued every three years. The last release, C++20, will be succeeded at the end of this ... Read more Software's post-pandemic recessionary lag factor Knowledge is power. But further than any mere level of learning by rote understanding, we use the term 'savvy' to denote a particular level of practical knowledge and ability. Coming out of the post-pandemic period of preparation for the ensuing recession will (arguably) require more savvine... Read more 'SQL skills are the most sought-after programming skills of 2022' According to IEEE Spectrum's latest survey, Python is the most popular programming language of 2022, while SQL skills are the highest in demand. IEEE Spectrum conducts an annual survey of the most popular and sought-after programming languages. Python, C and C++ emerged as the three most popular ... Read more Linux kernel loses IDE support in release candidate of version 5.14 Linus Torvalds published the first release candidate for version 5.14 of the Linux kernel. Torvalds had some remarks accompanying the release, where he made clear his hopes that the release cycle will be smooth, with observations about the smoothness of the process related to the size of the releas... Read more Google Cloud launches Apigee X to help enterprises scale up On Wednesday, Google announced the launch of Apigee X, the new major release of the Apigee API management platform it bought in 2016. Amit Zavery, Google Cloud's head of platform, said that if we look at current events, especially after the pandemic started in 2020, the volume of digital activiti... Read more New version of Node.js has new diagnostics features A new version of Node.js has been released.
Version 15.1.0 offers, besides the usual bug fixes and other things, a new diagnostics channel feature. In a diagnostics channel, a developer can keep track of all events that take place within an application. This is done in a separate object, so that ... Read more Red Hat has released the RHEL 8.3 beta to users Red Hat's updates are usually spaced six months apart, and the new update for Red Hat Enterprise Linux (RHEL) is here. The beta for RHEL 8.3 will work on all major RHEL architectures, including IBM Power, IBM Z, AMD, Intel 64-bit, and 64-bit ARM. For users who have the AMD architecture, Secure En... Read more Python 3.8 introduces Walrus Operator and more Python 3.8 has been released this October, bringing with it new capabilities designed to help developers produce their code effectively and efficiently. Known for being open source, Python continues to be a popular, general-purpose programming language used for server-side web development, softw... Read more Latest release of the Appian Platform makes coding faster Appian has come up with a new version of its low-code platform. It should be easier than ever to build business applications. In addition, the process with the renewed platform should be up to twenty times as fast. This is partly due to the expansion of the number of possibilities offered by the platform... Read more Developers can register a .dev domain with Google For a long time, Google used .dev domains only internally. But recently it has become possible for developers to register such a domain as well. That's not cheap yet: at this moment you pay 11,500 dollars (10,132 euros) for a domain name. There's an additional $12 a year. According to Google, the dom... Read more
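The assignment expression mentioned in the Python 3.8 item above (the "walrus operator", :=) lets you bind a name inside a larger expression; a quick illustrative sketch:

```python
# Assignment expressions, introduced in Python 3.8: bind and use
# a value within a larger expression.
data = [4, 11, 7, 15]

# Without := you would either call len() twice or add a separate
# assignment statement before the if.
if (n := len(data)) > 3:
    message = f"{n} items"

# Handy in comprehensions to avoid recomputing a value per element.
doubled_big = [y for x in data if (y := x * 2) > 10]
```

Both uses avoid a duplicate computation or a throwaway temporary statement, which is the main motivation given in PEP 572.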
OPCFW_CODE
Is there a shorter expression for 'are not commonly discussed as much as'? I am currently doing some writing, and constantly try to create smooth transitions, ease of readability, and linkage between sentences. This sentence feels like it kind of halts the reader a bit. Albeit although assemblers are not commonly discussed as much as the two language processors, they are, however, equally important in the pipeline. Replace 'are not commonly discussed as much as' with something that means virtually the same thing but might be a single word. 'Albeit although' - pick one. Drop 'however', & suddenly the whole sentence loses the limp. The part you're asking about is not the problem. The previous version of this question seemed like open-ended proofreading, which is off topic. However, asking for a word/phrase is squarely on topic. For best results, review the tag wiki. Simplify to "Assemblers are not discussed as much as the two language processors, but are equally important in the pipeline." Though the question is arguably on-topic, the answers being offered are, while good writing advice ... writing advice. @EdwinAshworth does rephrasing count as a phrase-request? Replacing a perfectly valid word / longer expression is both asking for style advice and usually open-ended, @stevesliva. The phrase receive less attention than would work here and save you a few words. Ultimately, though, this is all a matter of taste. You can reduce the verbiage by using a comparative form. If you change "not commonly" to "less frequently", then you can reduce "as much as" to "than": Although assemblers are less frequently discussed than the two language processors, they are equally important in the pipeline. Note that I got rid of the conjunction "albeit" (which was not appropriate in that position) and "however" (which was redundant due to the presence of "although"). (Tetsujin first suggested that, in a comment above.)
are discussed less than gets you all the way to as short as using synonyms like are less notable than or noteworthy etc. @stevesliva I don't understand your comment entirely, but I think that you're suggesting "less notable" or "less noteworthy" instead of my "less frequently discussed". Yes, that would reduce the length by one word, but it would also change the meaning a bit more than my version does. Consider changing "Dogs are not commonly discussed as much as cats" to "People discuss cats more than dogs." 6 words, not 9; 8 syllables, not 12; 29 letters, not 39. Brevity aside, three issues present themselves. First, the revision is in the active voice, whereas the original is not. Generally, the active is preferred to the passive voice. Second, both expressions might suffer from the critiques of pettifogging grammar dweebs who'd insist on "Dogs are not as commonly discussed as cats are" and "People discuss cats more than they do dogs." I'd concede the point but argue its usefulness. Finally, the revision subtly alters the sentence's meaning in that it changes emphases. The original focuses on dogs receiving less attention than cats, hinting that dogs are losing out somehow. The revision is feline-focused: cats are more interesting, more worthy than dogs. A nuanced difference, I admit, but not an insignificant one. All good editing is reductive. The best editing clarifies as well. An idiom you might use is get short shrift, although there's probably a large chunk of your audience who will be back here asking what it means. One of its meanings is "get little attention". Assemblers get short shrift but they're no less important in the pipeline than the two language processors. Based on what I imagine the context to be, this sounds a bit too informal to me. @Casey: "pipeline" seems a rather informal metaphor to me, so I don't think there is a register clash, but I take your point. I had upvoted receive less attention as neutral.
STACK_EXCHANGE
For a web-based application, View provides a powerful set of tools to introspect the application page, develop test logic, and maintain it as the application changes. Element is the fundamental entity in a View that you work with, and locating the right element on the page is the first step in achieving this functionality. When you are hovering in the View canvas, in either Action Logic or in the Context, ACCELQ highlights elements on the screen depending on your mouse cursor position. You can then right-click on a hovered element to insert an operation (in Action logic), to add the element to the repository, or to manage it otherwise. In most instances, this direct hover will work. Hovering on the right element is important Most modern web applications are rich in presentation, layout, and UX. Your application developer may have prepared the HTML with multiple nested elements in a given functional area to bring out the desired layout and functionality. When you are hovering on an element, be sure to review the tooltip to make sure you are dealing with the right kind of element. If required, move the cursor around a bit to get to the desired element. Exploring the View for conflicting element placements Occasionally, there may be situations where you are not able to hover on the exact element you need on the View canvas. This could be due to conflicting placement of elements on the page, a popover control hiding the elements you are interested in, etc. Sometimes, background elements on a screen with multiple windows may also be an issue. Explore Mode allows you to freely navigate and explore the elements on the View canvas in order to pinpoint the element of interest. It allows you to override the default selection of hover-element for a given cursor position. It offers the flexibility of reaching any arbitrary element on the screen, just like you would in your browser Dev Tools (Inspect Element in Chrome, for example).
When you are in Explore mode, you can navigate over a set of elements based either on physical vicinity or on the DOM hierarchy. Switching to Explore Mode When you are hovering on a View, you can switch to Explore mode by double-clicking on the element vicinity you are interested in. Pick any approximate element in that area and start exploring by double-clicking. - You will find a notification bar at the top of the View indicating you are operating in Explore mode. There are two different traversals possible: Vicinity and DOM - You can explore elements using the keyboard up/down and right/left arrow keys. - Regular mouse hover is disabled on the View. Moving the cursor around the screen has no effect. - Right-clicking on any other element gives an error message. - You can switch to exploring another element-vicinity by double-clicking again. Exiting from Explore mode Simply press the Escape key on your keyboard to exit Explore mode. Traversing elements based on Vicinity This is the default Explore mode navigation type. You can navigate between a set of elements based on their physical vicinity on the page. These elements are related by their physical placement, rather than the DOM hierarchy. This is suitable when you are dealing with a set of overlapping elements located at the same cursor position. All elements which "contain" the double-clicked cursor position form the set, and you can use the keyboard up/down arrows to go back and forth between these elements. When you find the element of interest, right-click and select an operation from the context menu. Consider an example. We are trying to insert a command on the "Home" link in the figure below. When you hover on the Home link, it highlights a <div> element that is much larger in size. This div element is invisible on the screen but rendered at a higher z-index, effectively sealing off access to the "Home" link. You cannot access the "Home" link in this state.
The second image shows what happens when you hover on the "Home" link and double-click. As you can see, the control is now tied to the "Home" link, and it also indicates there are 24 elements hit at the same cursor position. If needed, you can use the keyboard up/down arrows to navigate within these 24 elements. Now that the control is on the "Home" link, we can right-click and insert a command from the context menu. Traversing elements based on DOM hierarchy This navigation list is formed strictly based on the DOM structure. When you start Explore mode, a set of elements is formed which are part of the same DOM hierarchy as the currently highlighted one. You can use the keyboard up/down/right/left arrows to navigate vertically (ancestors) or horizontally (sibling nodes) in the DOM. A classic example is the ability to navigate from a table cell <td> to a table row <tr> to the table <table> in an HTML table control on a page. You can double-click in a table cell and press the up/down arrow keys to move up to <tr> or <table>, and use the left/right arrow keys to move between sibling table cells. Switching between DOM and Vicinity Navigation Simply click on the switch button in the Explore Mode navigation bar at the top. Vicinity mode collects all elements which contain the current cursor position within their display area. You can navigate between elements in this collection by using the keyboard up/down arrows. Note: You cannot use the keyboard right/left arrows in this mode. DOM mode allows you to navigate the DOM hierarchy by accessing a parent, child, or sibling. You can navigate by using the keyboard up/down (ancestor and child elements) and right/left (siblings) arrows. Ambiguous Elements on mouse-hover When you are freely hovering in the View canvas, there may be situations where multiple elements are present on the screen at the same position. ACCELQ automatically prioritizes these matching elements and points to one of them by default (indicated by the tooltip on the element-hover).
But there may be occasions when this default cannot be determined with an acceptable level of certainty. In those situations, the hover area is marked as "ambiguous", and you are prompted to double-click and switch to Explore mode. Once you are in Explore mode, you can use the tools available to zero in on the desired element.
OPCFW_CODE
What are the differences between gnome, gnome-shell and gnome-core? I am running Ubuntu GNOME and apt says that I have gnome-shell installed, but not gnome or gnome-core.

    $ apt-cache policy gnome
    gnome:
      Installed: (none)
      Candidate: 1:3.8+4ubuntu3
      Version table:
         1:3.8+4ubuntu3 0
            500 http://in.archive.ubuntu.com/ubuntu/ trusty/universe i386 Packages

    $ apt-cache policy gnome-shell
    gnome-shell:
      Installed: 3.10.4-0ubuntu5
      Candidate: 3.10.4-0ubuntu5
      Version table:
     *** 3.10.4-0ubuntu5 0
            500 http://in.archive.ubuntu.com/ubuntu/ trusty/universe i386 Packages
            100 /var/lib/dpkg/status

    $ apt-cache policy gnome-core
    gnome-core:
      Installed: (none)
      Candidate: 1:3.8+4ubuntu3
      Version table:
         1:3.8+4ubuntu3 0
            500 http://in.archive.ubuntu.com/ubuntu/ trusty/universe i386 Packages

Why does apt say I have not installed gnome, although I'm using GNOME as the desktop environment? Kali Linux is off-topic on Ask Ubuntu (hence Aditya's kind edit). We can give you an Ubuntu-centric answer, but if you want a Kali-centric one please ask on [unix.se]. Thanks! This is just an issue of metapackages. The Debian world (and I believe the Red Hat one as well) has collected certain programs that are used together into easy-to-install metapackages.
So, the package gnome is actually a shortcut for installing all sorts of goodies: aisleriot, alacarte, avahi-daemon, cheese, cups-pk-helper, desktop-base, evolution, evolution-plugins, file-roller, gedit, gedit-plugins, gimp, gnome-applets, gnome-color-manager, gnome-core, gnome-documents, gnome-games, gnome-media, gnome-nettool, gnome-orca, gnome-shell-extensions, gnome-tweak-tool, gstreamer1.0-libav, gstreamer1.0-plugins-ugly, hamster-applet, inkscape, libgtk2-perl, libreoffice-calc, libreoffice-gnome, libreoffice-impress, libreoffice-writer, nautilus-sendto, network-manager-gnome, rhythmbox, rhythmbox-plugin-cdrecorder, rhythmbox-plugins, rygel-playbin, rygel-preferences, rygel-tracker, seahorse, shotwell, simple-scan, sound-juicer, telepathy-gabble, telepathy-rakia, telepathy-salut, tomboy, totem, totem-plugins, tracker-gui, transmission-gtk, vinagre, xdg-user-dirs-gtk, browser-plugin-gnash, gdebi, nautilus-sendto-empathy, telepathy-idle, dia-gnome, gnome-boxes, gnucash, libreoffice-evolution, planner This is the full Gnome desktop and is not needed to run the Gnome desktop environment. So, while you have gnome-shell installed, you don't have all the associated applications like games and email client etc that come with the full desktop environment. This is not a problem and it does not hinder you from using Gnome in any way. 
gnome-core is also a metapackage; it will install the official "core" modules of the Gnome desktop: at-spi2-core, baobab, brasero, caribou, caribou-antler, dconf-gsettings-backend, dconf-tools, empathy, eog, evince, evolution-data-server, firefox, fonts-cantarell, gconf2, gdm, gkbd-capplet, glib-networking, gnome-backgrounds, gnome-bluetooth, gnome-calculator, gnome-contacts, gnome-control-center, gnome-dictionary, gnome-disk-utility, gnome-font-viewer, gnome-icon-theme, gnome-icon-theme-extras, gnome-icon-theme-symbolic, gnome-keyring, gnome-menus, gnome-online-accounts, gnome-packagekit, gnome-panel, gnome-power-manager, gnome-screensaver, gnome-screenshot, gnome-session, gnome-settings-daemon, gnome-shell, gnome-sushi, gnome-system-log, gnome-system-monitor, gnome-terminal, gnome-themes-standard, gnome-user-guide, gnome-user-share, gsettings-desktop-schemas, gstreamer1.0-plugins-base, gstreamer1.0-plugins-good, gstreamer1.0-pulseaudio, gtk2-engines, gucharmap, gvfs-backends, gvfs-bin, libatk-adaptor, libcanberra-pulse, libcaribou-gtk-module, libcaribou-gtk3-module, libgtk-3-common, libpam-gnome-keyring, metacity, mousetweaks, nautilus, notification-daemon, pulseaudio, sound-theme-freedesktop, tracker-gui, vino, yelp, zenity, network-manager-gnome. Note that the gnome metapackage also installs the gnome-core metapackage. In any case, the main point here is that metapackages are not needed. You can install each of their component packages manually, so lacking one or more metapackages does not imply that anything is actually missing from your system.
STACK_EXCHANGE
Now comes into play the step of adding users so they are allowed RDP access to the server. If you see the error "To log on to this remote computer, you must be granted the Allow log on through Terminal Services right", the account you are connecting with has not been granted remote logon rights on the target machine.

First thing to do is check the Remote Desktop Users group on the target server. By default, members of the local Remote Desktop Users group are given remote logon rights, so adding the account there is usually enough. This remote access is controlled by the "Allow log on through Remote Desktop Services" user right. User Rights, as their name suggests, control who is authorized to log on; they live under Computer Configuration -> Windows Settings -> Security Settings -> Local Policies -> User Rights Assignment.

Things to watch out for:
- Be sure to check the Group Policy settings on the remote box. If a domain GPO defines "Allow log on through Remote Desktop Services" and does not include the Remote Desktop Users group, local group membership will not help. When the setting comes from a domain GPO, it also appears greyed out in the local policy editor; edit it in the GPO instead.
- Is the server also a domain controller? There are no local groups on a domain controller, so the user (or a security group containing the user) must be granted the right through Group Policy. Rights can only be assigned to security groups; a distribution group will not work.
- If the server is connected to for administrative purposes only, you don't have to install the Remote Desktop Session Host (aka Terminal Server) role. Note that Remote Desktop Licensing (RD Licensing) - formerly Terminal Services Licensing (TS Licensing) - is a separate role service in the Remote Desktop Services server role included with Windows Server 2008 R2, used to install and issue licenses.
- Microsoft does not recommend changing the port assigned to RDP, though there could be legitimate reasons for reassigning the default RDP port.
- To verify the RDP-Tcp listener itself, grab the Process ID (PID) number of the listener from netstat and run Tasklist while grep'ing for that PID (2252 in the original example). If a client can reach the listener, then the connection is successful; if not, check the event logs and the RDP-Tcp listener properties.

Several replies confirmed the fix: adding "Remote Desktop Users" (or the individual user) to the "Allow log on through Remote Desktop Services" policy resolved the error, and on servers that are also domain controllers the change had to be made via GPO rather than local group membership. Thanks so much for this - this problem was killing me.
OPCFW_CODE
The email routing is not working; mails are diverted to the catch-all address. It was working for a month or so, but all the emails are now delivered to the catch-all address. I even tried changing the destination address, and deleting and re-adding the custom address, but it's all going to the catch-all address. Please take a look. Seconded. I've also noticed this with my domains. All routes, workers and drop rules get skipped when catch-all is enabled and they just go to the catch-all address only. This issue is happening for us as well. The expected behavior (which was working fine until this morning) is that only incoming emails which do not match any route should be forwarded to the catch-all email address. But right now, all routes are being skipped and all incoming emails are going to the catch-all address. Disabling catch-all fixed it for now. This same issue is happening for me too. The catch-all address gets all of the emails, even when the destination address is a custom email address for which there is a route configured and active. This was working OK until some time yesterday (Friday 11th August about 11am UTC+1). I had not touched the config at all and it just stopped working as expected. To illustrate the problem in a little more detail… My Cloudflare account is managing my domain, example com. I am using Cloudflare for DNS for example com, including DNSSEC. I see (under routing / settings) a green lozenge saying 'Email DNS records configured'. I have been using Cloudflare's Email Routing service for this domain for the last month or so with no problems until yesterday. Looking in routing / routes, I have a custom email address configured in Email Routing, let's say this is custom1 example com, and it is configured with the action 'Send to an email' and the destination is myname1 mailprovider1 com. The status of this item in the list of custom email addresses shows a green tick in a slider and the word 'Active'.
I have a catch-all address configured too, which is configured to 'Send to an email: myname2 mailprovider2 com'. When an email is sent (from some other email system) to custom1@example <dot> com, that email is not routed to myname1@mailprovider1 <dot> com but is instead routed to myname2@mailprovider2 <dot> com. myname1@mailprovider1 <dot> com and myname2@mailprovider2 <dot> com are working email addresses that each receive email OK, and neither of them forwards to the other. Note that I have deliberately used intended recipient email addresses on different mail providers so that I can be absolutely sure that the misdirection is happening in Cloudflare's Email Routing system and not in the intended recipient email system(s).

I don't know if this is a related issue or not, but from the dashboard, when looking at the 'Activity Log' in routing / overview, I can see that Cloudflare always shows the recipient email address in the 'Custom Address' field, even when the recipient email address is NOT an email address that appears under 'Routing' as a custom email address. Perhaps the Activity Log always worked that way, but it is surprising that 'Custom address' as the heading of the 3rd column in the Activity Log does not refer to the same thing as 'Custom address' in the filter drop-downs at the head of the Activity Log. E.g. I sent an email to newaddress@example <dot> com, where newaddress is not in the list of email addresses for which a route is configured. In the Activity Log I see newaddress@example <dot> com in the column 'Custom address', but I do not see it in the drop-down list called 'Custom address' which can be used to filter the Activity Log; that drop-down includes - as might be expected - only the addresses on example com for which a 'Custom address' appears in the routing list.
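To make the expected matching order described in this thread concrete, here is a small sketch (hypothetical function and field names, not Cloudflare's actual code): an active route for the recipient should win, and the catch-all should only receive mail that matches no route.

```python
def route_email(recipient, routes, catch_all=None):
    """Expected behavior: an active matching route wins; the catch-all
    only receives mail that matches no route.
    (Illustrative sketch only -- not Cloudflare's implementation.)"""
    for custom_address, destination, active in routes:
        if active and recipient.lower() == custom_address.lower():
            return destination          # a matching route takes priority
    return catch_all                    # only unmatched mail falls through

routes = [("custom1@example.com", "myname1@mailprovider1.com", True)]
# Routed to the custom destination, not the catch-all:
print(route_email("custom1@example.com", routes, "myname2@mailprovider2.com"))
# No route configured, so this one goes to the catch-all:
print(route_email("newaddress@example.com", routes, "myname2@mailprovider2.com"))
```

The bug everyone above is reporting is that the first call behaves like the second: every message falls through to the catch-all.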
And just to say that disabling the catch-all kinda solves the problem - but not having a functional catch-all means that I have just had to create >50 routes so that the email addresses I have given out assuming that I had a catch-all will work. I hope I haven't missed any! The catch-all was the feature that spurred me to use Cloudflare's service. My previous provider was just dropping support for catch-alls, and most others don't seem to support it.

Sorry that I keep replying to myself, but I have another domain with Cloudflare which doesn't exhibit this behaviour. This other domain doesn't use DNSSEC, although I have no reason to think that difference is significant.

I don't have DNSSEC enabled but routing is still failing for me. Most probably they have multiple nodes for routing, of which some are failing.

Yes, I also encountered the same problem, which is very frustrating. Is it a problem with Cloudflare or something? How should we solve it? Not all domain names encounter this problem, so what's going on?

Thanks for the reports! The team are working on a fix. You can follow the progress here: Cloudflare Status - Cloudflare Email Routing Issues

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.
import Foundation
import ObjectMapper

public enum UpdateType {
    case Pending
    case Sent
}

// Represents a single update
public class UpdateDAO: Mappable, CustomStringConvertible {
    public var clientId: String?
    public var createdAt: NSDate?
    public var day: String?
    public var dueAt: NSDate?
    public var id: String?
    public var profileId: String?
    public var profileService: String?
    public var scheduledAt: NSDate?
    public var sentAt: NSDate?
    public var serviceLink: String?
    public var serviceUpdateId: String?
    public var sharedNow: Bool?
    public var status: String?
    public var text: String?
    public var type: String?
    public var updatedAt: NSDate?
    public var via: String?
    public var retweetText: String?
    public var retweetUsername: String?
    public var updateType: UpdateType?

//    public class func newInstance(map: Map) -> Mappable? {
//        return UpdateDAO()
//    }

    required public init?(_ map: Map) {
    }

    public func mapping(map: Map) {
        id              <- map["id"]
        clientId        <- map["client_id"]
        createdAt       <- (map["created_at"], DateTransform())
        day             <- map["day"]
        dueAt           <- (map["due_at"], DateTransform())
        profileId       <- map["profile_id"]
        profileService  <- map["profile_service"]
        scheduledAt     <- (map["scheduled_at"], DateTransform())
        sentAt          <- (map["sent_at"], DateTransform())
        serviceLink     <- map["service_link"]
        serviceUpdateId <- map["service_update_id"]
        sharedNow       <- map["shared_now"]
        status          <- map["status"]
        text            <- map["text"]
        type            <- map["type"]
        updatedAt       <- (map["updated_at"], DateTransform())
        via             <- map["via"]
        retweetText     <- map["retweet.text"]
        retweetUsername <- map["retweet.username"]
    }

    // MARK: CustomStringConvertible
    public var description: String {
        // Avoid force-unwrapping optionals here: a missing field would crash.
        return "[\(id ?? "?")] \(profileService ?? "?"): \(type ?? "?")"
    }
}
It has been a while since Microsoft rolled Office 365 Groups out to general availability, and it caused a lot of confusion in many segments. First, was it a Yammer killer? Second, was it for file collaboration or email communication? Third, was it something to control or something to let loose?

Are O365 Groups a Yammer Killer?
Darn good question, and one that there isn't an "official" answer to that I have heard. If we look at Microsoft's history of acquiring critical functionality like FAST Search, then Yammer is destined to be a part of O365/SharePoint. Is that Groups? Maybe. There is a lot of similarity. They allow for conversations, file storage, calendaring, and note taking. Both allow external users to participate and can be public or private. O365 Groups, though, seem to be geared toward small group collaboration and communication, whereas Yammer's play is Enterprise Social. Yes, the features are similar, but the goals are different. Perhaps Microsoft will make O365 Groups embeddable in any team site, or extend them so they can be used to socialize other systems. That isn't today. There is also the lack of a general company group where everyone can converse…and of the ability to track topics, which is critical to an Enterprise Social Network.

Verdict: Not today, maybe not ever

Are O365 Groups about File Collaboration or Email Communication?
Yes. Seriously, it is about both. For too long, email and files have lived in different worlds, with two completely separate products to serve them. Neither was optimal for the way that people actually work. We communicate constantly…we have to to get our work done. But we communicate about files because that is the product of our work. If we stay in Exchange, we are sending emails back and forth; if we stay in SharePoint, we end up either assigning tasks that generate emails, or emailing links. Neither of those is the right solution.
Groups allow us to marry the two together: a dedicated communication channel topical to the group, along with dedicated file collaboration topical to the group. Is it perfect? Not yet, but it's getting there.

Verdict: First real step to convergence of communication and collaboration

Do I strictly control O365 Groups, or let them grow organically?
When I talk to customers about governance, I warn of the dangers of organic, uncontrolled growth. This is a real problem in SharePoint. But SharePoint is a structured environment that we need to control. Groups fall into the ad hoc, or even personal, collaboration level of governance. Without free and easy use by all users, they will not flourish or benefit the company. The very promise of an O365 Group is that anyone can create one with a couple of mouse clicks and start working immediately. The end user is empowered to solve their own problems. As near as I can tell, the only way to prevent users from creating groups is to not allow their email into the cloud. I have customers ask if we can prevent users from creating a group in Yammer, or an O365 Group, and the answer is "No, and you don't want to". You don't prevent a user from creating a folder in their email…or from creating a personal distribution group, and you don't want to restrict the use of O365 Groups. However, you do need to think about a plan for getting rid of groups that are stale and no longer needed.

Verdict: Peace, Love, and O365 Groups
Refactor Houdini deadline submission

Hello, I'm sorry if this comes out as a bit harsh, but I think the approach this PR is taking to support caching on the farm is wrong and over-engineered. First of all, caching on the farm (and rendering or any other Houdini processes) is already supported by third-party toolsets (Deadline, HQueue, Tractor...) in WAY more powerful ways than this PR tries to accomplish and than the OP plugin framework can manage. This is duplicating all of that logic in OP and adding 1,398 more lines to the already super complex code base!! Most Houdini TDs are already familiar with those vanilla workflows, and having them learn this other "black box" approach through OP is backwards and doesn't add any benefit in my opinion. You can see an example of a very normal submission to the farm here https://github.com/ynput/OpenPype/pull/5621#issuecomment-1732166830

OpenPype shouldn't try to orchestrate the extract/render dependencies of the Houdini node graph; that's already done by these schedulers/submitters. We just need means to run OP publish tasks on the generated outputs, without any gimmicks: just take a path, a family, and a few other data inputs, and register it to the DB so it runs the other integrate plugins of OP, like publishing to SG/ftrack as well (and ideally the API for doing that in OP should be super straightforward to call from anywhere! The current json file input isn't the best access point to that functionality). If we wanted to help the existing vanilla submission, OP could provide a wrapper of the vanilla submitters so it sets some reasonable defaults, and we could intercept the submission to run some pre-validations on the graph... set some parms that might be driven by the OP settings, or create utility HDAs to facilitate the creation of the submitted graph so frame dependencies are set correctly, along with chunk sizes for simulations...
but that's it, we don't need to reinvent the wheel by interpreting how the graph needs to be evaluated. On the other hand, I still don't quite get why the existing submit_publish_job is limited to "render" families and why it's not abstracted into a simple reusable module that any plugin that needs to submit to Deadline can reuse. This PR showed how a lot of the code had to be duplicated again, with most of the lines exactly the same, doubling the technical debt. This PR https://github.com/ynput/OpenPype/pull/5451/files goes in the right direction in abstracting some of those things, but the right approach should be to remove all of the render-specific noise from submit publish job and use the same util module every time we just need to run an OP publish task on the farm. However, as I said initially, I don't think we should even take this approach for Houdini; we should just leverage the existing farm submitters code. But this is relevant for any other tasks that we choose to submit to the farm - are we going to have to write a "submit_<insert_family_type>_job" set of plugins for each?

Extra notes: I feel like I already mentioned this elsewhere in another PR, but have you gotten anyone requesting to publish IFDs? Are there any use cases to load those after the fact? AFAIK those aren't really consumable by anything other than Houdini, and only as a pre-process to then run the render; it's not like ASS files that can be used as an interchange format from the Arnold plugins.

Originally posted by @fabiaserra in https://github.com/ynput/OpenPype/issues/4903#issuecomment-1738343017 [cuID:OP-7455]

As far as I'm able to tell, the issue description mentions refactoring the Houdini deadline submission. I did some searching on this topic and added it on the forums: Houdini Future - Vanilla Deadline ROP. I'll just quote it here:

Vanilla Deadline ROP
In order to use the vanilla deadline ROP node, we need to figure out a way that only performs the publishing.
@fabiaserra's solution was to implement ax_publisher_submitter, which creates a json file and submits an AYON publish job (in compliance with our deadline addon). Also, he overrode the vanilla rop nodes to add AYON parameters, similar to https://github.com/ynput/ayon-houdini/pull/2. So, the node network actually looks more like this. More info about it in his demo on GitHub.

Also, I did a quick search about it, and I think such an idea requires extending the Deadline ROP itself, which I have no idea how to achieve. It seems that the deadline rop supports specific standalone plugins, and I couldn't find a way to extend its functionality to support AYON. I wish there were some exposed callback scripts to extend it; I still need to continue my search. Here are some screenshots with placebo parameters (a non-functional 'prototype'). I guess this issue should come after https://github.com/ynput/ayon-houdini/pull/2. As I recall from previous discussion, we should allow setting priorities on which rop nodes are rendered, and also allow both local and farm rendering.

> It seems that the deadline rop supports specific standalone plugins, and I couldn't find a way to extend its functionality to support AYON. I wish there were some exposed callback scripts to extend it; I still need to continue my search. Here are some screenshots with placebo parameters (a non-functional 'prototype').

AYON is already supported by Deadline, this is literally the AYON plugin for Deadline https://github.com/ynput/ayon-deadline/tree/develop/client/ayon_deadline/repository/custom/plugins/Ayon

As for customizing Houdini's Deadline integration (i.e. to add env vars so the AYON environment gets injected), it's also pretty easy: you just need to provide your customizations under the custom/submission/Houdini/Main folder.
You can also modify the Client code (if you want to do any tweaks to the HDA parms), although that's strictly not necessary for what you want, as most of the logic behind the client code happens in the Main code. The main changes you need to make are in SubmitHoudiniToDeadlineFunctions.py: add these in the SubmitRenderJob function:

fileHandle.write("EnvironmentKeyValue0=HIP_=%s\n" % os.getenv("HIP"))
# We do this so the GlobalJobPreLoad.py from Deadline injects the OP environment
# to the job and it picks up all the correct environment variables
ayon_bundle_name = os.getenv("AYON_BUNDLE_NAME")
fileHandle.write(
    "EnvironmentKeyValue1=AYON_RENDER_JOB=1\n"
    if ayon_bundle_name
    else "EnvironmentKeyValue1=OPENPYPE_RENDER_JOB=1\n"
)
fileHandle.write(
    f"EnvironmentKeyValue2=AYON_PROJECT_NAME={os.getenv('AYON_PROJECT_NAME')}\n"
    if ayon_bundle_name
    else f"EnvironmentKeyValue2=AVALON_PROJECT={os.getenv('AVALON_PROJECT')}\n"
)
fileHandle.write(
    f"EnvironmentKeyValue3=AYON_FOLDER_PATH={os.getenv('AYON_FOLDER_PATH')}\n"
    if ayon_bundle_name
    else f"EnvironmentKeyValue3=AVALON_ASSET={os.getenv('AVALON_ASSET')}\n"
)
fileHandle.write(
    f"EnvironmentKeyValue4=AYON_TASK_NAME={os.getenv('AYON_TASK_NAME')}\n"
    if ayon_bundle_name
    else f"EnvironmentKeyValue4=AVALON_TASK={os.getenv('AVALON_TASK')}\n"
)
fileHandle.write(
    f"EnvironmentKeyValue5=AYON_APP_NAME={os.getenv('AYON_APP_NAME')}\n"
    if ayon_bundle_name
    else f"EnvironmentKeyValue5=AVALON_APP_NAME={os.getenv('AVALON_APP_NAME')}\n"
)
fileHandle.write(
    f"EnvironmentKeyValue6=AYON_BUNDLE_NAME={ayon_bundle_name}\n"
    if ayon_bundle_name
    else f"EnvironmentKeyValue6=OPENPYPE_VERSION={os.getenv('OPENPYPE_VERSION')}\n"
)

Note: the _HIP one is not necessary, but we use that as a trick to map it on the farm once the HIP changes to the temp file on the farm.

Following up the discussion, I think there are two ideas mentioned: make cache submission more like render submission.
This also includes supporting the multiple render targets option in the publisher UI. I think this can be a short term goal. The second idea: native Houdini deadline submission. Tagging @BigRoy

Hey @antirotor You marked this as "Needs Info". Please call out someone for the additional info you need 📢 🙂

@dee-ynput @antirotor As far as I can remember, this issue is related to two topics:
Refactor/merge submit_houdini_render_deadline.py and submit_houdini_cache_deadline.py and avoid duplicating code.
Support publishing via the native deadline HDA, which can be solved/partially solved by https://github.com/ynput/ayon-houdini/pull/122.

Thanks for the summary @MustafaJafar ✨ I'll mark this as blocked so that we can come back to it once all the discussions you've mentioned are resolved 👍
Take some numbered cards; between ten and twenty should be enough (you could use a suit from a pack of playing cards). Shuffle the cards. Then organise your deck of cards into numerical order. What method did you use to put them in numerical order? Can you think of any other ways you could have sorted them? Here are some different sorting algorithms you could try. You may find it easiest to lay the cards out in a line to keep track of their order and see what's happening at a glance. Compare the first two cards. If they are in the wrong order, swap them round. Then compare the second and third card. If they are in the wrong order, swap them round. Keep going through the pack. When you have finished, keep the cards in the new order and repeat the process from the beginning of the pack. Repeat until you get all the way through the pack without doing any swaps. The cards are now sorted. First pass: compare the first two cards. Swap them if necessary. Second pass: compare the second and third cards. Swap them if necessary. Then compare and swap the first two cards again if necessary. Third pass: compare the third and fourth cards. Swap them if necessary. Then compare and swap second and third cards if necessary. Then compare and swap first and second cards if necessary. Continue in the same way for the rest of the pack. Go through the pack until you find the Ace. Swap it with the first card. Then go through the pack until you find the Two. Swap it with the second card. Then go through the pack until you find the Three. Swap it with the third card. Repeat with the rest of the cards. Put the first card down. Take the second card, and put it to the right or to the left of the first card, depending on whether it's higher or lower. Then take the third card and place it correctly relative to the first two cards, making a space if necessary. Take the fourth card and place it correctly relative to the first three cards, making a space if necessary. 
Keep going until all the cards are in order in the new pile. Take the first card of the pile. Sort the rest of the pack into numbers smaller than and numbers bigger than the number on the first card. Put the first card down between these two sub-packs. Then sort each sub-pack of cards by taking the top one and sorting the rest into two sub-packs in the same way. Keep going until there are no sub-packs with two or more cards. Try each algorithm a few times, and keep a record of how many 'moves' or 'swaps' you do. You could work with a friend and 'race' against each other to see who sorts their pack the quickest. If you are struggling to make sense of the written algorithms, here are some videos showing each algorithm being performed. Here are some questions to consider:
- On average, which algorithm did you find to be quickest?
- What is the 'worst-case scenario' for each algorithm?
- How long would it take in the worst case?
- If you know a little about computer programming, think about how you might instruct a computer to perform these algorithms, and how long the computer would take to perform a sort of n objects using each algorithm.

Notes and Background
One way to get a sense of how sorting algorithms work is to watch animations, such as the ones on this website. Thanks to Margaret for testing this problem.
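For anyone tackling the programming question above, here is one way the first algorithm (repeatedly comparing neighbouring cards and swapping them, known as bubble sort) might be written in Python, counting the swaps as it goes. This is just a sketch; the other algorithms can be written in a similar style.

```python
def bubble_sort(cards):
    """Sort a list of card values, counting how many swaps were needed."""
    cards = list(cards)          # work on a copy of the pack
    swaps = 0
    swapped = True
    while swapped:               # keep passing through the pack...
        swapped = False
        for i in range(len(cards) - 1):
            if cards[i] > cards[i + 1]:   # wrong order: swap them round
                cards[i], cards[i + 1] = cards[i + 1], cards[i]
                swaps += 1
                swapped = True
    return cards, swaps          # ...until a pass needs no swaps at all

# A shuffled "suit" of ten cards:
deck = [7, 2, 9, 1, 5, 10, 3, 8, 4, 6]
sorted_deck, swap_count = bubble_sort(deck)
print(sorted_deck, swap_count)
```

In the worst case (a pack in exactly reverse order) every pair of cards must be swapped, which is why this method becomes slow for large packs.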
Explicitly request new instances from the Aurelia dependency injection container

In Aurelia (+ TypeScript)... Is there a way to directly reference the container in context (e.g. in a view model) and explicitly "request" new instances from it?

Someone didn't bother to read this: https://stackoverflow.com/help/privileges/vote-down

Does this answer your question? https://stackoverflow.com/questions/45219953/create-a-new-instance-of-a-class-that-is-using-dependency-injection-in-aurelia

Nick, thanks for replying. I was actually looking for ways to get a reference to the scoped Container, and to sort of document its API for resolving new instances here.

To get the container in your view model you generally have 3 options:

1. Inject the container
Decorate your view model with @inject(Container), or simply apply any decorator (which makes tsc emit type metadata) and make sure the type is specified in your constructor, like so:
constructor(private container: Container) {}
This is the recommended way to get a container, as it will give you the child container scoped to that particular view model. Meaning if you request things like Element or Router, you'll also get the ones scoped to that view model. Things you register to that container are only resolvable through that container or its children - not its siblings or parents.

2. Use the global container
There is always one "root" container, which you can access anywhere in your code via the Container.instance static property. This can be useful for some components that kind of live outside the normal aurelia lifecycle, or if you really need the root. You'll want to avoid this when you can, though, as it leads to spaghetti code.

3. "Abuse" the router
I wouldn't necessarily recommend this, but there's always a .container property on every configured router. This is the scoped child container - the same one you'd get if you injected it in the constructor of your view model.
Once you have a reference to the container: call container.get(Foo) to get an instance of Foo from only that container, or call container.getAll(Foo) to get a list of all Foos from that container and all of its parents, up to the root. For constructors, it defaults to calling the constructor and recursively resolving its dependencies, if it has any. It then stores the instance as a singleton. For anything that's not a constructor (except null and undefined), it defaults to storing the value and returning it whenever you call it with the same value again (not particularly useful, but at least no error). For null or undefined it will throw an error.

Lifetime and scope
There are two lifetime registration types:
singleton gives the same instance for the lifetime of the container
transient gives a new instance each time you call the container
The lifetime of a singleton is further determined by the lifetime of the container it's registered to, which, in the case of a typical child container, is the lifetime of the view model. Other parts of the API surface are essentially just differently-scoped variants of either singleton or transient.

Setting the registration type
There are many options and I won't go into all of them here. The one that's relevant for you is the direct container API:
container.register...(key, fn)
After that, when you call container.get(key) it will resolve the dependency according to the registration you just set. You can change this after the fact as well - it will just overwrite the existing resolver.
singleton: container.registerSingleton(Foo)
instance (singleton but you provide the instance): container.registerInstance(Foo, new Foo(new Bar()))
transient: container.registerTransient(Foo)
custom function: container.registerHandler(Foo, (container, key, resolver) => new Foo(container.get(Bar))) (for a transient Foo with a singleton Bar)
There are other options, but these are the ones most commonly used.
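To make the singleton/transient distinction concrete, here is a toy container sketch in TypeScript. This is NOT Aurelia's implementation (its real Container has resolvers, child containers, and much more); it only illustrates the two lifetime semantics described above.

```typescript
// Toy container to illustrate singleton vs transient lifetimes.
// Not Aurelia's implementation -- just an illustrative sketch.
type Factory = () => unknown;

class TinyContainer {
  private resolvers = new Map<unknown, Factory>();

  registerSingleton(key: unknown, factory: Factory): void {
    let instance: unknown;
    let created = false;
    this.resolvers.set(key, () => {
      if (!created) {          // build once, then reuse for the container's lifetime
        instance = factory();
        created = true;
      }
      return instance;
    });
  }

  registerTransient(key: unknown, factory: Factory): void {
    this.resolvers.set(key, factory); // fresh instance on every get()
  }

  get<T>(key: unknown): T {
    const resolver = this.resolvers.get(key);
    if (!resolver) throw new Error(`Nothing registered for ${String(key)}`);
    return resolver() as T;
  }
}

class Foo {}

const container = new TinyContainer();
container.registerSingleton("fooSingleton", () => new Foo());
container.registerTransient("fooTransient", () => new Foo());

// Same instance every time for the singleton...
console.log(container.get("fooSingleton") === container.get("fooSingleton")); // true
// ...but a brand new instance on each call for the transient.
console.log(container.get("fooTransient") === container.get("fooTransient")); // false
```

In real Aurelia code you would call container.registerSingleton(Foo) / container.registerTransient(Foo) on the injected container instead, but the resolution behavior is the same idea.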
Last note about the key, fn arguments: calling register(Foo) is equivalent to calling register(Foo, Foo). You could also say register("foo", Foo) if you don't want, or don't have, a reference to the class name from where you want to call it. Being able to open the debugger and say document.body.aurelia.container.get("foo") is something I personally find quite handy for debugging sometimes :)

Fred, I am not accomplishing anything in particular other than understanding how these APIs work and documenting them here. For the sake of completeness, and to make it clear for others, would you please edit your answer to emphasize the registration and resolution APIs available through a Container instance (with sample code) so I can accept your answer?

Most of it is described here: http://aurelia.io/docs/fundamentals/dependency-injection#how-aurelia-uses-containers

Fair enough. I edited it with that goal in mind, but I don't think this is the right place to go over the whole API, so I scoped it (no pun intended) to your question of directly calling the container. If the docs are not clear enough about this then perhaps I should go and edit those.
Linux lets you customize almost everything to your requirements, but users often overlook customizing GRUB. There are good reasons to change things like the background image, font color, and other visual elements: it can make it easier to spot the different boot options available. Now the question is, how do you customize it? This post will cover the installation of Grub Customizer on Ubuntu 22.04:
- Top Uses of Grub Customizer
- Install Grub Customizer on Linux
- Launch Grub Customizer on Linux
- Remove Grub Customizer on Linux
Let's get started!

Top Uses of Grub Customizer
It is a tool that allows users to customize the GRUB bootloader's boot menu and the options shown when you start up your system. For example, see the image below. The above image is just a simple menu with a few options, but what if you change the background color (an image can also be set) or the look of the fonts? This can be done using Grub Customizer; its other uses include the following.
- Change the appearance of the boot menu, including the background image, font, and color scheme.
- Add or remove boot options, such as different kernels or operating systems, from the GRUB menu.
- Change the default timeout after which the default option is chosen from the GRUB menu automatically. The boot order can also be changed.
- Troubleshoot booting issues by changing, adding, or removing boot options.

Installing the Grub Customizer on Linux
Due to a bug on Ubuntu 22.04 and other recent distros, it was taken down from the official repositories, but it can be installed from the official Grub Customizer PPA. Installing it on the different Linux distros requires only a few commands. For Ubuntu 22.04, we'd recommend the following steps.
Adding the Personal Package Archive (PPA) Repository to the System
First, add the PPA repository that provides the package; press ENTER when prompted.
$ sudo add-apt-repository ppa:danielrichter2007/grub-customizer
After a few seconds, you can proceed to the installation step below.

Install Grub Customizer on Debian-Based Distros
Make sure you have added the PPA repository to your system; otherwise, you'd face this error. Now install the grub customizer with this command.
$ sudo apt install grub-customizer
The above image confirms that it is successfully installed on Ubuntu 22.04; this also works on other Debian-based systems, including Debian and Linux Mint. For other distributions, scroll down.

Install Grub Customizer on RedHat and Other Similar Distributions
For users of RedHat, Fedora, and CentOS, the grub customizer can be installed using this command; no PPA repository is needed.
$ sudo dnf install grub-customizer
Wait a while for the download and installation to finish before using it.

Installing Grub Customizer on Other Distros
If you are a user of Arch Linux or Manjaro, use this command; it doesn't require any PPA repository either.
$ sudo pacman -S grub-customizer
Once it's installed, let's launch it.

How to Launch Grub Customizer on Linux?
Once the grub customizer is installed, it can easily be launched from the Activities overview by typing "grub" in the search bar and looking for its icon. When you click the icon, you'll be asked for authorization, so enter your password and click the "Authenticate" button. You'll then be welcomed with a GUI full of settings we'll leave you to explore.

How to Remove Grub Customizer on Linux?
When you want to remove Grub Customizer on Linux, use one of these commands depending on your distro.
$ sudo apt remove grub-customizer      # For Ubuntu, Debian, and Linux Mint
$ sudo dnf remove grub-customizer      # For RedHat, Fedora, and CentOS
$ sudo pacman -R grub-customizer       # For Arch Linux and Manjaro

That's how the Grub Customizer is handled on Linux. Customizing the GRUB boot loader becomes easy when you use Grub Customizer; you can add life to your grub menu. This guide covered how to install it on Ubuntu 22.04 and other major distributions of Linux.
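If you manage machines across several distros, the per-distro commands above can be captured in a small script. Here is a hypothetical Python helper (my own sketch, not part of Grub Customizer) that maps a package manager to the removal command shown above:

```python
# Hypothetical helper mapping a package manager to the removal command
# for grub-customizer shown above. Sketch only -- adjust for your distro.
REMOVE_COMMANDS = {
    "apt": "sudo apt remove grub-customizer",     # Ubuntu, Debian, Mint
    "dnf": "sudo dnf remove grub-customizer",     # RedHat, Fedora, CentOS
    "pacman": "sudo pacman -R grub-customizer",   # Arch Linux, Manjaro
}

def removal_command(package_manager):
    """Return the removal command for a known package manager."""
    try:
        return REMOVE_COMMANDS[package_manager]
    except KeyError:
        raise ValueError(f"Unsupported package manager: {package_manager}")

print(removal_command("apt"))
```

The same table could just as easily hold the install commands from the previous sections.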
Make a compiled binary run at native speed flawlessly on another system, without recompiling from source?

I know that many people, at first glance, may immediately yell out "Java", but no, I know Java's qualities. Allow me to elaborate my question first. Normally, when we want our program to run at native speed on a system, whether it be Windows, Mac OS X, or Linux, we need to compile it from source code. If you want to run a program from another system on your system, you need to use a virtual machine or an emulator. While these tools allow you to use the program you need on the non-native OS, they sometimes have performance problems and glitches. We also have a newer kind of compiler called a "JIT compiler", which translates the bytecode program to native machine language before execution. Performance may improve to a very good extent with a JIT compiler, but it is still not the same as running on a native system. Another program, WINE on Linux, is also a good tool for running Windows programs on a Linux system. I have tried running Team Fortress 2 on it and experimented with some settings. I got ~40 fps on Windows at its mid-high settings at 1280 x 1024. On Linux, I need to turn everything low at 1280 x 1024 to get ~40 fps. There are 2 notable things though:

Polygon model settings do not seem to affect the framerate, whether I set them low or high.
When there are post-processing effects, or special effects that require manipulation of the drawn pixels of the current frame, the framerate drops to 10-20 fps.

From this point, I can see that normal polygon rendering is just fine, but when it comes to newer rendering methods that require the graphics card to do the job, it slows to a crawl. Anyway, this question is rather theoretical. Is there anything we can do at all? I see that WINE can run Steam and Team Fortress 2. Although there are flaws, they can run at lower settings.
Or perhaps, I should also ask, "is it possible to translate one whole program on a system to another system without recompiling from source and get native speed?" I see that we also have AOT Compiler, is it possible to use it for something like this? Or there are so many constraints (such as DirectX call or differences in software architecture) that make it impossible to have a flawless and not native to the system program that runs at native speed? There's really no way I would have thought "Java" upon reading the words "native speed"... The first step to running the same compiled body of code on multiple systems at native speed without recompiling is to choose one processor instruction set and throw out all other systems. If you pick Intel, then you must throw out ARM, MIPS, PowerPC, and so forth because the native machine code instructions for one architecture are completely unintelligible to other processors. Ok. So now the task is to run the same body of compiled native code on multiple systems (all using the same processor architecture) at native speed without recompiling. So basically, you want to run the same code under different operating systems on the same hardware. If the hardware is the same and the only difference is the operating system, then the trivial answer is yes, you can do it if you can write your code without making any calls to the operating system. No memory allocation. No console output. No file I/O. No network I/O. No fun. Furthermore, your code will have to be written in such a way that the code does not require address relocation fixups, since each operating system has different ways to represent relocatable code. One way to do that is to arrange your code on disk exactly as it would appear in memory, including reserving space to use for writable data (global variables, stack, and heap). Then all you have to do to run the code is copy the file bytes into memory at a predefined base address, and jump to the starting address. 
The MSDOS .com executable file format has been doing this since at least 1981, and CP/M for long before that. However, MSDOS didn't have today's virus scanners to contend with back then. Virus scanners get very excited when anyone other than the host OS loads file data into memory and attempts to execute that memory. Because, ya know, that's exactly what viruses do. Since each OS has its own executable file format, you'll also need to figure out how to get your block of "flawless" native code into memory on all these different operating systems. You will need at a minimum one program loader compiled for each operating system you want to run your block of native code in. While you're writing a program loader for each OS you want to target, you could also define your own file I/O functions that map to the OS native equivalents so that your block of native code can do file I/O on any system. Ditto for console I/O or graphics output. Oh wait - that's exactly what WINE does. That's also why the frame rates you see in WINE are so much lower than the same operations in the host OS - WINE is translating Win32 GDI graphics calls into something provided by the native host OS (Linux -> XWindows), and where there isn't an exact function match or where there is an operation semantic mismatch (which is frequently the case), WINE has to implement all the functionality itself, sometimes at great cost. But given the ubiquity of standardized hardware like IDE drives, USB devices, and BIOS functions, maybe you don't need to go to all the trouble of mapping your own portable APIs onto whatever the OS has built in. Just write a little code to do file I/O to IDE devices, do graphics output using VESA BIOS functions. If you abstract the code a little bit, you can support multiple kinds of hardware and pick the appropriate function pointer to use based on what hardware you find at runtime. 
Then you could truly run your block of native code on any system (using one particular processor architecture) at native speed without recompiling. Oh wait - you just wrote your own OS. ;> +1 for "Virus scanners get very excited when anyone other than the host OS loads file data into memory and attempts to execute that memory." Oh yeah, and for a very clear, thoughtful explanation. Yes, it is technically possible to translate a binary executable program written for one processor architecture and operating system into a binary executable program that will run on another processor and operating system. It's also an unholy amount of work. There is a problem with the "native code execution speed" terminology. You can compile a program to native code with optimizations disabled, and the resulting code will be native executable code running at "native code execution speed", but it will probably run slower than the same source code compiled with optimizations enabled. Both run at "native code execution speed", but they execute different quantities and qualities of machine code to achieve the same core algorithm. Machine instructions are much more primitive than higher-level source programming languages. When compiling source code into machine code, a lot of information is lost. Data types, for example, are usually reduced by a compiler down to a handful of machine primitives - pointer, integer, float. A string is a pointer to memory. A char is an integer. An object instance is a pointer. When you translate one machine instruction set into another machine instruction set, you are handicapped because you don't have as much information about the data as a source code compiler has. Compiling from source code, the compiler can see relationships and optimizations in the data that would be very difficult to discover just by looking at the machine code alone. 
Story time: Digital Equipment Corporation created a system called FX!32 that took native compiled Win32 Intel x86 executables, decompiled them, and translated the logic into native Alpha AXP processor instructions running Windows NT AXP. In this case, the OSes were at least cut from the same cloth, but one was 32 bit and the other was 64 bit, and at the machine code level they had radically different calling conventions. Nevertheless, it worked, and it worked remarkably well. Due to the differences in hardware, an Intel x86 app running on AXP could eventually run faster than the same app running on Intel hardware. (FX!32 used profiling to reoptimize the AXP code after the Intel app was run a few times, so performance usually started out pretty bad but improved each time you ran the app) However, even though everything was executing native AXP instructions, the FX!32 translated app never ran as fast as taking the source code and recompiling it specifically for the AXP instruction set. The FX!32 translated native AXP instruction stream was bulked up by the necessity to fully represent the semantics of the original Intel x86 instructions even if the (unseen) higher level algorithm didn't require all aspects of those semantics. When doing machine instruction to machine instruction translation, you can see/hear every note in the symphony but you may have trouble picking out which ones define the melody.
One of my colleagues from Business Operations texted me one morning and asked where she could get insights, understand some of the terminology, learn the difference between SQL and NoSQL, and decide which type of database to use. Instantly, I replied, "get it from me!" I was pretty confident that I could give her an answer, and I wanted to explain databases in a more interesting way. What is SQL? Structured Query Language (SQL) is a computer language for database management systems and data manipulation. SQL is used to perform insertion, updating, and deletion, and it allows us to access and modify data. The data is stored in a relational model, with rows and columns. Rows contain all of the information about one specific entry, and columns are the separate data points. What is NoSQL? NoSQL encompasses a wide range of database technologies that are designed to cater to the demands of modern apps. It stores a wide range of data types, each with a different data storage model. The main ones are document, graph, key-value and columnar. This explains the above picture. Apps such as Facebook, Twitter, search engines (web) and IoT applications generate huge amounts of data, both structured and unstructured. The best examples of unstructured data are photos and videos. Therefore, a different method is needed to store the data. NoSQL databases do not store data in a rows-and-columns (table) format. Differences between SQL and NoSQL There are a lot of websites we can search online that give us the differences, and I referred to this website. NoSQL databases are also known as schema-less databases. The above screenshot uses the term dynamic schema, which means the same thing: there is no fixed schema that locks in the same number of columns (fields) for every data entry. NoSQL databases allow records to have different numbers of columns when data is added. Another major difference is scalability: SQL uses vertical scaling and NoSQL uses horizontal scaling. 
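The fixed-schema vs dynamic-schema difference can be made concrete with a tiny sketch. The table name, columns, and sample records below are made up for illustration; the SQL side uses Python's built-in sqlite3, and the NoSQL side is a document-style store reduced to a plain list of dicts.

```python
import sqlite3

# SQL side: a fixed schema. Every row must fit the declared columns,
# and adding a new field means altering the table first.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, email TEXT)")
db.execute("INSERT INTO users VALUES (?, ?)", ("Alice", "a@example.com"))
db.execute("ALTER TABLE users ADD COLUMN phone TEXT")  # schema migration required

# NoSQL (document-style) side: a "dynamic schema". Each record can
# carry its own fields, so a new field needs no migration at all.
documents = [
    {"name": "Alice", "email": "a@example.com"},
    {"name": "Bob", "email": "b@example.com", "phone": "555-0100"},  # extra field
]
```

The second document simply carries a `phone` field the first one lacks, which is exactly what "different numbers of columns when data is added" means.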
Let's use a picture to explain scalability. Relational databases are designed to run on a single server in order to maintain the integrity of the table mappings and avoid the problems of distributed computing. Often, we will look into more RAM, more CPU and more HDD: ways to upsize our system by upgrading our hardware specification. This is scaling up, or vertical scaling. This process is expensive. NoSQL databases are non-relational, making it easy to scale out, or horizontally, meaning that they run on multiple servers that work together, each sharing part of the load. This can be done on inexpensive commodity hardware. Question: SQL or NoSQL? Referring to this article, the choice between SQL and NoSQL cannot be concluded from the differences between them alone, but from the project requirements. If your application has a fixed structure and does not need frequent modifications, SQL is a preferable database. Conversely, if you have applications where data is changing frequently and growing rapidly, as in Big Data analytics, NoSQL is the best option for you. And remember, SQL is not dead and will not simply be superseded by NoSQL or any other database technology. In short, it depends on the type of application, the project requirements, and the type of query results needed as well. Big data refers not just to the total amount of data generated and stored electronically (volume) but also to specific datasets that are large in both size and complexity, for which algorithms are required to extract useful information. Example sources include search engine data, healthcare data and real-time data. In my previous article, What is Big Data?, I shared that Big Data has 3 V's: - Volume of data. The amount of data from myriad sources. - Variety of data. The types of data: structured, semi-structured and unstructured. - Velocity of data. The speed at which Big Data is generated. 
Yes, based on all the above, we have covered 2 of the 3 V's: volume and variety. Velocity is how fast data is generated and processed. There are more V's out there, though, and some are relevant to Big Data's description. During my visit to Big Data World 2018 in Singapore, I realized that my understanding of Big Data was limited to volume and variety. In this blog, I am going to write more. Storing Big Data Unstructured data often cannot be stored in a normal RDBMS, and Big Data is frequently related to real-time data with real-time processing requirements. Hadoop Distributed File System (HDFS) It provides efficient and reliable storage for big data across many computers. It is one of the popular distributed file systems (DFS) and stores both unstructured and semi-structured data for data analysis. Big Data Analytics There are not many tools for NoSQL analytics on the market at the moment. One popular method for dealing with Big Data is MapReduce: divide the data into small chunks and process each of them individually. In other words, MapReduce spreads the required processing or queries over many computers (many processors). Big Data is not limited to search engines and healthcare; it can also be data from e-commerce websites where we want to perform targeted advertising and provide recommendation systems, which we often see on websites such as Amazon, Spotify or Netflix. Big Data Security Securing a network and the data it holds is a key issue; basic measures such as firewalls and encryption should be taken to safeguard networks against unauthorized access. Big Data and AI While the smart home has become a reality in recent years, the successful invention of smart vehicles that can drive themselves gives us great hope that one day the smart city can be realized. 
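The MapReduce idea mentioned above (split data into chunks, process each chunk independently, then merge the partial results) can be sketched with a toy word count. The chunks here are just two short strings standing in for data spread across many machines.

```python
from collections import Counter
from functools import reduce

def map_chunk(chunk: str) -> Counter:
    # The "map" step: each worker counts words in its own chunk.
    return Counter(chunk.split())

def merge(a: Counter, b: Counter) -> Counter:
    # The "reduce" step: partial results are merged into one total.
    return a + b

# Pretend these chunks live on different machines.
chunks = ["big data big", "data big deal"]
totals = reduce(merge, map(map_chunk, chunks))
print(totals["big"])  # 3
```

In a real cluster the `map_chunk` calls run in parallel on many processors; the merging is what lets the final answer look as if one machine had seen all the data.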
Countries such as Singapore, Korea and China, and European countries such as Ireland and the UK, are planning smart cities, using IoT and Big Data management techniques to develop them. I am looking forward to it. Reference: Dawn E. Holmes (2017), Big Data: A Very Short Introduction.
We spent two days last week working on solo projects at Hack Reactor. I built a visualization of literary influences, beginning with Thomas Pynchon -- who, aside from being totally amazing, a genius, and one of my personal role models, also has a lengthy section on Wikipedia full of writers who influenced him and writers he's influenced. Thomas Pynchon's circle of influence To visualize the relationships between Pynchon and the writers connected to him, I created an arc diagram in D3. Initially, I planned on using D3's built-in chord layout generator, but I fell in love with arc diagrams after seeing so many beautiful examples online (e.g., Bible Cross-References, Command Usage Arc Diagrams, The Shape of Song, among others). I went down a few rabbit holes, including creating a matrix dataset similar to those required by the chord layout and plotting the writers as nodes on a scale. Thankfully, there's this wonderful Les Misérables character co-occurrences arc diagram, also in D3, which set me on the right track. I formatted the influence data into an object with two arrays, nodes and links. Each node contains the name of a writer and a number indicating the writer's category (influencer or influencee), and each link contains a connection between Thomas Pynchon and another writer. All the influencers appear to the left of Pynchon, with an arc above the horizontal; all the influencees appear to the right, with an arc below the horizontal. The colors were generated by D3's category20b scale. Other circles of influence I also wanted to visualize the influences of the other writers. Initially, I added all their connections to Pynchon's graph. This worked for Vladimir Nabokov but failed for Emily Dickinson, who was influenced by a writer who appeared to the right of her (Ralph Waldo Emerson) -- resulting in a backwards connection. The solution was to create new graphs and layer them on top of Pynchon's. 
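The nodes-and-links shape described above can be sketched as plain data. The project itself is D3/JavaScript; this is written in Python only for brevity, and the exact field names, category codes, and the third writer (Don DeLillo) are my guesses, not taken from the project.

```python
# Category codes (assumed): 0 = influencer, 1 = influencee.
influence = {
    "nodes": [
        {"name": "Vladimir Nabokov", "group": 0},
        {"name": "Emily Dickinson",  "group": 0},
        {"name": "Don DeLillo",      "group": 1},  # hypothetical influencee
    ],
    "links": [
        # Each link connects Thomas Pynchon to one other writer.
        {"source": "Thomas Pynchon", "target": "Vladimir Nabokov"},
        {"source": "Thomas Pynchon", "target": "Emily Dickinson"},
        {"source": "Thomas Pynchon", "target": "Don DeLillo"},
    ],
}

# Influencers are drawn left of Pynchon (arc above the horizontal);
# influencees are drawn right of him (arc below).
left = [n["name"] for n in influence["nodes"] if n["group"] == 0]
right = [n["name"] for n in influence["nodes"] if n["group"] == 1]
```

Splitting the nodes by category like this is what makes the left/right placement and the above/below arcs a simple lookup rather than a layout computation.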
So, clicking on Nabokov's name (for instance) will fade out Pynchon's graph, increase the size of Nabokov's name, move his name and dot to the center of the page, draw a line from his name to the center dot, and display his influence graph around that new center. Mousing away from Nabokov's graph will delete his graph and bring back Pynchon's. Figuring out how to rearrange the new graph around the new center dot was probably the most difficult part of the entire project. I passed the center position to the new graph and gave the influencer dots a negative x-axis position so that they appeared to the left of the center dot. The last feature I added was a hover selection event. Mousing over a writer's name will fade out the entire diagram except for the name, dot, and arc belonging to that writer. This was the second most difficult part of the project, because of the preexisting fade effects associated with the click event. The entire graph fades out upon clicking a name, and the entire graph fades back in when the mouse leaves the new graph. I ended up initializing a boolean to keep everything in check: multipleGraphsOnPage starts off as false, becomes true when a new graph is created, and reverts to false when the new graph is deleted. The hover selection event is enabled only when multipleGraphsOnPage is false. My project completion strategy The purpose of the solo project was not to build a perfect product, but a minimum viable product. Hack Reactor instructed us to reduce the size of our scope in half, twice -- which I didn't even need to do, since I'd deliberately chosen a very small scope to begin with. The extra time afforded me the chance to polish the look and feel of my project, including the animations, and dive deeper into D3.
I had previously been running KDE 4.10.x from the 4.11 Factory repo. I have since switched over to the stable 4.11 repo. Two regressions I’ve found so far after going from Factory to stable. On my main desktop running the 325.15 NVIDIA driver, upon reboot there is massive tearing and it seems that vsync isn’t working at all at this point. The only way I have found to fix this is to toggle the OpenGL level (in my case I flip it from 3.1 to 2.0 and then back). This behavior wasn’t present with the Factory packages. For KDE vsync I use full screen repaints, as that is the only setting that eliminates tearing. Switching this setting has no effect. Only changing OpenGL levels seems to engage vsync. On my Asus Zenbook, the keyboard backlight controls that were working with the 4.11 Factory repo no longer work. There isn’t even a keyboard brightness slider in the taskbar power settings like there was in the Factory repo packages. Both of these seem like fairly serious regressions, given how smoothly they had been working until the final stable upgrade. I’m going to continue to play around and see what else I find. It seems that on the Asus Zenbook, if you log in then immediately log out and log in again, the keyboard brightness controls work just as they had previously. This holds true for my user account and the test user account I made to see if there was an old config file that might be messing with the settings. I’m not sure why it doesn’t load the first time, but it’s very annoying. Just tried it, I don’t seem to have that problem, using the packages from the KR411 repo. Like the OP I also have some backlight issues with KDE, but in my case I don’t think it’s a regression because I think I remember 4.10 doing the same thing (at least with the fn keys; the screen dim bug might be a regression). It’s quite annoying: both my fn keys and KDE’s brightness control work to change my brightness. 
But when I use my fn brightness keys, the brightness changes but KDE does not realize it changed and continues to show the old brightness. I also have an issue with the “screen dim” feature in the power settings. If I leave my laptop unplugged long enough for the screen dim to go into effect after 2 minutes, when I plug it back in it doesn’t return to 100% brightness like it is supposed to, but instead returns to ~50% brightness. KDE is the only DE I have these issues on; the fn keys work flawlessly in GNOME or XFCE. KDE’s brightness handling seems quite buggy. I have not had the issue where there is no brightness slider in the battery applet, though; it’s always there on first login for me.
The players in my campaign just hit third level and the Ranger is planning on taking the Beast Master path. She wanted a baby Owlbear as her companion, but of course there are 2 hurdles: there aren't official stats, and technically it's a Monstrosity, not a Beast. I'm okay w/ ignoring the second problem since it's a home game, and for these stats I compared the Wolf, Boar and Panther, hoping this is a reasonable approximation. My understanding is that the Beast Master path is generally considered to be underpowered, and I also wanted to emphasize some of the aspects of Owlbears (aggression and ferocity, for example); to that end there are the two Reactions/Statuses the cub can be in. While Training it'll follow commands as normal, and I suspect this might be a little strong at L3-4; but, kind of to counterbalance it, when it gets injured (Temper) it will temporarily forget training (with the Ranger having a chance to keep it under control) and it will stop behaving like a companion to murder whatever hurt it. I think this will still be generally helpful (it is trying to murder an enemy) but could be problematic if the enemy flees or tries to surrender. So yeah, does this seem balanced? Is it too strong? Is there some issue w/ Temper I haven't thought of that will make it a problem? Prettier formatting on Homebrewery NOTE: I have included the proficiency bonus of 2 added to: AC, Attack & Damage rolls. It's not proficient in any skills or saves. Maybe it should be? Tiny monstrosity, unaligned - Armor Class 15 - Hit Points 22 (2d10 + 2) - Speed 25 ft. STR 12 (+1) | DEX 16 (+3) | CON 13 (+1) | INT 3 (-4) | WIS 12 (+1) | CHA 4 (-3) - Senses darkvision 60 ft., passive Perception 11 - Languages — - Challenge 1/4 (50 XP) Keen Sight and Smell: The cub has advantage on Wisdom (Perception) checks that rely on sight or smell. Beak: Melee Weapon Attack: +5 to hit, reach 5 ft., one target. Hit: 1d6 + 5 piercing damage. 
Claws: Melee Weapon Attack: +5 to hit, reach 5 ft., one target. Hit: 1d4 + 5 slashing damage. Training: When the cub sees its master make an attack, the cub makes one beak attack against the same target. Temper: When reduced to 1/2 HP, the cub is likely to forget its training. Its master may make a DC 13 Animal Handling check as a reaction to maintain control; otherwise it forgets Training and relentlessly attacks the last creature to injure it until one of them is dead. While in Temper it acts before its master's turn and multi-attacks with both its Claws and Beak. It suffers a -2 Rage penalty to hit.
History: Developed in 1999 by a company called Rocksoft (now part of Quantum), the concept of variable-length blocks revolutionized the way data backups are performed. Most backup software nowadays has data deduplication technology built into its packages. Here are two good definitions of data deduplication: The term “data deduplication”, as it is used and implemented by Quantum Corporation, refers to a specific approach to data reduction built on a methodology that systematically substitutes reference pointers for redundant variable-length blocks (or data segments) in a specific dataset. The purpose of data deduplication is to increase the amount of information that can be stored on disk arrays and to increase the effective amount of data that can be transmitted over networks. When it is based on variable-length data segments, data deduplication has the capability of providing greater granularity than single-instance store technologies that identify and eliminate the need to store repeated instances of identical whole files. In fact, variable-length block data deduplication can be combined with file-based data reduction systems to increase their effectiveness. It is also compatible with established compression systems used to compact data being written to tape or to disk, and may be combined with compression at a solution level. Key elements of variable-length data deduplication were first described in a patent issued to Rocksoft, Ltd (now a part of Quantum Corporation) in 1999. (Read here for a complete white paper on it: http://www.gosignal.com/whitepapers/quantum1.pdf ) Excerpt from the book “Data Deduplication for Dummies”: …”Data deduplication is a really simple concept with very smart technology behind it. You only store the block once. If it shows up again, you store a pointer to the first one. That takes up less space than storing the whole thing again. 
When data deduplication is put into systems that you can actually use, however, there are several options for implementation. And before you pick an approach to use or a model to plug in, you need to look at your particular data needs to see whether data deduplication can help you. Factors to consider include the type of data, how much it changes, and what you want to do with it. So let’s look at how data deduplication works. Making the most of the building blocks of data Basically, data deduplication segments a stream of data into variable-length blocks and writes those blocks to disk. Along the way, it creates a digital signature – like a fingerprint – for each data segment and an index of the signatures it has seen. The index, which can be recreated from the stored data segments, lets the system know when it’s seeing a new block. When data deduplication software sees a duplicate block, it inserts a pointer to the original block in the dataset’s metadata (the information that describes the dataset) rather than storing the block again. If the same block shows up more than once, multiple pointers to it are created. Pointers are smaller than blocks, so you need less disk space. Data deduplication technology works best when it sees sets of data with lots of repeated segments. For most people, that is a perfect description of a backup. Whether you back up everything every day (and lots of us do this) or once a week with incremental backups in between, backup jobs by their nature send the same pieces of data to the storage system over and over again. Until data deduplication there wasn’t a good alternative to storing all the duplicates. Now there is. 
…” Example: Joe is really tall (this text document is stored on the hard drive). Now the creator opens the document and makes a change to: John is really tall. Now, see this graphical representation of fixed-length blocks vs variable-length blocks: In our example the only change to the file was in the first block “a”: instead of “Joe” it changed to “John”. Note that in the variable-length block image, the whole data segment shifts but only segment a is rewritten; the other data segments (b, c and d) remain unchanged (“is really tall”, where b is “is”, c is “really” and d is “tall”), so only a pointer is created for each of those three blocks instead of the whole data stream. If the backup software has data deduplication built in, that is how the data will be saved. That is data deduplication!
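The store-once-then-point mechanism described above can be sketched in a few lines. This is a deliberately simplified illustration: it uses fixed-size chunks and SHA-256 fingerprints, whereas real deduplication systems use variable-length, content-defined boundaries, as the article explains.

```python
import hashlib

def chunk(data: bytes, size: int = 4):
    # Toy fixed-size chunking. Real systems use variable-length
    # (content-defined) boundaries so an insert near the front of a file
    # doesn't shift and invalidate every block after it.
    return [data[i:i + size] for i in range(0, len(data), size)]

def dedup_store(blocks, store):
    """Store each unique block once; return the list of pointers."""
    pointers = []
    for b in blocks:
        fp = hashlib.sha256(b).hexdigest()  # the block's "fingerprint"
        if fp not in store:                 # new block: store it once
            store[fp] = b
        pointers.append(fp)                 # duplicates cost only a pointer
    return pointers

store = {}
p1 = dedup_store(chunk(b"Joe is really tall"), store)
p2 = dedup_store(chunk(b"Joe is really tall"), store)  # identical second backup
print(len(p1), len(p2), len(store))  # 5 5 5 -- the second backup stored nothing new
```

Two full backups produce ten pointers but only five stored blocks, which is exactly why deduplication pays off so well for backup workloads.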
Posts by Tammo Sminia This is the first of a series of posts, in which we will use machine learning to rate movies. For this task we're not going to watch all the movies; I assume it's good enough to just read the plot. We'll use Markov chains to rate the movies, and as an added bonus we can also generate new movie plots for awesome (or terrible) movies. In this first part we'll get the data and change it into a more usable format. We can use the data from IMDB, which is published on ftp://ftp.fu-berlin.de/pub/misc/movies/database/. Of interest are the plots and the ratings. Today I'll show how you can create a simple stub server with Drakov. If you do some frontend programming, you've probably already installed npm (Node Package Manager); otherwise, here is how you install that. Then with npm you can install Drakov. When you want to limit the amount of messages an actor gets, you can use the throttler in akka-contrib. This will let you limit the maximum transactions per second (tps). It will queue up the surplus. Here I'll describe another way: I'll reject all the surplus messages. This has the advantage that the requester knows it's sending too much and can act on that. Both methods have their advantages. And both have limits, since they still require resources to queue or reject the messages. In Akka we can create an actor that sends messages through to the target actor, or rejects them when they exceed the specified tps. Iterating over a map is slightly more complex than over other collections, because a Map is the combination of two collections: the keys and the values. Sometimes we have multiple Options and only want to do something when they're all set. In this example we have a property file with multiple configurations for one thing: a host and a port; we only want to use them if they're both set. We can use the Spray JSON parser for uses other than a REST API. We add spray-json to our dependencies. 
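The rejecting-throttler idea above can be sketched independently of Akka. The original posts are Scala; this is a minimal Python sketch of the same policy, using a simple fixed one-second window counter (a real Akka version would be an Actor that forwards or drops each message).

```python
import time

class RejectingThrottler:
    """Let at most `tps` messages through per one-second window; reject the rest."""

    def __init__(self, tps: int):
        self.tps = tps
        self.window_start = 0.0
        self.count = 0

    def offer(self, now=None) -> bool:
        now = time.monotonic() if now is None else now
        if now - self.window_start >= 1.0:  # start a fresh one-second window
            self.window_start = now
            self.count = 0
        if self.count < self.tps:
            self.count += 1
            return True   # forward the message to the target
        return False      # reject: the sender knows it's sending too much

throttler = RejectingThrottler(tps=2)
results = [throttler.offer(now=0.5) for _ in range(3)] + [throttler.offer(now=1.6)]
print(results)  # [True, True, False, True]
```

Returning False immediately is the whole point of this variant: the requester gets a clear back-pressure signal instead of its message silently waiting in a queue.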
Our build.gradle: Both the JVM and keytool have problems dealing with keystores without a password. If you try to get a listing of the keystore, it will think you didn't provide a password and output falsehoods: In a previous blog post we made an API with Spray. Now we're going to load test it. For this, we will use Gatling (http://gatling.io/#/). In a Scala class we can write exactly what and how we want to run the test. In this test, we will do a POST to our API and create a new robot called C3PO. We will do this 1000 times per second and keep doing this for 10 seconds, for a total of 10000 C3POs! RobotsLoadTest.scala: Suppose I have a List of things on which I want to do something that may fail. In this example I have a List of Strings that I want to turn into a List of Integers. In a previous blog I wrote how to make an API. See here. Now we'll make a client to use that API. This can be done with spray-client. First we add dependencies for spray-client and spray-json:
From Mobile PC Wiki What is a Tablet PC Microsoft set the standard for the term Tablet PC, so we will look at their definition, which has been evolving since 2002. As of January 2007, the software features and support that defined a Tablet PC are now incorporated into most versions of Windows Vista. The following definition, last updated in February 2005, is still generally applicable. Computers powered by the Windows XP Tablet PC Edition operating system, and equipped with a sensitive screen designed to interact with a complementary pen, are called Tablet PCs. Tablet PCs are fully-functional laptop PCs and more. You can use the pen directly on the screen just as you would a mouse to do things like select, drag, and open files; or in place of a keyboard to handwrite notes and communication. Unlike a touch screen, the Tablet PC screen only receives information from a special pen. It will not take information from your finger or your shirt sleeve—so you can rest your wrist on the screen and write naturally. By interacting directly with the screen, rather than with a mouse and keyboard, the PC becomes more comfortable and easy to use. There is no need to find a flat space on which to use your PC, nor does a vertical screen become a dividing wall between you and the person with whom you are meeting. What's more, a Tablet PC can even be used while standing up, which is perfect for professionals on the move such as doctors, foremen, and sales managers. Windows XP Tablet PC Edition 2005 includes all of the features and functionality of Windows XP Professional and Windows XP Service Pack 2. Tablet PCs do not run Windows CE or Windows XP Embedded. Along with the options typically provided by a conventional laptop, Tablet PCs are certain to include: see the complete article on Microsoft's site. Frequently Asked Questions - Please see Tablet PC/FAQ The most common question used to be "should I get a Tablet PC", but today the question is which one. 
Here is an article about what to consider: and a tool to help you decide Tablet PC Reading Here are some resources to help you to get your head around the idea of Tablet PCs. - Microsoft Windows XP Tablet PC Edition: An Overview Narrated Tablet PC Presentation - Developer Resources (Not a developer? You can probably skip these) - The Evolution of Tablet PC Technologies in Microsoft Windows Vista - Mobile PC / Vista Developers Overview - Resources for the Tablet PC Developer The MSDN site for Tablet PC Developers. - The Tablet PC Show See some Tablet PCs in action. Even though this is primarily for developers, you get to see people actually using Tablet PCs - What IS a Tablet PC - How To/Buy A Tablet PC - Getting Started: Choosing A Slate or Convertible - Getting Started: Active or Passive Digitizer - I just bought my Tablet PC What are all these processes? - How To/Demonstrate Your Tablet PC - If I knew then, what I know now - Sending a Student to College or University this Fall Does s/he NEED a Tablet PC? - Microsoft Tablet PC Portal - Windows XP Tablet PC Edition frequently asked questions - Practical Usage Scenarios for Your Tablet PC - Tablet PC: Transforming Education Community Sites (MVP Mobile PC) - Franks World Frank La Vigne - GottaBeMobile Rob Bushway, Dennis Rice, Warner Crocker, Matt Faulkner: News, InkShows, reviews, editorials, interviews - Medical Tablet PC Chris M. Wilkerson, D.C.: Healthcare related, Electronic Medical Record (EMR) software and hardware, information for your Tablet PC in a health care environment - Mobile PC World Terri Stratton - forum with Chris Hassler / WNewquay - Nice Creations Chris Hassler - The Student Tablet PC Trevor Claiborne, Tracy Hooten: Focus on Students - The Tablet PC Terri Stratton - Tablet PC Buzz WNewquay, Steve Seto - Moderators (Spencer Goad Founder) Forum - Tablet PC Corner Stephane Torres (HPClean) French web site! 
- Tablet PC Talk Chris de Herrera - Tablet PC2 Linda Epstein: Tablet PC Comparisons, News, Reviews - Tablet PC Post Lora Heiny: Software for the Tablet PC - www.tc-one-thousand.com Dr. Christopher James: The first and best site for the TC1000 and TC1100 - Uber Tablet Hugo Ortega - Ultra Mobile PC Tips Frank J Garcia - CTitanic - writepc.com Terri Stratton: Write PCs = Ultra-Mobile PCs - UMPCs Other MVP Sites - jkOnTheRun James Kendrick, Kevin C. Tofel - Law and Tablets T. Bishop - Life on the Wicked Stage: Act 2 Warner Crocker - Incremental Blogger Loren Heiny - Marc Orchant - Mobile PC Thoughts and Ideas WNewquay - Craig Pringle - Tablet PC Place Christopher James - What Is New Lora Heiny - Tablet PC Team Blog
Map Characteristic.TargetHeatingCoolingState to heat

Issue

The only valid values in Homebridge Hue for Characteristic.TargetHeatingCoolingState are 'off' and 'heat' (please note that 'cool' and 'auto' are also valid per the HomeKit specification, but they are disabled in Homebridge Hue). However, when changing from 'off' to 'heat', this is mapped as 'auto' for config.mode in the deCONZ REST API. Could this be mapped to 'heat'? Alternatively, could all values for Characteristic.TargetHeatingCoolingState be considered valid? For my thermostats (Elko Super TR), only 'off' and 'heat' are considered valid values for config.mode by the deCONZ REST API, leading to the HomeKit thermostat service becoming unresponsive when 'auto' is sent. According to issue 1032 the current behaviour looks to be a deliberate hack, but this inconsistency has undesired side effects.

Log Messages

[12/12/2021, 7:17:05 PM] [Hue] Ekraveien 18: request 7568: PUT /sensors/27/config {"mode":"auto"}
[12/12/2021, 7:17:05 PM] [Hue] Ekraveien 18: request 7568: api error 608: Could not set attribute

The only valid values in Homebridge Hue for Characteristic.TargetHeatingCoolingState are 'off' and 'heat' (please note that 'cool' and 'auto' are also valid per the HomeKit specification, but they are disabled in Homebridge Hue).

Correct, Homebridge Hue doesn't currently support cooling, and AUTO in HomeKit means: automatically switch between HEAT and COOL.

However, when changing from 'off' to 'heat', this is mapped as 'auto' for config.mode in the deCONZ REST API.

That's because heat for the Eurotronic Spirit means: maximum heat, and auto means temperature-based heat control.

For my thermostats (Elko Super TR), only 'off' and 'heat' are considered valid values for config.mode by the deCONZ REST API, leading to the HomeKit thermostat service becoming unresponsive when 'auto' is sent. According to issue 1032 the current behaviour looks to be a deliberate hack, but this inconsistency has undesired side effects.
The inconsistency is in the deCONZ API, probably introduced when adding support for the Elko. I'm happy to change Homebridge Hue, but I don't want to end up whitelisting each thermostat. This really needs to be addressed in deCONZ first.

The REST API is consistent here with regard to what the thermostat supports. It only supports off and heat.

The REST API is inconsistent in that heat means something different for the Eurotronic vs the Elko.

A bit on the side, but still a relevant question: is there good documentation of how different sensor types should be exposed in the REST API? I think this basically boils down to expectations of what values different keys can have... I think I found it here, where 'heat' is a valid value for ZHAThermostat. Given the disclaimer "Supported modes are device dependent", I don't think it's valid to claim the REST API is inconsistent. But I think mapping Characteristic.TargetHeatingCoolingState.heat to config.mode.auto is inconsistent given 'heat' being a valid value for config.mode. I think it's a bit harsh to require all devices to adhere to Eurotronic's handling in the REST API.

I don't. I don't care whether the meaning is changed for the Eurotronic (and others that followed this) or for the Elko. I do think the purpose of the API is to provide a standard interface to clients, hiding the device-specific handling.

How about making 'auto' a valid Characteristic.TargetHeatingCoolingState and mapping 'auto' to 'auto' and 'heat' to 'heat'?

That's how I implemented it originally, not understanding the meaning of AUTO in HomeKit. Fixed that recently, see https://github.com/ebaauw/homebridge-hue/releases/tag/v0.13.27.

A bit on the side, but still a relevant question: is there good documentation of how different sensor types should be exposed in the REST API? I think this basically boils down to expectations of what values different keys can have...

Spot on! And no, there isn't (any good, that is).
Given the disclaimer "Supported modes are device dependent", I don't think it's valid to claim the REST API is inconsistent.

I really don't care what you want to call it. I care that I don't have to whitelist each thermostat type in Homebridge Hue, tuning the behaviour.

But I think mapping Characteristic.TargetHeatingCoolingState.heat to config.mode.auto is inconsistent given 'heat' being a valid value for config.mode.

It is, when heat means something else (maximum heating) than HEAT (temperature-controlled heating). What does the resource for your thermostat look like? In particular, the manufacturername and modelid.

In v0.13.31.

Cool. Many thanks! I made a small donation for your work on this plugin. It's very good and a vital, integral part of my smart home setup.

Is it working? Did you test it already? I don't have the device, so I cannot test it myself.

I updated just now. It works like a charm. Thank you again!
""" contains wave export with a possible click to sonify events in the tracks such as beats """ import numpy as np from scipy.io import wavfile import mir_eval from automix.model.classes.track import Track def sonifyClicks(ticks, audioHQ, sr, outputPath=None): """ Put a click at each estimated beat in beats array todo: look at mir_eval which is used by msaf ticks can either be [time] or [[time,barPosition]] """ # audioHQ, sr = Track.readFile(inputPath) # msaf.utils.sonify_clicks(audio_hq, np.array(tick), outputPath, sr) # Create array to store the audio plus the clicks # outAudio = np.zeros(len(audioHQ) + 100) # Assign the audio and the clicks outAudio = audioHQ if isinstance(ticks[0], list): audioClicks = getClick( [tick[0] for tick in ticks if tick[1] != 1], sr, frequency=1500, volume=0.8, length=len(outAudio)) outAudio[:len(audioClicks)] += audioClicks audioClicks2 = getClick( [tick[0] for tick in ticks if tick[1] == 1], sr, frequency=1000, volume=1, length=len(outAudio)) outAudio[:len(audioClicks2)] += audioClicks2 else: audioClicks = mir_eval.sonify.clicks(ticks, sr) #getClick(ticks, sr, frequency=1500, length=len(outAudio)) outAudio[:len(audioClicks)] += audioClicks # Write to file if outputPath: wavfile.write(outputPath, sr, outAudio) return outAudio def getClick(clicks, fs, frequency=1000, offset=0, volume=1, length=0): """ Generate clicks (this should be done by mir_eval, but its latest release is not compatible with latest numpy) """ times = np.array(clicks) + offset # 1 kHz tone, 100ms with Exponential decay click = np.sin(2 * np.pi * np.arange(fs * .1) * frequency / (1. * fs)) * volume click *= np.exp(-np.arange(fs * .1) / (fs * .01)) if not length: length = int(times.max() * fs + click.shape[0] + 1) return mir_eval.sonify.clicks(times, fs, click=click, length=length)
Is Learning at Scale Just Another Name for Ubiquitous Surveillance in the Classroom? Last week, I attended an ACM conference called Learning@Scale. It was the most depressing meeting I've been to in years, both because of what was said and done and because of what wasn't. According to its blurb: This conference is intended to promote scientific exchange of interdisciplinary research at the intersection of the learning sciences and computer science. Inspired by the emergence of Massive Open Online Courses (MOOCs)…this conference was created by ACM as a…key focal point for the review and presentation of…research on how learning and teaching can change and improve when done at scale. That sounds cool, and there actually were a lot of interesting talks and posters. You'll have to take my word for that, though, because this conference on massive, open, online courses was none of the above: its proceedings are behind a paywall, and talks were neither recorded nor broadcast. In fact, most were barely tweeted: of the 120 or so people in attendance, only half a dozen tagged anything #las2014. If that wasn't ironic enough, all the presentations–even the ones about flipped classrooms–were a sage on the stage talking over a slide deck telling us what was in the paper most of us had already skimmed. This had exactly the effect you'd predict: despite the lack of power cords at the tables, more than half of the attendees were checking mail, coding, or catching up on their reading. On its own, that disconnect between what people were saying and what they were doing wouldn't have won this meeting a gold medal for depressing. What pushed it into first place was what wasn't being talked about. None of the presentations included the word "privacy" in their title; speaker after speaker talked about what we can find out about them by mining their data, but whether we should, and whether people should know who's watching them and how closely, was only touched on occasionally and briefly. 
One of the reasons, I think, is that most of the speakers were technologists. The educators I talked to were more concerned about privacy (and pedagogy), but several told me one-to-one that they felt sidelined by their lack of technical knowledge. As a result, most decisions on the ground (or on the web) were being made by people who cared more about the data they might get than about the risks they might create or the rights they might erode. The educational value of MOOCs is debatable. The fact that they are bringing ubiquitous surveillance into the classroom is not. I'm sure we'll learn things by watching over every student's shoulder every minute of every hour (though as Mark Guzdial keeps reminding us, a lot of the things we'll learn are only new to people who haven't bothered to do a literature search). I'm equally sure that if we continue down the road I saw laid out at this conference, we'll be training children to believe that being watched and recorded every moment of the day is normal. That's not a future I want.
//! ICC profile that can be embedded into a PDF

extern crate lopdf;

/// Type of the ICC profile
#[derive(Debug, Copy, Clone, PartialEq)]
pub enum IccProfileType {
    Cmyk,
    Rgb,
    Greyscale,
}

/// ICC profile
#[derive(Debug, Clone, PartialEq)]
pub struct IccProfile {
    /// Binary ICC profile
    icc: Vec<u8>,
    /// CMYK or RGB or LAB ICC profile?
    icc_type: IccProfileType,
    /// Does the ICC profile have an "Alternate" version or not?
    pub has_alternate: bool,
    /// Does the ICC profile have a "Range" dictionary?
    /// Really not sure why this is needed, but this is needed on the document's Info dictionary
    pub has_range: bool,
}

impl IccProfile {
    /// Creates a new ICC profile
    pub fn new(icc: Vec<u8>, icc_type: IccProfileType) -> Self {
        Self {
            icc,
            icc_type,
            has_alternate: true,
            has_range: false,
        }
    }

    /// Does the ICC profile have an alternate version (such as "DeviceCMYK")?
    #[inline]
    pub fn with_alternate_profile(mut self, has_alternate: bool) -> Self {
        self.has_alternate = has_alternate;
        self
    }

    /// Does the ICC profile have a "Range" dictionary?
    #[inline]
    pub fn with_range(mut self, has_range: bool) -> Self {
        self.has_range = has_range;
        self
    }
}

impl Into<lopdf::Stream> for IccProfile {
    fn into(self) -> lopdf::Stream {
        use lopdf::{Dictionary as LoDictionary, Stream as LoStream};
        use lopdf::Object::*;
        use std::iter::FromIterator;

        let (num_icc_fields, alternate) = match self.icc_type {
            IccProfileType::Cmyk => (4, "DeviceCMYK"),
            IccProfileType::Rgb => (3, "DeviceRGB"),
            IccProfileType::Greyscale => (1, "DeviceGray"),
        };

        let mut stream_dict = LoDictionary::from_iter(vec![
            ("N", Integer(num_icc_fields)),
            ("Length", Integer(self.icc.len() as i64))]);

        if self.has_alternate {
            stream_dict.set("Alternate", Name(alternate.into()));
        }

        if self.has_range {
            stream_dict.set("Range", Array(vec![
                Real(0.0), Real(1.0), Real(0.0), Real(1.0),
                Real(0.0), Real(1.0), Real(0.0), Real(1.0)]));
        }

        LoStream::new(stream_dict, self.icc)
    }
}

/// Named reference for an ICC profile
#[derive(Debug, Clone, PartialEq)]
pub struct IccProfileRef {
    pub(crate) name: String,
}

impl IccProfileRef {
    /// Creates a new IccProfileRef
    pub fn new(index: usize) -> Self {
        Self {
            name: format!("/ICC{}", index),
        }
    }
}

/// List of ICC profiles, indexed by insertion order
#[derive(Default, Clone, Debug, PartialEq)]
pub struct IccProfileList {
    profiles: Vec<IccProfile>,
}

impl IccProfileList {
    /// Creates a new IccProfileList
    pub fn new() -> Self {
        Self::default()
    }

    /// Adds an ICC profile and returns a named reference to it
    pub fn add_profile(&mut self, profile: IccProfile) -> IccProfileRef {
        let cur_len = self.profiles.len();
        self.profiles.push(profile);
        IccProfileRef::new(cur_len)
    }
}
PyTorch preferred way to copy a tensor

There seem to be several ways to create a copy of a tensor in PyTorch, including

y = tensor.new_tensor(x) #a
y = x.clone().detach() #b
y = torch.empty_like(x).copy_(x) #c
y = torch.tensor(x) #d

b is explicitly preferred over a and d according to a UserWarning I get if I execute either a or d. Why is it preferred? Performance? I'd argue it's less readable. Any reasons for/against using c?

One advantage of b is that it makes explicit the fact that y is no longer part of the computational graph, i.e., doesn't require gradient. c is different from all 3 in that y still requires grad.

How about torch.empty_like(x).copy_(x).detach() - is that the same as a/b/d? I recognize this is not a smart way to do it, I'm just trying to understand how the autograd works. I'm confused by the docs for clone() which say "Unlike copy_(), this function is recorded in the computation graph," which made me think copy_() would not require grad.

There's a pretty explicit note in the docs: When data is a tensor x, new_tensor() reads out 'the data' from whatever it is passed, and constructs a leaf variable. Therefore tensor.new_tensor(x) is equivalent to x.clone().detach() and tensor.new_tensor(x, requires_grad=True) is equivalent to x.clone().detach().requires_grad_(True). The equivalents using clone() and detach() are recommended.

PyTorch '1.1.0' recommends #b now and shows a warning for #d.

@ManojAcharya maybe consider adding your comment as an answer here.

What about .clone() by itself?

@CharlieParker .clone() keeps the graph structure but creates new memory, i.e., its grad will backpropagate to the original object from the independently cloned data; it is just the opposite of .detach(), which is discarded from the original graph but shares the memory address. Generally, .clone() is suitable to construct simple repetitive neural layers with the same function, e.g., the Multi-Heads in Transformer.
TL;DR

Use .clone().detach() (or preferably .detach().clone())

If you first detach the tensor and then clone it, the computation path is not copied; the other way around it is copied and then abandoned. Thus, .detach().clone() is very slightly more efficient. -- pytorch forums

as it's slightly faster and explicit in what it does.

Using perfplot, I plotted the timing of various methods to copy a pytorch tensor.

y = tensor.new_tensor(x) # method a
y = x.clone().detach() # method b
y = torch.empty_like(x).copy_(x) # method c
y = torch.tensor(x) # method d
y = x.detach().clone() # method e

The x-axis is the dimension of the tensor created, the y-axis shows the time. The graph is in linear scale. As you can clearly see, tensor() or new_tensor() takes more time compared to the other three methods.

Note: In multiple runs, I noticed that out of b, c, e, any method can have the lowest time. The same is true for a and d. But the methods b, c, e consistently have lower timing than a and d.

import torch
import perfplot

perfplot.show(
    setup=lambda n: torch.randn(n),
    kernels=[
        lambda a: a.new_tensor(a),
        lambda a: a.clone().detach(),
        lambda a: torch.empty_like(a).copy_(a),
        lambda a: torch.tensor(a),
        lambda a: a.detach().clone(),
    ],
    labels=["new_tensor()", "clone().detach()", "empty_like().copy()", "tensor()", "detach().clone()"],
    n_range=[2 ** k for k in range(15)],
    xlabel="len(a)",
    logx=False,
    logy=False,
    title='Timing comparison for copying a pytorch tensor',
)

Stupid question, but why do we need clone()? Do otherwise both tensors point to the same raw data?

Ah yes, see https://discuss.pytorch.org/t/clone-and-detach-in-v0-4-0/16861/2?u=gebbissimo
With all other versions there is some hidden logic, and it is also not 100% clear what happens to the computation graph and gradient propagation.

Regarding #c: It seems a bit too complicated for what is actually done and could also introduce some overhead, but I am not sure about that.

Edit: Since it was asked in the comments why not just use .clone(). From the pytorch docs:

Unlike copy_(), this function is recorded in the computation graph. Gradients propagating to the cloned tensor will propagate to the original tensor.

So while .clone() returns a copy of the data, it keeps the computation graph and records the clone operation in it. As mentioned, this will lead to gradients propagated to the cloned tensor also propagating to the original tensor. This behavior can lead to errors and is not obvious. Because of these possible side effects, a tensor should only be cloned via .clone() if this behavior is explicitly wanted. To avoid these side effects, .detach() is added to disconnect the computation graph from the cloned tensor.

Since in general for a copy operation one wants a clean copy which can't lead to unforeseen side effects, the preferred way to copy a tensor is .clone().detach().

Why is detach() required? From the docs: "Unlike copy_(), this function is recorded in the computation graph. Gradients propagating to the cloned tensor will propagate to the original tensor." So to really copy the tensor you want to detach it, or you might get some unwanted gradient updates that you don't know where they are coming from.

What about .clone() by itself?

I added some text explaining why not clone by itself. Hope this answers the question.
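The clone/detach behaviour described in this answer can be checked in a few lines (a minimal sketch; the variable names are mine, not from the answer):

```python
import torch

x = torch.ones(2, requires_grad=True)

# .clone() alone stays in the computation graph:
# gradients through the clone flow back to x
y = x.clone()
y.sum().backward()
print(x.grad)  # tensor([1., 1.])

# .clone().detach() is disconnected from the graph:
# the copy does not require grad and cannot affect x
z = x.clone().detach()
print(z.requires_grad)  # False
```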
One example to check if the tensor is copied:

import torch

def samestorage(x,y):
    if x.storage().data_ptr()==y.storage().data_ptr():
        print("same storage")
    else:
        print("different storage")

a = torch.ones((1,2), requires_grad=True)
print(a)
b = a
c = a.data
d = a.detach()
e = a.data.clone()
f = a.clone()
g = a.detach().clone()
i = torch.empty_like(a).copy_(a)
j = torch.tensor(a) # UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
print("a:",end='');samestorage(a,a)
print("b:",end='');samestorage(a,b)
print("c:",end='');samestorage(a,c)
print("d:",end='');samestorage(a,d)
print("e:",end='');samestorage(a,e)
print("f:",end='');samestorage(a,f)
print("g:",end='');samestorage(a,g)
print("i:",end='');samestorage(a,i)
print("j:",end='');samestorage(a,j)

Out:

tensor([[1., 1.]], requires_grad=True)
a:same storage
b:same storage
c:same storage
d:same storage
e:different storage
f:different storage
g:different storage
i:different storage
j:different storage

The tensor is copied if "different storage" shows up. PyTorch has almost 100 different constructors, so you may add many more ways. If I needed to copy a tensor I would just use copy(); this also copies the AD-related info, so if I needed to remove AD-related info I would use:

y = x.clone().detach()

Use "if x.untyped_storage().data_ptr()==y.untyped_storage().data_ptr():" instead of "if x.storage().data_ptr()==y.storage().data_ptr():" to avoid deprecated TypedStorage.

Pytorch '1.1.0' recommends #b now and shows a warning for #d.

What about .clone() by itself? clone by itself will also keep the variable attached to the original graph.

@CharlieParker see comments in the original question, and also the answer from Nopileos.
Recently, I took this great course on Locating Web Elements from Andrew Knight, through Test Automation University. In addition to learning helpful syntaxes for accessing elements, I also learned about yet another way we can use DevTools to help us! One of the most annoying things about UI test automation is trying to figure out how to locate an element on a page if it doesn’t have an automation id. You are probably aware that if you open the Developer Tools in Chrome, you can right-click on an element on a Web page, select “Inspect” and the element will be highlighted in the DOM. This is useful, but there’s something even more useful hidden here: there’s a search bar that allows you to see if the locator you are planning to use in your test will work as you are expecting. Let’s walk through an example of how to use this valuable tool. Let’s navigate to this page, which is part of Dave Haeffner’s “Welcome to the Internet” site, where you can practice finding web elements. On the Challenging DOM page, there’s a table with hard-to-find elements. We’re going to try locating the table element with the text “Iuvaret4”. First, we’ll open DevTools. The easiest way to do this is to right-click on one of the elements on the page and choose “Inspect”. The Dev Tools will open either on the right or bottom of the page, and the Elements section will be displaying the DOM. Now, we’ll open the search bar. Click somewhere in the Elements section, then press Ctrl-F. The search bar will open below the elements section, and the search bar will say “Find by string, selector, or XPath”. We’ll use this tool to find the “Iuvaret4” element with CSS. Right-click on the “Iuvaret4” element in the table, and choose “Inspect”. The element will be highlighted in the DOM. Looking at the DOM, we can see that this is a <td> (table data) element, which is part of a <tr> (table row) element. So let’s see what happens if we put tr in the search bar and click Enter. It returns 13 elements.
You can click the up and down arrows at the side of the search bar to highlight each element found. The first “tr” the search returns is just part of the word “demonstrates”. The next “tr” is part of the table head. The following “tr”s are part of the table body, and this is where our element is. So let’s put tbody tr in the search bar and click Enter. Now we’ve narrowed our search down to 10 results, which are the rows of the table body. We know that we want the 5th row in the table body, so now let’s search for tbody tr:nth-child(5). This search narrows things down to the row we want. Now we can look for the <td> element we want. It’s the first element in the row, so if we search for tbody tr:nth-child(5) td:nth-child(1) we will narrow the search down to exactly the element we want. This is a pretty good CSS selector, but let’s see if we can make it shorter! Try removing the “tbody” from the search. It turns out the element can be located just fine by simply using tr:nth-child(5) td:nth-child(1). Now we have a good way to find the element we want with CSS, but what happens if a new row is added to the table, or if the rows are in random order? As soon as the rows change we will be locating the wrong element. It would be better if we could search for a specific text. CSS doesn’t let us do this, so let’s try to find our element with XPath. Remove the items in the search bar and let’s start by searching on the table body. Put //tbody in the search field and click Enter. You can see when you hover over the highlighted section in the DOM that the entire table body is highlighted on the page. Inside the table body is the row with the element we want, so we’ll now search for //tbody/tr. This gives us ten results; the ten rows of the table body. We know that we want to select a particular <td> element in the table body: the element that contains “Iuvaret4”. So we’ll try searching for this: //tbody/tr/td[contains(text(), "Iuvaret4")].
We get the exact result we want, so we’ve got an XPath expression we can use. But as with our CSS selector, it might be possible to make it shorter. Try removing the “tbody” and “tr” from the selection. It looks like all we need for our XPath is //td[contains(text(), “Iuvaret4”)]. Without this helpful search tool, we would be trying different CSS and XPath combinations in our test code and running our tests over and over again to see what worked. This Dev Tools feature lets us experiment with different locator strategies and get instant results!
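As a side note, the same checks can be scripted in the DevTools Console (a sketch of my own, not from the course; `$x` is Chrome's console-only XPath helper, and the variable names are mine):

```javascript
// Try the finished locators programmatically on the Challenging DOM page.

// CSS: querySelector returns the first matching element, or null
const byCss = document.querySelector("tr:nth-child(5) td:nth-child(1)");

// XPath: $x returns an array of all matching elements
const byXpath = $x('//td[contains(text(), "Iuvaret4")]');

// Both locators should resolve to the same table cell
console.log(byCss === byXpath[0]);
```

This is handy when a locator matches in the Elements search bar but you want to confirm which node your test framework will actually pick.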
What is the default of numpy functions, with where=False?

The ufunc documentation states:

where
New in version 1.7. Accepts a boolean array which is broadcast together with the operands. Values of True indicate to calculate the ufunc at that position, values of False indicate to leave the value in the output alone.

What is the default behavior when out is not given? I observed some behavior which doesn't really make sense to me:

import numpy as np
a,b = np.ones((2,2))
np.add(a,b,where = False) #returns 0
np.exp(a, where = False) #returns 1
np.sin(a, where = False) #returns 1
np.sign(a, where = False) #returns 0
np.reciprocal(a, where = False) #returns 0

Does anyone know the underlying reason/behavior? Especially np.reciprocal doesn't really make sense, as the reciprocal value can never be 0.

EDIT: The behavior is even more complex:

a,b = np.ones(2)
np.add(a,b,where = False) #returns 6.0775647498958414e-316

a,b = 1,1
np.add(a,b, where = False) #returns 12301129, #running this line several times doesn't give the same result every time...

I'm using Numpy version 1.11.1

Your first line raises a ValueError; the other lines also look awkward. Could you make an actual runnable example to test? NB: it may be my reading, but False is not a boolean array. Perhaps the functions cast it to one, but it's not what the documentation states.

I can't reproduce your issue: the second raises ValueError: Automatic allocation was requested for an iterator operand, and it was flagged as readable, but buffering without delayed allocation was enabled. That's for NumPy 1.13. Fun fact: the same line for NumPy 1.11 does not raise an exception, but returns -1.4916681462400413e-154. Definitely smaller than np.finfo(np.float64).eps, thus 0, but not an integer.

It's not more complex, it's incorrect because it's being used incorrectly. It is actually a bug in NumPy, and should raise an exception. Not the one in my other comment: that's Python 3 catching a problem in NumPy.
Python 2 doesn't catch this problem, and as a result, you get unpredictable (undefined?) behaviour. You may want to raise an issue on the NumPy issue tracker, and point people to this question. I don't think NumPy should behave as such.

@MSeifert: I see you've edited the question. Any suggestion perhaps?

It's tricky: where without out simply doesn't make much sense, because neither the ufunc-identity nor zero make any sense as "default value" in the returned array. I guess it's just allocating an np.empty_like(a) out array and then you're stuck with "random values", especially if you define where=False and don't put any values in the newly created array.

It looks like garbage because that's exactly what it is - memory that's been garbage collected. Whatever function you are calling sets aside a block of memory to put the results in, but never puts any results there because where=False. You're getting the same values you would from np.empty - i.e. whatever garbage was in that memory block before the function assigned it.
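For completeness, the intended use of `where` is together with an explicit `out` array, so the masked positions hold a well-defined value instead of uninitialized memory (this sketch is mine, not from the thread):

```python
import numpy as np

a = np.ones(3)
b = np.ones(3)

# Pre-fill the output; positions where the mask is False are left untouched
out = np.full(3, -1.0)
np.add(a, b, out=out, where=np.array([True, False, True]))
print(out)  # [ 2. -1.  2.]
```

Without `out`, NumPy allocates an uninitialized result array, which is exactly where the garbage values above come from.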
The daily standup as a crutch

When I first started at GitLab in November 2016, I wanted a synced daily standup over a video call. Just like any other new product manager, I was eager to get to know the team quickly, and kick off a close collaborative relationship with every engineer and designer. I thought that the daily standup was perfect to regularly get face time to establish rapport. I soon realized, however, that no such culture existed at all in the company, and there was resistance to start such a ceremony. I was disappointed that I couldn’t bring my supposedly great product management skills to bear in that setting.

Fast forward two years plus, I now know that at the time, and in previous product manager roles in co-located teams, I was relying on the daily standup as a crutch to collect status and be re-assured that the team was hard at work. I didn’t trust the team. The daily standup wasn’t an opportunity for unblocking problems and fostering collaboration more broadly, which as a concept it was intended to be. It had devolved into micromanagement.

Pushing and pulling

The daily standup, as originally designed, is a tool to help teams collaborate. It was invented in a time of co-located teams, when people were limited to physical spaces and a common work day. (Notice that I said that co-located teams are limited. That’s a theme in these articles. Remote teams are actually less constrained than co-located teams, not the reverse!) So it made sense that the daily standup was indeed daily, to promote frequent and regular collaboration in a setting that was built on synced communication.

But with remote teams and remote work in general, async communications should be the norm for the majority of interactions. And so it follows that collaboration should also be as async as possible. In practice, this means your team should follow a push-pull model. You should be frequently providing status of your work, pre-empting anybody asking it from you later on.
You could hard push this information by pinging people directly in instant messaging. But a soft push of updating a single source of truth work artifact (such as a user story in an issue tracker or a pull request in a source control tool) is even better, since teammates who care can configure their notifications accordingly to receive that information automatically. And as the pusher you can avoid unnecessarily adding more noise if you are unsure of the intended audience.

On the other side, you should be pulling information as needed. Soft pulls are ideal for on-demand self-service status updates. You can just review the changes of single source of truth work artifacts to get the latest information, without disturbing anyone else. You are unblocked immediately because you don’t have to wait for a response. A hard pull for information means actively pinging somebody for status (because you can’t find it from a soft pull). You should work together with your teammate to write down the acquired information afterward to make future soft pulls of the same information possible.

Lastly, another type of hard pull is simply requesting active collaboration. The request itself should be async (even if the actual planned collaboration work will be synced). Don’t wait until the next scheduled meeting to ask someone for input. Just do it right now in a work artifact or in an instant messaging tool.

Note that the push-pull model does require teams to be writing down as much as possible, and doing it at a high quality, to really reap the benefits. For remote teams, it’s a bit easier because you are writing much of the day anyways, so you tend to develop it as a skill over time.

This simple push-pull model means that you can operate much more efficiently in a team. You don’t have to wait up to 24 hours before the next collaboration opportunity. Yes, in a co-located team you technically don’t have to wait if everyone is already there in the same office.
But I’ve personally experienced many cultures where “we might as well wait until the next weekly meeting” before bringing up a topic. Or “let me gather a list of asks and I can share them with you at our meeting next time”. In co-located teams, it’s very easy to tend towards waiting. In remote teams, if you are already communicating async, push-pull becomes second nature. Efficiency doesn’t necessarily mean fast or frenetic, and that’s good. The push-pull model actually scales to your team’s need in this regard. If you need to do deep work and focus on a small number of tasks over an extended period of time, push-pull serves that well, since everyone can spend most of their time doing individual work or collaborating, whatever the case may be. If you still continue with synced daily standups or any other regularly scheduled status meeting, it actually becomes a waste of time, because you just don’t need those meetings if there’s nothing to update. Regularly scheduled synced meetings are not inherently bad. But just be cautious that they serve an intentional purpose. If they become a crutch that slows you down or reduces collaboration, then it should be removed. For example, in my current product-development team (engineers and designers), we only have one regularly scheduled weekly meeting. There are no pre-determined agenda items. Folks are encouraged to add topics if they want a larger and more generic audience. The topics are typically more informational and broad. If team members need to collaborate on a specific feature, they just use the push-pull model, possibly setting up synced video call sessions as needed. They do not wait for the weekly meeting. And as such, sometimes the weekly meeting ends early or is even just canceled if there are no topics added that week. As a rule of thumb, the synced daily standup shouldn’t be necessary. I’ve observed many teams in different organizations use async standup tools in instant messaging. 
I think these are great because they fit precisely into the push-pull model.

Ownership through accountability and responsibility

Ultimately, this model of collaboration requires people to be owners of their work. Owners are very good at taking initiative and optimizing for the benefit of the entire team and the common desired outcome (which is typically a business objective such as delivering customer value by shipping a great product). Remote work, and the push-pull model in particular, fails spectacularly if your team members are passive order takers. So as with many aspects of remote work, it's a forcing function for you to do better. And in this case, that means hiring better people and continuously developing them into better owners. In particular, ownership means team members should be empowered to make decisions and communicate them broadly, being accountable and responsible for their results.

As a remote work product manager, avoid synced daily standups.
So you think you are afraid of snakes? Laura Ireland, 44, a Southington mother of two, can't even bear to read this article until her husband clips the drawing off the top of this page. Betty Coville, 44, of West Hartford can look at this story, "as long as I can read it without touching that drawing." As for Pam Reed, 49, of Farmington, she once "got the dry heaves from having my hand on a page" of National Geographic with a snake photograph. She's better now, as long as our depiction doesn't wriggle. Sure, there are plenty of people afraid of snakes. According to The Unofficial Census of the United States (Ballantine, 1991), it's the No. 1 fear of 100 million Americans, a full python-length ahead of its nearest competitors, public speaking (64 million folks) and high places (46 million). But there's fear of snakes and then there's ophidiophobia, that is, the major-league hard-core don't-let-those-suckers-near-me fear of snakes. Some ophidiophobics know that more Americans die annually from bee stings than from poisonous snakebites. They know snakes play a vital role in our ecosystem, consuming excess insects and rodents. They may even know that man's destruction of their habitats is driving some snakes toward extinction. But for them, snakes remain the most reprehensible of reptiles, what naturalist Alexander Skutch called "an elongated, distensible stomach [that] crams itself with animal life ... to perpetuate a predatory life in its naked horror." Take Ireland (not the snake-free isle of the North Atlantic, but the woman with the snake-snipping spouse). Like most snake phobics, she can't recall exactly how her fear got started. But now the mere sight of the serpentine sends her into an eye-clenched fit of shuddering revulsion. She sits before the TV with her remote control at the ready. Car commercials. Jeans commercials. She's learned by grim experience which ones spice their pitches with a flash of squirming snake. "I'll think, `Here it comes. . .' 
and I'll click it off," she says. Still, a preview for a snake horror movie once caught her off guard: "That's enough to give me a heart attack." These are not fraidy-cats. Coville couldn't care less about spiders or mice. But a glimpse of a garter snake in the grass and it's as if she were getting up close and personal with Medusa, the Greek Gorgon whose scalp of snakes turned all beholders to stone. Paralyzed by fear, Coville had to be assisted from the snake display at the Science Museum of Connecticut by her kids. She's fled stores that sell toy snakes and dropped magazines in mid-page-turn. Talking snakes for too long with a reporter, she lifts her feet off the floor where vipers might commute. Two weeks back, she was in a Glastonbury coffee shop when a man entered with his new pet coiling in a sack: "I couldn't get up out of the chair. ... I backed into the corner of the booth, trembling and crying and hardly breathing." Then there's Bea Geier, 54, of Stonington, who insists, "I only fear two things in this whole universe. God and snakes, in that order." Last summer, what she describes as a 5-foot copperhead slid up the porch of her mobile home and ingested the occupants of her birdhouse. "It had its whatever-you-call-it, I guess its face, I can't even give it a name it's so ugly, inside the little opening of the birdhouse." Thus did her lifelong fear of snakes blossom into full-scale phobia. She spent the rest of the hot summer with her windows closed, feeling silly yet afraid to traverse dark rooms. She stopped her gardening and walks and started boning up: " `It's a fool that does not know his enemy,' " she insists. But knowledge didn't soothe her. She learned how not to attract snakes, but that meant not feeding their food, her beloved bunnies. She found she could never seal off her home enough to keep them out, not the plumbing spaces, not the clothes-drier ducts. So she still worries. "I'd rather be clawed to death than squeezed." 
Like a terrorist in search of an H-bomb blueprint, she couldn't find the one thing she'd yearned for, not in all her conversations with wildlife officers, nor in their pamphlets: "Nobody is going to tell you how to kill them." Instead, with her snake-haunted home on the market and still swerving around a patch of driveway where she saw a dead snake months back, Geier contemplates the roots of her snake fear -- including a statue of the Virgin Mary crushing the evil serpent, a statue she recalls vividly from childhood. "Christians do associate serpents with evil," she says. Some know precisely what dark hole their phobia crawled from. Laura Burt, 19, a sophomore at the University of Connecticut, blames her older brother Robert. Until the age of 6 or so, she played easily with the garter snakes he kept as pets. Then Robert, all of 9 or 10, told her their woodsy Pennsylvania back yard was rife with poisonous copperheads.
Well today I got one hot interview for you guys :P As yall have seen, not very long ago we had a very UNUSUAL CS behavior as he did actually cuss out loud using shouts on a CS char!! Of course due to this, Ex CS England AKA -Chaos- was removed from the CS team...but what really happened? What caused a CS who passed all the CS tests and interviews to act the way he did? Well the best way to find out is by asking him ;) We all know about him being hacked and stuff like that, but let's hear it from him xD
WickedSnake: Okay let's start from beginning to end
-Chaos-: Yes but let me start by saying it wasn't the people who knew my pass, I can trust them with anything and I was right.
WickedSnake: Ya so it wasn't because of multipilots
-Chaos-: Well basically I was happily playing and went for a quick snack, I came back to see that I have DCed, I relogged and found out that I DCed someone...I logged in and found out that -Chaos- was naked...my heart stopped
WickedSnake: Well that's spooky
-Chaos-: I accessed the char and in his inventory was nothing but one item...
WickedSnake: Which was..?
-Chaos-: A spirit seal!!
-Chaos-: Yes OMG...he tried to log again so I kept logging in as fast as I could to keep the hacker out
-Chaos-: While I told pete to change the password
WickedSnake: Who is pete??
-Chaos-: My cousin, IGN -Morphisis-, and then I reported it and before I knew it the account was locked, it stayed locked for over a week
WickedSnake: What did they tell you?
-Chaos-: Well basically saying "serves you right for sharing"
WickedSnake: What does that mean :S?!?!
-Chaos-: Well me and pete are both pilots so somehow they assumed that we hacked each other, anyway a couple of days later they told me that they have found the stuff
WickedSnake: That's nice so you got all your stuff back
-Chaos-: Yes thank God
WickedSnake: Well what caused you to login your CS account and do what you did??
-Chaos-: Well it was due to my outburst of pure RAGE, I go boxing every night, I almost broke my wrist that night!!
WickedSnake: LOL chilll out dawgggg
-Chaos-: Ya the poor bag
WickedSnake: Dude you are supposed to be a pilot
-Chaos-: We have anger too xD
WickedSnake: Yes but have some self control
-Chaos-: and now sick leave
WickedSnake: What did the GMs tell you about the hacker? Who was he? How did he do it?
-Chaos-: They told me very little
WickedSnake: Well tell us xD
-Chaos-: They told me that the IP address was probably from Egypt and the stuff had been traded many times so it was hard to trace
WickedSnake: Good that you got everything back
-Chaos-: Yes in fact I did celebrate this by upgrading the item that I like most in game xD
WickedSnake: Wow that's nice ^^
-Chaos-: Thank you xD How do you feel about the Griffen event?
WickedSnake: Well I think it kinda sucks cuz the mechanism could have been better lol
-Chaos-: Ya I guess am going to win battle royal, I got so many medals that I had to empty my inventory like twice lol
WickedSnake: That's nice, I think the registry mechanism could have been better
-Chaos-: Well battle royal is bad
WickedSnake: Well since you are known to be a pro hunter, give us some hunting tips :P
-Chaos-: Well dun afk hunt, hunting while you are online gives much better results, also try and hunt monsters that are worth it
WickedSnake: Well I got to go now, thanks for the interview
-Chaos-: My pleasure
Well it was obvious to everyone that -Chaos- made a mistake when he used his CS account for a personal cause, anyway he admitted that he acted wrongfully when he did this and that he deserved to be expelled from the CS team...admitting it when you are wrong is a sign of a good person...anyway -Chaos-, glad that you got your stuff back and wish you all the luck xD
How does counter mode (CTR) work
Encrypting a counter to produce a stream cipher; it converts a block cipher into a stream cipher and can be parallelized. The message itself is not encrypted directly: a counter value is encrypted, and the resulting pseudorandom output is XORed with the message. It is a standard mode for block ciphers such as AES.

Which problem does Diffie-Hellman solve
Before a message can be encrypted, the "secret key" must be shared between the communication partners over an insecure channel; this problem is solved with Diffie-Hellman.

How does the Diffie-Hellman algorithm work
Alice and Bob agree on base parameters:
p = a large prime number, usually 2048-bit or 4096-bit
g = a base that must be a primitive root of p (e.g. 3 is a primitive root of 7)
Alice and Bob each select a number as their private key: Alice chooses a private value a at random, Bob chooses a private value b at random. The private values must be between 1 and p - 1, so each is up to a 2048-bit number, which is never told to anyone.
Alice and Bob each calculate a public key: the public key is created using g and p to mathematically hide the private value. The public keys are swapped over the wire: Alice sends the result of the calculation g^a mod p, and Bob sends the result of the calculation g^b mod p.
Each private key is combined with the other's public key to create the shared "secret key". The shared secret is usually called the pre-master secret; it's used to derive session keys.

What's the shared secret when applying Diffie-Hellman with the following numbers: g = 3, p = 29, Alice's private key 23, Bob's private key 12
24 (worked solution in slide deck 4, p. 53)

What are the two main use cases of RSA
Encryption that only the owner of the private key can read. If you want to send an encrypted message to another network member, just take his/her public key and encrypt the data with this key.
Signing that must have been performed by the owner of the private key. If you want to trust a server, it can send a message encrypted with its private key, and you know by decrypting the message with its public key that it really is him/her.

What is the discrete logarithm problem
3^29 mod 17 = x: x is easy to determine.
3^x mod 17 = 12: x is hard to determine, especially for big numbers, because the solution can only be found with brute force.

The following variables for an RSA process are given; encrypt and decrypt the message m = 89
n = 53 * 59 = 3127, e = 3, d = 2011
c = 89^3 mod 3127 = 1394
m = 1394^2011 mod 3127 = 89
This is an example from presentation 5, starting at page 27.

Tell some facts about RSA
RSA is very weak when encrypting short messages. Padding is added to short messages; Optimal Asymmetric Encryption Padding (OAEP) is then used. It introduces an IV into the process and then hashes it. The receiver has to use the exact same padding to make sure the messages match up.
It's not common to see bulk encryption done with RSA: RSA is about 1000x slower than symmetric crypto systems.

The following variables are given; what does the signature process look like if the signature of Alice is SignatureAlice = 42 (n = 3233, d = 2753, e = 17)
Alice signs: c(SignatureAlice) = 42^2753 mod 3233 = 3065
Bob verifies: 3065^17 mod 3233 = 42 => SignatureAlice = 42
Tom verifies: 3065^17 mod 3233 = 42 => SignatureAlice = 42
(worked solution in presentation 4, p. 34)
Signing is encrypting with the private key.

What's the problem with RSA in a few years
RSA is going to become slower because bigger keys will have to be used. The main alternative is DSA (Digital Signature Algorithm). DSA only works for signing; it acts like RSA but uses mathematics similar to Diffie-Hellman.

What is a hash function and why is it useful
Takes a message of any length and creates a pseudorandom hash with a fixed length.
Used for message authentication, integrity, passwords. A good hash algorithm is fast, but not too fast.

What's a strong hash function
Any input length results in a fixed-size hash.
1. It has to be quick, but not too quick
2. It has to introduce diffusion => 1 change in the input results in many changes in the output
3. Given a hash, we can't reverse it
4. Given a message and its hash, we can't find another message that hashes to the same thing
5. We can't find any two messages that have the same hash
MD5: collisions can be generated => broken
SHA-1: collisions have been demonstrated (2017) => deprecated
SHA-2: currently not broken

What's the current hash standard
SHA-2, 256-bit or 512-bit. SHA-3 is neither better nor worse than SHA-2.

Which hash functions should be used for passwords, and why is SHA-2 not a good solution
SHA-2 is too fast: an attacker can generate a lot of hashes and compare the outputs to the stolen password hash. A good solution for password hashes are algorithms such as PBKDF2 (Password-Based Key Derivation Function 2), which works similarly to SHA-2 but repeats the process e.g. 5000 times, making it 5000 times slower. These algorithms are also hard to run on a GPU, which makes it more difficult to compute hashes in parallel to guess the password.

Where are hashes used
Message tampering is a common attack, and with hashes it can be ensured that the message wasn't altered. The hash of the message is added to the packet; the receiver applies the hash function to the content and compares the result to the received hash. If both are the same, it's more likely that the data hasn't been changed.

How does a DNS zone transfer attack work and why can it be harmful
A DNS zone transfer is a process where one DNS server copies part of its database to another DNS server. This helps to have more than one server which can answer questions about a zone; the slaves ask the master for a copy. In a DNS zone transfer attack, you pretend to be a slave and get a copy of the DNS zone records.
Risk: the zone records show a lot of internal topology information about the network. If someone wants to subvert the DNS with spoofing and poisoning, this is very helpful.

What is Red Team in the context of Cybersecurity
Offensive cybersecurity: focus on penetration testing, assuming the role of a hacker, showing organizations where there could be backdoors or exploits. Common practice is that they are outside of the organization.

What is Blue Team in the context of Cybersecurity
Defensive cybersecurity: assessment of network security, identification of possible vulnerabilities, finding ways to defend, changing and re-grouping defence mechanisms to make incident response much stronger. They continuously improve the digital security infrastructure using security audits, log and memory analysis, pcap, and risk intelligence data.

What's the idea behind risk management?
Reduce risk and support the mission of the organization. It is impossible to design a risk-free environment, but significant risk reduction is possible, often with little effort. It means identifying factors that could damage or disclose data, evaluating those factors in light of data value and countermeasure cost, and implementing cost-effective solutions for mitigating or reducing risks.

What's part of a risk analysis?
Evaluation, assessment, and the assignment of value for all assets of an organization. Examining an environment for risks. Evaluating each threat event as to its likelihood of occurring and the cost of damage it would cause if it did occur. Assessing the cost of various countermeasures for each risk and creating a cost/benefit report on safeguards to present to upper management.

What's Risk mitigation?
Reducing risk: implementation of safeguards and countermeasures to eliminate vulnerabilities.

What's Risk assignment?
Moving risk to another entity or organization.

What's Risk acceptance?
Risk tolerance: the cost/benefit analysis shows that the countermeasure costs too much.
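The worked numbers on the cards above (the Diffie-Hellman exchange with g = 3, p = 29 and the toy RSA parameters) can be checked with a few lines of Python using the three-argument pow for modular exponentiation. The exponents e = 3, d = 2011 and e = 17, d = 2753 are the ones implied by the slide examples; all of these are tiny teaching values, never usable in practice:

```python
# Diffie-Hellman: g = 3, p = 29, private keys a = 23 (Alice), b = 12 (Bob)
g, p = 3, 29
a, b = 23, 12
A = pow(g, a, p)            # Alice's public key, sent over the wire
B = pow(g, b, p)            # Bob's public key, sent over the wire
secret_alice = pow(B, a, p) # Alice combines her private key with Bob's public key
secret_bob = pow(A, b, p)   # Bob does the same with Alice's public key
assert secret_alice == secret_bob == 24   # matches the card's answer

# RSA encryption: n = 53 * 59 = 3127, e = 3, d = 2011, message m = 89
n, e, d = 53 * 59, 3, 2011
c = pow(89, e, n)           # ciphertext: 89^3 mod 3127 = 1394
m = pow(c, d, n)            # decryption: 1394^2011 mod 3127 = 89
assert (c, m) == (1394, 89)

# RSA signing: n = 3233, d = 2753 (sign), e = 17 (verify), signature of 42
sig = pow(42, 2753, 3233)   # "encrypt" with the private key -> 3065
assert sig == 3065
assert pow(sig, 17, 3233) == 42   # anyone can verify with the public key
```

Running the script silently (all assertions pass) confirms the shared secret of 24, the ciphertext 1394, and the signature value 3065 from the slides.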
After v1.3.1 update, sensors are not updating. I updated to v1.3.1, which I know is a pre-release, but wanted to let you know that none of the sensors that are selectable from the Configure section are updating; they have been static for the past 15 hours or so. My config for the sensors: and my entry in sensors.yaml:

- platform: pirateweather
  api_key: <API KEY>
  scan_interval: '00:02:15'

Should I try a new API key?
Good catch! I'm seeing this as well on my end, so let me investigate
Follow-up question- did you add a "time" sensor, and is it updating?
Nope, not the time sensor. I'm thinking it might have something to do with re-configuring the sensors, but not sure yet
Good thought, but luckily that's not it! The data update coordinator handles all the fetching, so no matter how many sensor or weather entities you have for a given location, it's all done in a single call. I've isolated this down to making a change using the configure button- updates stop working whenever it's configured. The key question is if this is a new issue, or something that has always been happening before 1.3
From my limited testing, this issue seems to happen if you have the time sensor selected in your list of sensors. I was having the same issue, and removing the sensor (and restarting HA) seems to have solved it for me. I'm going to observe it for a bit longer before saying conclusively whether that is the issue or not.
Oh interesting, I didn't try restarting. Definitely let me know if that works- first with the time sensor, then without
Enabling Time without restarting freezes the sensors. Enabling Time with restarting leaves the sensors frozen. Hmm, now nothing is updating, not even the weather entity.
Ok, I think I've got a solution here! For some reason it's not linking with the existing weather coordinator when options are changed.
In the current version I had a check to see if there was an existing coordinator, but that was only there in case someone wanted both a daily and hourly forecast in their dashboard. Now that both of those are handled in one place, that's no longer required! Long story short, 1.3.2 is incoming shortly
I see. Ok, will look out for it and test.
1.3.2 installed and HA restarted. I have my scan interval set to 2min 15sec and it doesn't seem to be updating at that interval.
Do you mind going to settings -> logs -> load full logs? The coordinator logs an info message every time it checks for an update, so it should show up there if it's updating
That's great! Check out those bottom lines, that's the update coordinator checking for updates every 2 seconds, which is looking promising. Also worth adding the time sensor back in, since that should update on every pass
Time sensor is back in, but is still displaying that odd 10-digit string. The sensors are still not showing as updated recently. Unless I'm being too impatient. I can let it run for a few hours and report back.
Yea, the 10-digit string is the Unix time for the forecast. I'm not sure if HA has a better way of displaying it (maybe ISO), but it should change with every update:
Mine isn't updating like that
Can you try selecting "reload" from the three-dot menu on the integration page? I'm thinking that might kick it into gear
Ok, that did it. And now that I know it's Unix time I wrote this template to convert it to human-readable MM/DD.

forecast_date_2d:
  friendly_name: "Forecast Date 2d"
  value_template: "{{ states('sensor.pirateweather_time_2d') | int | timestamp_custom('%m/%d') }}"

Ok, everything seems to be working as expected now. Thank you for your prompt responses and for maintaining this integration.
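The conversion that template performs, Unix epoch seconds to a short human-readable date, can be sketched in plain Python. This is an illustration of the idea rather than Home Assistant code; the sample timestamp is arbitrary, and UTC is used here whereas HA's filter works in the configured local time zone:

```python
from datetime import datetime, timezone

def format_forecast_time(unix_ts: int, fmt: str = "%m/%d") -> str:
    """Convert a Unix timestamp (seconds) to a short date string,
    mirroring what the Jinja timestamp_custom('%m/%d') filter does.
    Uses UTC; Home Assistant applies the configured local time zone."""
    return datetime.fromtimestamp(unix_ts, tz=timezone.utc).strftime(fmt)

# e.g. a 10-digit value a forecast "time" sensor might report
print(format_forecast_time(1700000000))   # -> 11/14
```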
Since it is possible to get high quality streaming video via a regular TV antenna, why can't we also get broadband internet that way? I just bought an over-the-air antenna for my TV, and I am getting very clear and high quality video for at least 20 channels. It occurred to me that if the over-the-air communication mechanism is capable of delivering that much data (enough for streaming high quality video), why can't we also get broadband internet connections broadcast via the same mechanism (over the air)? I would think this could be an alternative to satellite internet in rural areas, and could remove the need for brand new infrastructure in many places around the world that don't have broadband internet.
https://en.m.wikipedia.org/wiki/Wireless_Internet_service_provider
Because the Internet is bidirectional: data would have to flow both ways, requiring a strong transmitter. Not only that, but every internet user is served different data, unlike what happens in the examples you posted. Sparse, high-power Internet broadcasting towers are thus not possible. Instead we have dense cellular networks for mobile Internet and calls, among other high-bandwidth bidirectional communications.
This sort of thing has already sort-of been done, where an existing low-speed internet connection (DSL, phone line, whatever) is 'supplemented' by adding a high-speed satellite downlink. Your requests for content go out on the low-speed link and the actual content comes back over the satellite link. Latency is a bit high though, so it's not much good for gaming...
Actually, there have been several efforts in industry and government to work towards systems like this. However, there have not been any mass deployments to date. The IEEE standards include 802.11af and 802.22. The process for making unlicensed spectrum available is under study by the FCC in the United States. See https://en.wikipedia.org/wiki/Super_Wi-Fi.
Making spectrum available would still not permit a large number of users requiring unique data to each enjoy anything near the bandwidth possible when the same limited number of channels is broadcast to all users. For unique data, you need to "rent" a unique frequency/time/code slice of a license, and most readily purchasable plans for doing so provide limited bandwidth to each user.
Yes. Most proposals for internet access using this spectrum have been for rural areas where population density is low.
A long time ago, I worked at a company which developed silicon for receivers of internet transmitted from TV stations. One day, after explaining the idea to a friend, I got the following response: "So, is the ISP going to broadcast eBay?" A TV station can broadcast 20 high quality signals; they reach tens of thousands of (rural) subscribers, and each subscriber receives the same thing. With internet, each of the subscribers will want different content. In addition, there still needs to be some kind of back-channel to send requests.
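The shared-versus-unique distinction the answers make can be put into rough numbers. A single ATSC broadcast channel carries roughly 19.4 Mbit/s of payload; broadcast to everyone it delivers that full rate to each viewer simultaneously, but carved up into per-user unique streams it collapses. A back-of-the-envelope sketch (the subscriber count is an assumed round number for illustration):

```python
# Back-of-the-envelope: why one broadcast tower can't serve as an ISP.
CHANNEL_MBPS = 19.4        # approximate payload rate of one ATSC TV channel
SUBSCRIBERS = 10_000       # assumed number of users in range of the tower

# Broadcast TV: every viewer receives the same stream at once,
# so each one effectively enjoys the full channel rate.
per_viewer_broadcast = CHANNEL_MBPS

# Internet access: every user wants *different* data, so the channel
# capacity must be time/frequency-shared among all of them.
per_user_unique = CHANNEL_MBPS / SUBSCRIBERS

print(f"broadcast TV:   {per_viewer_broadcast} Mbit/s to each viewer")
print(f"unique streams: {per_user_unique * 1000:.2f} kbit/s per user")
```

With 10,000 users sharing the channel for unique data, each gets under 2 kbit/s, far slower than dial-up, which is exactly the "broadcast eBay" objection.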
Insert into statement with an apostrophe using VBA? I have a form with textboxes. I am inserting what the user enters into the textbox into a table. If the user enters an apostrophe in the textbox labeled "Me.ProjectName", I get an error. My code is:

CurrentDb.Execute "INSERT INTO Table1(ProjectNumber, Title) " & _
    " VALUES('" & ProjectNumber & "','" & Me.ProjectName & "')"

Possible duplicate of How to deal with single quote in Word VBA SQL query?
In general, to avoid many errors when concatenating in SQL, you can use the function here: CSql.
You should escape your strings possibly containing quotes by replacing a quote with 2 quotes:

Dim SQL As String
SQL = "INSERT INTO Table1(ProjectNumber, Title) " & _
    " VALUES('" & ProjectNumber & "','" & Replace(Me.ProjectName, "'", "''") & "')"
CurrentDb.Execute SQL

Perfect. Thank you!
@MekenzieBuhr - If you decide to use this approach, please take a moment to Google "dynamic SQL" and "parameterized query" to learn why you're still doing it wrong.
@GordThompson I only see your comment now because I got a +1 (6 years!). SQL injection is mostly not a thing in Access. You are writing code for yourself only, or at most a handful of people who are your close colleagues and certainly not hackers. Most of the time, the code is not even compiled, and everyone has access to it, like all underlying tables. Your remark is valid, but not as important in the context of MS Access.
You should not construct and execute dynamic SQL based on user input. You should use a parameterized query, something like:

Dim cdb As DAO.Database
Set cdb = CurrentDb
Dim qdf As DAO.QueryDef
Set qdf = cdb.CreateQueryDef("", _
    "INSERT INTO Table1 (ProjectNumber, Title) VALUES (@prjnum, @title)")
qdf.Parameters("@prjnum").Value = ProjectNumber
qdf.Parameters("@title").Value = Me.ProjectName
qdf.Execute

Thank you @gord-thompson. I'm having trouble running this when I have a long text (over 255 characters). How do I fix this?
Check the column definition by opening the table in Design View. Short Text (sometimes just called Text) columns have a maximum length defined, which can be no more than 255 characters. Long Text (sometimes called Memo) columns are practically unlimited in size.
The table is already set to Long Text in that particular field. I've had this happen before and I've never known how to fix it. Didn't know if you ever ran into this before.
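The principle behind the parameterized-query answer, passing user input as parameters instead of concatenating it into the SQL text, is the same across database APIs. Here is the idea in Python's built-in sqlite3 module (an illustration of the general technique, not Access/DAO itself; the table and values mirror the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Table1 (ProjectNumber TEXT, Title TEXT)")

project_number = "P-001"
project_name = "O'Brien's Project"   # apostrophes need no manual escaping here

# '?' placeholders let the driver handle quoting, so apostrophes
# (and injection attempts) in user input are passed through safely.
conn.execute(
    "INSERT INTO Table1 (ProjectNumber, Title) VALUES (?, ?)",
    (project_number, project_name),
)

row = conn.execute("SELECT Title FROM Table1").fetchone()
print(row[0])   # -> O'Brien's Project
```

The driver treats the values as data, never as SQL text, which is why neither the doubled-quote trick nor any other escaping is needed.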
Integrate Castellan for Key Management

Castellan is a key manager interface library that is intended to be usable with multiple back ends, including Barbican. The Castellan code is based on the basic key manager interface that resides in Nova and Cinder. Now that the key manager interface lives in a separate library, the key manager code can be removed from Nova and Cinder, and Castellan can be used as the key manager interface instead. As encryption features in OpenStack projects are becoming more common, the projects typically need a way to interface with a key manager. Different deployers may have different requirements for key managers, so the key manager interface must also be configurable to have different back ends. The Castellan key manager interface was based off the key manager interfaces found in Cinder and Nova. Now that the shared key manager interface lives in a separate library, the original key manager interface embedded in Nova can be removed and Castellan used instead. Castellan supports existing features such as ephemeral storage encryption and volume encryption. Castellan by default pulls configuration options from a Castellan-specific configuration file in /etc/castellan, but can also take in configuration options if passed in directly. The configuration options for the key manager can still be specified in nova.conf, and passed along to Castellan. The old key manager interface code and back end implementations in nova/keymgr and tests in nova/tests/unit/keymgr can be removed. Any place in the Nova code where the key manager interface was called will be replaced by calls to Castellan instead. Castellan does not include ConfKeyManager, an insecure fixed-key key manager that reads the key from the configuration file. The implementation for ConfKeyManager will remain in Nova as the Nova community agrees that it provides a valuable test fixture. Castellan was integrated into Nova, but ConfKeyManager still remains in the Nova source code.
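For intuition, a ConfKeyManager-style fixture boils down to very little code: a key manager that hands back one fixed key taken from configuration, no matter which key ID is requested. The following is a hypothetical Python sketch of that idea, not Nova's actual implementation; the class and method names are illustrative. It makes plain why such a back end is test-only: every "managed" key is the same, publicly known value.

```python
import binascii

class FixedKeyManager:
    """Toy illustration of a ConfKeyManager-style test fixture:
    one fixed key read from configuration, returned for every key ID.
    Insecure by design; suitable only for tests."""

    def __init__(self, fixed_key_hex: str):
        # In the real fixture the hex key comes from the config file.
        self._key = binascii.unhexlify(fixed_key_hex)

    def create_key(self, context, algorithm="AES", length=256):
        # A real back end would generate a fresh key and return its ID;
        # here every "created" key is just the one fixed key.
        return "fixed-key-id"

    def get(self, context, key_id):
        # Whatever ID is asked for, the same fixed key comes back.
        return self._key

km = FixedKeyManager("00" * 32)          # 256-bit key from "configuration"
key_id = km.create_key(context=None)
assert km.get(None, key_id) == b"\x00" * 32   # always the same known key
```

This is enough to exercise volume-encryption code paths in tests, but it cannot model per-object secrets or certificates, which is exactly the limitation the options below try to address.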
There are a few options for improving the integration. The goals in determining a path forward are the following:
Keep Castellan a key manager interface for production-ready back ends
Deprecate class-based loading
Find a back end to serve as a test fixture for encryption features
However, class-based loading is a Castellan feature, and so the spec for deprecating class-based loading should live in the Castellan/Barbican specs. The following are possible alternatives, each of which solves one or more of the goals:
Remove and replace ConfKeyManager
One strategy for a path forward is to deprecate and remove ConfKeyManager and find an alternative back end suitable for testing. The ConfKeyManager back end reads a single, fixed key from a configuration file. It does not live in Castellan because ConfKeyManager is very insecure and is only suitable for testing. It is only useful for basic testing of encryption features using one key, such as Cinder volume encryption. If any administrators decided to use ConfKeyManager in their production deployment, they will be able to store the fixed key in the new back end as part of the migration necessary after deprecation. Other security features such as Glance image signing and verification use certificates and cannot be tested with ConfKeyManager. A back end closer to what is used in production would provide better testing. The following are options for replacing ConfKeyManager:
Option 1: KMIP Castellan back end
The Key Manager Interoperability Protocol (KMIP) is a standardized protocol for interacting with a key manager. The PyKMIP library includes not only client code necessary for interacting with a KMIP hardware device but also a KMIP software server with Keystone authentication that is useful for functional testing where a hardware device is not an option. Work on a KMIP Castellan back end has already started, but would need to be completed for this option. The PyKMIP software server is already used in the Barbican functional gate.
New DevStack gate checks could be configured to use the PyKMIP server for the encryption Tempest tests, or the existing ones could be modified. This option satisfies all three of the goals listed above.
Option 2: Barbican Castellan back end
A Barbican back end already exists for Castellan. This option entails editing DevStack gate jobs and/or DevStack itself to configure and launch Barbican. This option is beneficial because it would test encryption features as they should be used in production, as Barbican is the recommended back end. However, just 2% of production deployments use Barbican, so it may not make sense to include it in all of the gates. This option would satisfy all three of the goals listed above.
Option 3: New database back end
This option is to create a new Castellan test fixture back end that can store multiple objects in a database. While this option will not provide a deployment-ready back end, it will be better than ConfKeyManager and will be able to support functional testing of features such as signed image verification that need to retrieve certificates. This is an improvement over using ConfKeyManager because it will allow the key manager testing code to be closer to what a deployment configuration would look like. However, this back end does not exist yet and would require work to implement the database interactions; Option 1 or Option 2 would require less Castellan development work. Once completed, this option would satisfy two of the three goals.
Move ConfKeyManager elsewhere
The community has expressed concern about ConfKeyManager living in the Nova code base, but moving ConfKeyManager into Castellan is not preferred. The following are options for the case that ConfKeyManager cannot be deprecated:
Option 4: Move ConfKeyManager to Tempest
The Tempest tests are the only place where ConfKeyManager should be used, so the back end could be moved to Tempest.
As long as Castellan provides an option to register back ends if class-based loading is deprecated, this option could satisfy all three of the goals above.

Option 5: Move ConfKeyManager to Castellan

This is not a recommended option. The ConfKeyManager does not support testing of features such as signed image verification, which uses certificates, not keys. Moving ConfKeyManager to Castellan will push the problem of not having an adequate testing back end down the road.

Revert the Castellan integration patch

Option 6: Revert to nova/keymgr

This is not a recommended option. The key manager interface will be left as it is in nova/keymgr, but this means that Nova's key manager will not benefit from the updates, new features, and future additional back ends available in Castellan. The key manager interface will not be unified across Nova, as the volume encryption feature and encrypted ephemeral storage feature will use nova/keymgr, but the image signature verification feature already uses Castellan.

Data model impact

REST API impact

Castellan behaves very similarly to the current Nova key manager. Castellan has added improvements and bug fixes beyond what is currently in the Nova and Cinder key managers, making it more secure. The fixed-key key manager found in Nova and Cinder is insecure for deployments, but it is useful for testing. Castellan doesn't include the fixed-key key manager, so the ConfKeyManager will remain in Nova.

Other end user impact

Other deployer impact

The deployer should be made aware of a change in the default key manager back end. The current default back end in Nova is a fixed key, but Castellan uses Barbican as the default. This means the deployer should ensure Barbican is running and the fixed key added to Barbican so it can continue to be used. The options in the Nova configuration file for disk encryption will change. The option group 'keymgr' will be spelled out to 'key_manager'.
The key manager option group will still have an option 'api_class' to specify the desired back end, but an option to specify the fixed key will no longer be available. In the 'barbican' option group, a few new options will be available to increase the robustness of the back end, such as the number of times to check if a key has been successfully created. To maintain backwards compatibility, the old options will still be listed as deprecated options. Standard deprecation policy will be followed, and these old options should be removed in the next release cycle.

Nova developers should not be impacted by this change. If developers find more uses for a key manager, Castellan should be just as easy to use as the current Nova key manager interface.

- Primary assignee: Kaitlin Farr <firstname.lastname@example.org> kfarr on IRC
- Other contributors:

Replace calls to Nova's key manager with calls to Castellan. Remove Nova key manager code.

This change depends on Castellan, version >= 0.2.0. Castellan is already in OpenStack's global requirements.

This change can be unit tested using a simple in-memory back end. As actual deployments should be using Barbican, this feature should be tested using a Barbican back end, too.

These changes will be documented. Nova documentation for disk encryption will be updated to reference Castellan.

- Castellan source code
- Castellan in OpenStack's global requirements
- Current Nova key manager implementation
- April 2016 OpenStack User Survey
- Disk encryption configuration reference
- PyKMIP source code
- KMIP backend for Castellan
- Glance image signing and verification specification
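To illustrate the option-group rename described above, here are hedged before/after nova.conf fragments. The exact class paths shown are illustrative assumptions, not values from this spec; check the release notes for the real defaults.

```ini
# Before (deprecated): Nova's built-in key manager under the 'keymgr' group
[keymgr]
api_class = nova.keymgr.conf_key_mgr.ConfKeyManager

# After: the Castellan-backed interface under the spelled-out 'key_manager'
# group; Barbican is the default back end, and the fixed-key option is gone
[key_manager]
api_class = castellan.key_manager.barbican_key_manager.BarbicanKeyManager
```

Per the spec, the old 'keymgr' options remain recognized as deprecated aliases for one release cycle.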
import cv2
import dlib
from PIL import Image as Img
from fastai.vision import *

stream = cv2.VideoCapture('video.mp4')
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('shape_predictor_68.dat')

champion = ''
crop_width = 100
incremental = 100
simple_crop = True
faces_dirname = 'images/'
classes = ['Ahri', 'Darius', 'Draven', 'Graves', 'Katarina', 'Leona', 'Zyra']

learn = load_learner(path='./', file='trained_model.pkl')
print('Model loaded')
model = learn.model                # the fastai model
mean_std_stats = learn.data.stats  # the input means/standard deviations
class_names = learn.data.classes   # the class names

while True:
    _, frame = stream.read()  # VideoCapture.read() returns (ok, frame)

    # dlib's face detector needs a grayscale image
    img_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    rects = detector(img_gray, 0)
    img = Img.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    # print("Faces detected: %d" % (len(rects)))

    # loop through detected faces
    for rect in rects:
        shades_width = rect.right() - rect.left()
        shades_height = rect.bottom() - rect.top()

        # crop faces found
        if shades_width >= crop_width and shades_height >= crop_width:
            image_to_crop = img
            if simple_crop:
                crop_area = (rect.left(), rect.top(), rect.right(), rect.bottom())
            else:
                # expand the crop to the largest centered square that
                # still fits inside the frame
                size_array = [rect.top(),
                              image_to_crop.height - rect.bottom(),
                              rect.left(),
                              image_to_crop.width - rect.right()]
                size_array.sort()
                short_side = size_array[0]
                crop_area = (rect.left() - short_side, rect.top() - short_side,
                             rect.right() + short_side, rect.bottom() + short_side)

            # save cropped face to an image locally
            cropped_image = image_to_crop.crop(crop_area)
            cropped_image.thumbnail((crop_width, crop_width))
            cropped_name = faces_dirname + str(incremental) + ".jpg"
            cropped_image.save(cropped_name, "JPEG")

            # run prediction
            predicted_classes, y, probs = learn.predict(open_image(cropped_name))
            champion = str(predicted_classes)
            print(f'Found {predicted_classes} ({round(probs[y].numpy() * 100, 2)}%)')
            incremental += 1

    # overlay the predicted champion name on the video
    font = cv2.FONT_HERSHEY_SIMPLEX
    cv2.putText(frame, champion, (50, 50), font, 1, (0, 255, 255), 2, cv2.LINE_4)
    cv2.imshow("League of Faces", frame)

    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

cv2.destroyAllWindows()
Pierre Hardy contrast print satchel bag Mustard, pink and black leather and suede contrast print satchel bag from Pierre Hardy.

Pierre Hardy 'Alpha' crossbody bag Black calf suede 'Alpha' crossbody bag from Pierre Hardy.

Pierre Hardy 'Alpha' crossbody bag Multicoloured leather 'Alpha' crossbody bag from Pierre Hardy.

Pierre Hardy 'Alpha' fringed crossbody bag Black and champagne calf suede 'Alpha' fringed crossbody bag from Pierre Hardy featuring fringed edges, metallic embellishments, a back slip pocket, an adjustable shoulder strap, a foldover top, a strap closure, an internal slip pocket and an embossed internal logo stamp.

Pierre Hardy Leather Satchel Tan textured leather Pierre Hardy satchel with gold-tone hardware, dual flat handles, single shoulder strap, black leather lining, dual interior pockets; one with zip closure and two-way zip closure at top. Includes dust bag. Shop Pierre Hardy consignment handbags at The RealReal.

Pierre Hardy Suede Mini Satchel Brown and black leopard print suede Pierre Hardy satchel with silver-tone hardware, black suede trim, single shoulder strap with chain-link accent, single flat top handle, dual slit pockets at exterior, black leather interior, single slit pocket at interior wall and flap with magnetic closure at front. Shop authentic designer handbags by Pierre Hardy at The RealReal.

Pierre Hardy Suede & Leather Satchel Red suede Pierre Hardy satchel with gold-tone hardware, optional leather flat shoulder strap, dual top handles, tonal suede panels at centers, black leather trim, black leather lining, dual pockets at interior wall; one with zip closure and two-way zip closure at top. Shop authentic designer handbags by Pierre Hardy at The RealReal.
Pierre Hardy Alpha Plus Colorblock Leather Satchel Bag

Alpha Plus Colorblock Leather Satchel Bag crafted in grained leather takes the signature Alpha clutch and gives it an elegant update, adding an edgy refinement that will take you through day and evening with more versatility than the original. Featuring belted front slot pocket, flap top with hidden magnetic snap, single top handle, detachable chain handle and leather shoulder strap, internal slip pocket with logo mirror and brushed gold-tone hardware. Signature dust bag included.

Warranty: 2 Years
Material: Grained Calf Leather
Width: 10.24" / 26 cm
Height: 5.91" / 15 cm
Depth: 4.72" / 12 cm
Handles: Leather handle + shoulder straps
Handle Drop: 2.76" / 7 cm
Shoulder Strap: 43.3" / 110 cm
Hardware Material: Goldtone metal
Lining: Black Leather
Pockets: 1 compartment + 2 pockets
Closure: Flap & push button lock
Color: Brown
Weight: 1080 g

Pierre Hardy Handbags

PIERRE HARDY Handbags. Mini, solid color, detachable application, magnetic closure, internal zip pocket, adjustable shoulder strap, leather lining, contains non-textile parts of animal origin, satchel. Soft Leather
Multi variable calculus one-to-one and onto restriction

Let $f:\mathbb{R}^2 \to \mathbb{R}$ be given by $f(x,y) = 3x^2+2y^2-5$

a) What is the domain and range of $f$?
b) Restrict the domain so that $f$ is one-to-one on the new domain.
c) Restrict the codomain so that $f$ is onto the new codomain.

My answers

a) Since there are no restrictions, the domain of $f$ is $\{(x,y)\in\mathbb{R}^2\}$. Range is $\{z\in\mathbb{R}\;|\;z\ge 5\}$.

So b and c I'm a little confused about.

b) For b I'm guessing that you just restrict it to $x>0$, but my friend is also saying you have to restrict $y$ so it's $y>0$ or $y<0$. I'm not entirely sure which works, but if I think about it 2D-wise, it's basically a parabola, and if you restrict $x>0$, doesn't that automatically make it one-to-one since it will be continuously increasing?

c) For c I'm assuming you just have to make the codomain equal to the range, so you restrict it to $\{z\in\mathbb{R}\;|\;z\ge 5\}$.

@AntoinedePaladin Ahh, that makes sense. Then it would be $x>0$ and $y>0$ to prevent that from happening.

Wow.... Not the kind of phrasing I'd expect from a mathematician.

For b) and c) you can use $\{(0,0)\}$ and $\left\{f(x,y)\colon (x,y) \in \mathbb{R}^2\right\}$, respectively.

@GitGud For c) Wouldn't one answer, like I mentioned, be that I need to restrict the codomain equal to the range to make it onto? If I do that then wouldn't each value get mapped at least once? Or am I thinking about it wrong?

You are correct about (c).... except you are mistaken about what the range is. For instance, $f(0,0)$ is not in your proffered range.

For b), it might help to realize that the codomain is one-dimensional, while the domain is two-dimensional. While it is possible for a function to match 2 dimensions to 1 injectively, such functions do not have simple expressions like this. So in this case, to have a 1-1 match between domain and codomain, you are going to need a 1-dimensional domain.
@HelloMellow You are correct, that is exactly what I did: $\left\{f(x,y)\colon (x,y) \in \mathbb{R}^2\right\}$.
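To make the correction about the range concrete, it can be pinned down in two lines:

```latex
f(x,y) = 3x^2 + 2y^2 - 5 \;\ge\; 3\cdot 0 + 2\cdot 0 - 5 = -5,
\qquad f(0,0) = -5,
```

and along the x-axis $f(t,0) = 3t^2 - 5$ takes every value in $[-5,\infty)$, so the range is $\{z\in\mathbb{R}\;|\;z\ge -5\}$, not $\{z\in\mathbb{R}\;|\;z\ge 5\}$.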
Volexity did not share the names of the hacking groups exploiting this Exchange vulnerability, and did not return a request for comment for additional details. The DOD source described the hacking groups as "all the big players," also declining to name groups or countries.

The Microsoft Exchange vulnerability

These state-sponsored hacking groups are exploiting a vulnerability in Microsoft Exchange email servers that Microsoft patched last month, in the February 2020 Patch Tuesday. The vulnerability is tracked under the identifier CVE-2020-0688. Below is a summary of the vulnerability's technical details:

- During installation, Microsoft Exchange servers fail to create a unique cryptographic key for the Exchange control panel. This means that all Microsoft Exchange email servers released during the past 10+ years use identical cryptographic keys (validationKey and decryptionKey) for their control panel's backend.
- Attackers can send malformed requests to the Exchange control panel containing malicious serialized data. Since hackers know the control panel's encryption keys, they can ensure the serialized data is deserialized, which results in malicious code running on the Exchange server's backend.
- The malicious code runs with SYSTEM privileges, giving attackers full control of the server.

Microsoft released patches for this bug on February 11, when it also warned sysadmins to install the fixes as soon as possible, anticipating future attacks. Nothing happened for almost two weeks. Things escalated towards the end of the month, though, when the Zero-Day Initiative, which reported the bug to Microsoft, published a technical report detailing the bug and how it worked. The report served as a roadmap for security researchers, who used the information contained within to craft proof-of-concept exploits so they could test their own servers, create detection rules, and prepare mitigations. At least three of these proof-of-concepts found their way onto GitHub [1, 2, 3].
A Metasploit module soon followed. Just like in many other cases before, once technical details and proof-of-concept code became public, hackers also began paying attention. On February 26, a day after the Zero-Day Initiative report went live, hacker groups began scanning the internet for Exchange servers, compiling lists of vulnerable servers they could target at a later date. First scans of this type were detected by threat intel firm Bad Packets. The first ones to weaponize this bug were APTs -- "advanced persistent threats," a term often used to describe state-sponsored hacker groups. However, other groups are also expected to follow suit. Security researchers to whom ZDNet spoke earlier today said they anticipate that the bug will become very popular with ransomware gangs who regularly target enterprise networks.

Weaponizing older, useless phished credentials

This Exchange vulnerability is not, however, straightforward to exploit. Security experts don't see this bug being abused by script kiddies (a term used to describe low-level, unskilled hackers). To exploit the CVE-2020-0688 Exchange bug, hackers need the credentials for an email account on the Exchange server -- something that script kiddies don't usually have. The CVE-2020-0688 security flaw is a so-called post-authentication bug. Hackers first need to log in and then run the malicious payload that hijacks the victim's email server. But while this limitation will keep script kiddies away, it will not stop APTs and ransomware gangs, experts said. APTs and ransomware gangs often spend most of their time launching phishing campaigns, following which they obtain email credentials for a company's employees. If an organization enforces two-factor authentication (2FA) for email accounts, those credentials are essentially useless, as hackers can't bypass 2FA. The CVE-2020-0688 bug lets APTs finally find a purpose for those older 2FA-protected accounts that they've phished months or years before.
They can use any of those older credentials as part of the CVE-2020-0688 exploit without needing to bypass 2FA, but still take over the victim's Exchange server. Organizations that have "APTs" or "ransomware" on their threat matrix are advised to update their Exchange email servers with the February 2020 security updates as soon as possible. All Microsoft Exchange servers are considered vulnerable, even versions that have gone end-of-life (EoL). For EoL versions, organizations should look into updating to a newer Exchange version. If updating the Exchange server is not an option, companies are advised to force a password reset for all Exchange accounts. Taking over email servers is the Holy Grail of APT attacks, as this allows nation-state groups to intercept and read a company's email communications. APTs have targeted Exchange servers before. Past APTs that have hacked Exchange include Turla (a Russian-linked group) and APT33 (an Iranian group). This blog post from TrustedSec contains instructions on how to detect if an Exchange server has been already hacked via this bug.
What are the Minimum Permissions to Create an MSSQL Database and Take Ownership of it?

I would like a less privileged user (KINGDOM\joker) to be able to create, manage, and drop databases on an MSSQL 2017 server [14.0.2027.2 (X64)]. KINGDOM\joker should only be able to affect the databases that they create, and should not be able to drop, restore, or take ownership of other databases. I granted KINGDOM\joker the CREATE DATABASE and MSSQL-specific CREATE ANY DATABASE permissions. Using SQL Server Management Studio (v18, v19), KINGDOM\joker can create a new database [testDB], but the dbo in [testDB] is 'sa' and KINGDOM\joker cannot take ownership, despite KINGDOM\joker being the recorded owner in the master database.

USE [testDB]
GO
SELECT name, sid, SUSER_SNAME(sid) AS login
FROM sys.database_principals
WHERE name = 'dbo';

name  sid   login
dbo   0x01  sa

USE [master]
GO
SELECT SUSER_SNAME(owner_sid) AS login
FROM sys.databases
WHERE name = 'testDB';

login
KINGDOM\joker

ALTER AUTHORIZATION ON DATABASE::testDB to "KINGDOM\joker";

This fails with permission denied. As I understand it, [testDB] is created from [model], and the dbo in [model] is 'sa'. I expected the dbo in [testDB] to be changed to KINGDOM\joker by the server when it creates [testDB] from [model]. It seems to have once worked that way, but MS changed the behavior with SQL Server 2016, and the MS community post that explained this change is now an invalid link.

Is there some MSSQL Server option or setting, or some new MS-specific permission, that will allow the owner_sid in sys.databases to ALTER or IMPERSONATE the dbo in [testDB]? OR Any other work-around or method to accomplish the objective described in the first paragraph?

BTW, I have considered adding KINGDOM\joker as a user in [model] and assigning the db_owner role, but that would affect every new database.

I would expect KINGDOM\joker to be the database owner after they created the database, not sa or any other principal. Was the database owner changed after creation?
It was not. Apparently KINGDOM\joker was mapped into the [model] DB by error, and so connected to all new DBs as KINGDOM\joker and not as dbo. Only the CREATE DATABASE permission is required to create. Now I just need to figure out how to let joker remove the backup and restore history when dropping the DB. When deleting the database through SSMS with "Delete backup and restore history" checked, KINGDOM\joker receives a permission error: "The EXECUTE permission was denied on the object 'sp_delete_database_backuphistory', database msdb". I suppose I could give KINGDOM\joker permission on that object, but then would joker be able to delete the backup history of any DB? All is well.

This query is just misleading:

SELECT name, sid, SUSER_SNAME(sid) AS login
FROM sys.database_principals
WHERE name = 'dbo';

DBO always has a SID of 0x01 regardless of who owns the database. The database owner is the login whose SID is in owner_sid in master.sys.databases. Think of it this way: SA always maps to DBO, but all sysadmins and the database owner always connect as DBO. In any case, when KINGDOM\joker connects to the database, he will be connected as DBO since it's his database. E.g., after:

create login [Office\joker] from windows
grant create database to [Office\joker]

Thanks. I should have tested better with a new user. I discovered that KINGDOM\joker had been mapped into the [model] DB (with public role). This may have been a mistake by someone who thought that joker should have had access to our modeling data when joker was first added as a user (note: we do not have a modeling DB). So whenever joker connected to [testDB] or any new DB, it was as joker and not dbo.

Can you produce a repro on this?

I agree with Dan's comment that the owner has likely been changed since creation of the database. If you look at my repro below, you will see that dbo should be the sid that owns the database.
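Since dbo's in-database SID is always 0x01, a query that shows the actual owner has to go through owner_sid in the server-level catalog. A sketch of one way to do this, joining owner_sid back to the server principals (standard catalog views, but verify the results on your own instance):

```sql
-- List each database with its real owning login, resolved from
-- owner_sid, instead of the dbo user's constant 0x01 SID.
SELECT d.name AS database_name,
       sp.name AS owner_login
FROM master.sys.databases AS d
LEFT JOIN master.sys.server_principals AS sp
       ON sp.sid = d.owner_sid;
```

For [testDB] above, this would show KINGDOM\joker as owner_login even while the dbo row inside the database still reports SID 0x01.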
I.e., if you see something different, then we would have to determine what is different in how we do things (or the owner was changed, perhaps by a nightly job applying some best-practices thinking).

USE master
GO
CREATE LOGIN [ww\Pelle] FROM WINDOWS
GO
GRANT CREATE DATABASE TO [ww\Pelle]
GO

From a command prompt:

runas /user:pelle cmd.exe

From the command prompt created above:

1> create database pelle1
2> go
1> use pelle1
2> go
Changed database context to 'pelle1'.
1> select user_name()
2> go
dbo

From SSMS:

USE pelle1
SELECT p.name, p.sid
FROM sys.database_principals AS p
WHERE name = 'dbo'

name  sid
dbo   0x010500000000000515000000840152F3AD7526D67071FE26ED030000

Thanks. As noted above, I discovered that KINGDOM\joker had been mapped as a user into the [model] DB some time prior. This meant that KINGDOM\joker was connecting to every new database as KINGDOM\joker and never as dbo. I believe that whoever added joker to [model] may have thought that the [model] DB was something other than an important system DB. One of joker's primary roles in the company is computer modeling... This would normally be a bad security fault, but it got caught.
How to calculate the evaporative cooling rate needed to protect a house from forest fire

Recently in our area there has been a large forest fire and I've been looking into home defense from such things. I am not a physicist, but can do some basic math. I was wondering how I could calculate whether an 'evaporative mister' type system could be used to reduce the ambient air temperature enough to stop the house catching fire (I don't care about cosmetic damage). The goal would probably be to reduce the temperature of the air/surfaces of the house by approx 1000F to keep them below combustible temperatures. The area has very low humidity, between 4% and maybe 15%, during wildfire season. How can I calculate how much water/mist I need to put into the air to reduce the temperature below 400F? Very rough, simple equations are fine; I know it's not an exact science when dealing with wildfires. I found this formula on wiki but I don't know how to adapt it to calculate the water needed for a given temperature drop:

TLA = TDB - ((TDB - TWB) x E)

TLA = Leaving Air Temp
TDB = Dry Bulb Temp
TWB = Wet Bulb Temp
E = Efficiency of the evaporative media

Anything I forgot or am missing, I'd appreciate knowing. Some restrictions/thoughts I had:

- The roof would need to stay generally wet, a light continuous layer of water
- Misting/irrigation could not use more than 12 gallons per minute (max well output)
- It would be nice to also use the misting system for outdoor aircon in the off time (i.e., at 20% capacity)
- Windows/glass would need some IR shielding to stop ignition of furniture inside the house

Metric calc is fine too; in fact I'd prefer it.

I think winds would wipe out any mist you produce

@soandos You may be right. I've expanded my thoughts on this a little to use the misting system as 99% aircon and maybe emergency use. It should still remove some heat from the air

If possible (and legally allowed), you could try to create a forest-free zone around your house.
In dense city areas, there would be a small 1 meter wide 'fire road' every X houses to prevent fire from spreading to the whole block. By removing combustible materials in a wide area around your house, you can prevent fire from getting close by. Also, instead of creating a mist around your house and keeping your house wet, you could use sprinklers to keep the ground around your house wet. If your surroundings consist of low foliage, like grass and bushes, garden sprinklers will suffice. If you are however surrounded by trees, sprinkling them won't suffice and you even risk the chance of having a tree drop on your property, igniting something else.

Instead of temperature drop, we have to consider the amount of heat transferred to the building from the wildfire. The temperature of the structures will rise towards the ignition point depending on the temperature and closeness of the heat source. Cooling can then slow down the heating or, in the best case, stop it completely. The heat transfer is a complicated thing to calculate for real, especially in this kind of environment where winds are probably turbulent and heat is transferred in many forms. Luckily there has been research on the subject and we can use those results for estimating the needed cooling. From the practical point of fire safety, one of the most important things seems to be the distance of the closest fire front from the house. There is an article on this subject, "Reducing the Wildland Fire Threat to Homes: Where and How Much?" by Jack Cohen, nicely summarized in [http://www.saveamericasforests.org/congress/Fire/Cohen.htm]. The article contains a graph of radiant heat flux as a function of distance from the wildfire front as well as wood ignition times as a function of the distance.
Knowing the radiant heat flux and the energy needed for vaporization of water, it is possible to derive an equation for the cooling effect of the water:

$q = \frac{m H_{vap}}{A}$

where
q is radiant heat flux [kW/m^2]
m is amount of water used per second [kg/s]
H_vap is heat (or enthalpy) of vaporization of water [kJ/kg]
A is area of the walls and roof of the house [m^2]

As an example, let's consider a house with an outer surface area of 500 m^2. For water, the heat of vaporization is 2257 kJ/kg. The amount of water we can spend is 12 gallons per minute, that is, 0.76 liters per second. From this we can work out that the maximum cooling effect produced by the cooling system (all water vaporized instantly) would be:

$q = \frac{m H_{vap}}{A} = \frac{(0.76\: \mathrm{kg/s})(2257\: \mathrm{kJ/kg})}{500\: \mathrm{m^2}} = 3.43\: \mathrm{kW/m^2}$

When comparing this to the model of the article, where the heat flux from, for example, 20 meters away is around 45 kW/m^2 and the heat flux from 22 meters away is around 40 kW/m^2, we can say that the cooling would have approximately the same effect as moving the tree line back by two meters. The model is known to overestimate the heat flux, so the actual distances may be smaller, but in any case the cooling amounts to roughly the same small improvement.

Things to consider:
- I assumed that we can't know what side of the building will be closest to the fire or that the house will be surrounded, so all sides need to be cooled.
- Distances given in the graph in the article are for wood. For other materials, the distances will be larger or smaller. Thickness and density of the material also matter.
- According to the article, in a full-blown forest fire the burning happens very fast. If the house can stand the fire for two minutes, it probably won't ignite, as the fire has moved on.
- Clearing the surroundings of the house and using nonflammable materials would be a much more effective way of shielding the house.
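The back-of-the-envelope estimate above is easy to parameterize. A minimal sketch, assuming the same figures as the answer (12 gal/min well output, 2257 kJ/kg latent heat, 500 m^2 envelope; the gallon-to-kg conversion is the standard US value):

```python
H_VAP_KJ_PER_KG = 2257.0  # latent heat of vaporization of water
KG_PER_GALLON = 3.785     # 1 US gallon of water is about 3.785 kg

def cooling_flux_kw_per_m2(gal_per_min: float, area_m2: float) -> float:
    """Maximum radiant-heat flux offset, assuming every drop evaporates."""
    m_dot = gal_per_min * KG_PER_GALLON / 60.0  # water mass flow, kg/s
    return m_dot * H_VAP_KJ_PER_KG / area_m2    # kW/m^2

q = cooling_flux_kw_per_m2(12.0, 500.0)
print(f"{q:.2f} kW/m^2")  # roughly 3.4 kW/m^2, matching the estimate above
```

Against the article's ~40-45 kW/m^2 fluxes at 20-22 m, this makes the "two meters of tree line" comparison immediate: you would need several times the well's output to matter.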
From the link I gave: "Given nonflammable roofs, Stanford Research Institute (Howard and others 1973) found a 95 percent survival with a clearance of 10 to 18 meters and Foote and Gilless (1996) at Berkeley, found 86 percent home survival with a clearance of 10 meters or more." This might of course have an effect on how nice and cozy the yard is.

I suggest you look into fire protection nozzles; one site is http://www.bete.com/applications/fire-water.html. You will need more than a mist system. You need something like a water wall protection system. These use a lot more water than your available source. If you look at the nozzles alone, you see rates of flow of about 17 to 300 gal per minute. I would suggest you contact this company (or other fire protection companies) for more information. Systems I have seen use fire hose type systems, again lots of water. I agree that the best solution is to trim back all flammable materials as far as you can. Also check with your local fire department.

I humbly submit that aside from any evaporative benefits, a simple lawn sprinkler on the roof would help prevent ignition from floating embers from burning trees. I'd think that these embers could float for great distances and readily ignite tinder-dry grasses, foliage, house siding or roofing material, way before combustibility from heat alone.
Doctor of Philosophy (Ph.D.)

Degree Granting Department
Computer Science and Engineering

Xinming Ou, Ph.D.
Lawrence Hall, Ph.D.
Jarred Ligatti, Ph.D.
Nasir Ghani, Ph.D.
Jiyong Jang, Ph.D.

Keywords: Binary Similarity Analysis, Malware Clustering, Malware Detection, Machine Learning

Malware analysis and detection continues to be one of the central battlefields for the cybersecurity industry. In the desktop malware domain, we observed multiple significant ransomware attacks in the past several years; e.g., it was estimated that in 2017 the WannaCry ransomware attack affected more than 200,000 computers across 150 countries, with hundreds of millions of dollars in damages. Similarly, we witnessed the increased impact of Android malware on individuals globally due to the popularity of smartphones and IoT devices worldwide. In this dissertation, we describe similarity-comparison-based novel techniques that can be applied to achieve large scale desktop and Android malware analysis, and the practical implications of machine learning based approaches for malware detection.

First, we propose a generic and effective solution for accurate and efficient binary similarity analysis of desktop malware. Binary similarity analysis is an essential technique for a variety of security analysis tasks, including malware detection and malware clustering. Even though various solutions have been developed, existing binary similarity analysis methods still suffer from limited efficiency, accuracy, and usability. In this work, we propose a novel graphical fuzzy hashing scheme for accurate and efficient binary similarity analysis. We first abstract control flow graphs (CFGs) of binary codes to extract blended n-gram graphical features of the CFGs, and then encode the graphical features into numeric vectors (called graph signatures) to measure similarity by comparing the graph signatures.
We further leverage a fuzzy hashing technique to convert the numeric graph signatures into smaller fixed size fuzzy hash outputs for efficient comparisons. Our comprehensive evaluation demonstrates that our blended n-gram graphical feature based CFG comparison is more effective and efficient compared to existing CFG comparison techniques. Based on our CFG comparison method, we develop BingSim, a binary similarity analysis tool, and show that BingSim outperforms existing binary similarity analysis tools while conducting similarity analysis based malware detection and malware clustering. Second, we identify the challenges faced by overall similarity based Android malware clustering and design a specialized system for solving the problems. Clustering has been well studied for desktop malware analysis as an effective triage method. Conventional similarity-based clustering techniques, however, cannot be immediately applied to Android malware analysis due to the excessive use of third-party libraries in Android application development and the widespread use of repackaging in malware development. We design and implement an Android malware clustering system through iterative mining of malicious payloads and checking whether malware samples share the same version of malicious payloads. Our system utilizes a hierarchical clustering technique and an efficient bit-vector format to represent Android apps. Experimental results demonstrate that our clustering approach achieves precision of 0.90 and recall of 0.75 for the Android Genome malware dataset, and average precision of 0.98 and recall of 0.96 with respect to manually verified ground-truth.
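The n-gram graph-signature idea from the first part of the abstract can be illustrated with a small sketch: walk a control-flow graph, collect n-grams of basic-block labels along edges, and bucket them into a fixed-size numeric vector that two binaries can then be compared on. All names and the bucketing scheme here are illustrative assumptions; the dissertation's actual blended feature extraction and fuzzy hashing differ.

```python
def ngram_signature(cfg, labels, n=3, dim=64):
    """cfg: {node: [successor nodes]}; labels: {node: opcode-class string}.
    Returns a dim-length count vector (a toy 'graph signature')."""
    sig = [0] * dim

    def walk(node, path):
        path = path + (labels[node],)
        if len(path) == n:
            # bucket the completed n-gram into a fixed-size vector
            sig[hash(path) % dim] += 1
            return
        for succ in cfg.get(node, []):
            walk(succ, path)

    for node in cfg:  # start an n-gram at every basic block
        walk(node, ())
    return sig

def cosine(a, b):
    """Similarity between two graph signatures."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0
```

Two binaries whose CFGs share many label n-grams get a cosine score near 1; the fixed vector length is what makes a further fuzzy-hash compression step, as described above, possible.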
Third, we study the fundamental issues faced by traditional machine learning (ML) based Android malware detection systems, and examine the role of ML for Android malware detection in practice, which leads to a revised evaluation strategy that evaluates an ML based malware detection system by checking its zero-day detection capabilities. Existing machine learning based Android malware research obtains the ground truth by consulting AV products, and uses the same label set for training and testing. However, there is a mismatch between how the ML system has been evaluated and the true purpose of using the ML system in practice. The goal of applying ML is not to reproduce or verify the same potentially imperfect knowledge, but rather to produce something that is better: closer to the ultimate ground truth about the apps' maliciousness. Therefore, it is more meaningful to check zero-day detection capabilities than detection accuracy for known malware. This evaluation strategy is aligned with how an ML algorithm can potentially benefit malware detection in practice, by acknowledging that any ML classifier has to be trained on imperfect knowledge, and such knowledge evolves over time. Besides the traditional malware prediction approaches, we also examine the mislabel identification approaches. Through extensive experiments, we demonstrate that: (a) it is feasible to evaluate ML based Android malware detection systems with regard to their zero-day malware detection capabilities; (b) both malware prediction and mislabel identification approaches can be used to achieve verifiable zero-day malware detection, even when trained with an old and noisy ground truth dataset.

Scholar Commons Citation

Li, Yuping, "Similarity Based Large Scale Malware Analysis: Techniques and Implications" (2018). USF Tampa Graduate Theses and Dissertations.
Check out a free preview of the full Intermediate React, v5 course The "useTransition" Lesson is part of the full, Intermediate React, v5 course featured in this preview video. Here's what you'd learn in this lesson: Brian uses the useTransition hook to defer showing a loading state in the UI until all other high-priority rendering is completed. Transcript from the "useTransition" Lesson >> Okay, transition, where we are. We want to say that if I click Submit, it shows them like a little loading spinner where the submit button is and it doesn't allow them to click it multiple times because that makes sense, right? We can use a transition to handle that. This is more useful than deferred value. I can actually see some of us having use cases where useTransition can be fine. The nice thing about useTransition is that it can show you an intermediary state, and then if it needs to be interrupted, it's also interruptible. So it's both low priority and helps us show a good loading state in these intermediary periods, all A plus to me. So what we're gonna do here is we're gonna say, in SearchParams.jsx, we're gonna load a useTransition. Okay, down in our form submit, so down here, I think I forgot to put this in here. Okay, so, here we're gonna say const = useTransition(), and I gotta remember the exact call signature here, cuz I forgot to put it in my notes. So you're gonna have isPending as a Boolean that it gives back to you, and then the second function is startTransition. Pretty sure I'm getting this right. And then it doesn't take anything in there as well. Okay, on my form transition here, or my form function here, what I wanna do is right after here, I'm gonna put call this function called startTransition. This takes in a function where I can say SetRequestParameters(object), so I can actually just move this inside of here, rather. What you're doing is you're identifying to React like, I'm gonna call setState in here, some sort of setState for my hooks. 
Whatever happens inside of here, low priority stuff can be interrupted, that's fine with me, okay? So that's step one. Step two is we're gonna go down to our button, And we're just going to say, hey, if this is pending, Then show me my loading pane. So I'm gonna say div className = "mini loading-pane". h2 className = "loader". And let's put a spinning poodle in here, just for funsies. If it's not loading, then just show me my button, right? So I'm going to move my Submit button into here. Okay, so to show you the desired effect here, I have cat here, and then if I hit Submit, notice when I click Submit, it's gonna spin a poodle for just a second. Didn't even do that for me. So we're identifying that setting the requestParams as the object is the lowest priority transition. That actually requires almost no re-rendering, right? That actually literally requires no re-rendering. So if this is always coming back as not being deferred, yep. So the part that we would have to defer then, we would probably be deferring the pets rather than doing it this way, right? Because that's actually the expensive part of the re-rendering. This in and of itself requires no UI re-rendering. Yep, that makes sense. So this in and of itself, just setting the object here, that is not necessarily the best way to do it. Better way of doing it for my data, going back to this, yeah, we have to interact more with React Query to get the best way that we would want to, right, to show that intermediary state. Yep, Okay, In any case, it would have been hard to proceed anyways. So I have a good note here though. Some of you might be wondering, when do I use useDeferredValue versus when do I use useTransition? So a good way to keep them kinda straight is for useTransition, it's to say, I have something new that I'm about to give you, but it's low priority, so feel free to do this at your own speed. 
Whereas useDeferredValue, it's more after the fact, is like, hey, I just got something that's going to be new, so please re-render this thing that I already have at your leisure, right? So useTransition, you know you're about to go into something that's heavy. With useDeferredValue, it's like, I got something heavy, please do this at your leisure, right? So the key there is startTransition with useTransition, right? That you're identifying to React, this is something new and heavy, please do this at your leisure.
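The pattern Brian builds up verbally can be sketched as a component. The SearchParams name, requestParams state, and loading-pane class names come from the course code; the input fields and spinner content here are placeholder assumptions, not the actual lesson file:

```jsx
import { useState, useTransition } from "react";

function SearchParams() {
  const [requestParams, setRequestParams] = useState({ animal: "", location: "" });
  // isPending is true while the low-priority transition is still rendering
  const [isPending, startTransition] = useTransition();

  function handleSubmit(e) {
    e.preventDefault();
    const formData = new FormData(e.currentTarget);
    const obj = {
      animal: formData.get("animal") ?? "",
      location: formData.get("location") ?? "",
    };
    // Mark this setState as low priority and interruptible
    startTransition(() => {
      setRequestParams(obj);
    });
  }

  return (
    <form onSubmit={handleSubmit}>
      {/* search inputs elided */}
      {isPending ? (
        <div className="mini loading-pane">
          <h2 className="loader">🌀</h2>
        </div>
      ) : (
        <button>Submit</button>
      )}
    </form>
  );
}
```

As noted in the lesson, setting requestParams by itself triggers almost no re-rendering, so isPending may flip back to false almost immediately; the spinner only shows meaningfully when the transition's re-render is expensive.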
M: Free Ways to Get Landing Page Feedback from Real People - alangibson https://blog.loudlist.io/5-ways-to-get-landing-page-feedback/ R: albertgoeswoof > The first step to launching a product is setting up a good landing page Hmm R: skilled Hmm, indeed. This is the only blog post on the site and the landing page looks like a placeholder for the time being. Maybe this is his way of getting free feedback? Hmm! R: alangibson First of many. That placeholder look is thanks to my acute lack of graphic design skills. Getting better one day at a time... > Maybe this is his way of getting free feedback? Close. It's really content marketing for getting signups for LoudList. The world gets a list of resources I genuinely found useful, and not so easy to locate, and I (hopefully) get a mailing list. I've gotten feedback on the landing page at [https://loudlist.io](https://loudlist.io) (rough draft though it may be) already from the resources on this list. That's where I got the idea for the blog post. R: skilled Great, there's a real person behind this! Didn't mean any harm with my comment. It's a nice resource with sites I didn't know about, so I'm keeping them in mind for any future references. For me personally, the signup form doesn't feel aesthetically pleasant on a desktop screen; I think having both div's centerfold is the mustard here. On mobile, it's fine.
Support for IPv6 in MAAS is similar to support for IPv4. A rack controller in an IPv6 context needs to have the region API server URL specified with brackets. You can access the Web UI and the MAAS CLI (logging in to the API server) in the same way on both IPv4 and IPv6. To use an IPv6 address in a URL, surround it with square brackets — for example, on the local machine (::1, the IPv6 equivalent of 127.0.0.1). Note that MAAS can control most BMCs using IPv4 only.

Quick questions you may have:
- What should I know about enabling IPv6?
- How do I configure an IPv6 subnet?
- What should I know about routing?

You enable IPv6 networking in the same way that you enable IPv4 networking: configure a separate rack controller interface for your IPv6 subnet. The IPv6 interface must define a static address range. Provided that you already have a functioning IPv6 network, that’s all there is to it. The following sections explain requirements, supported operations, and what to do if you don’t yet have a functioning IPv6 network. An IPv6 interface can use the same network interface on the rack controller as an existing IPv4 network interface. It just defines a different subnet, with IPv6 addressing. A machine that’s connected to the IPv4 subnet is also connected to the IPv6 subnet on the same network segment.

Configure an IPv6 subnet

Define a reserved static IP range, and machines deployed on the subnet will get a static address in this range. Since IPv6 networks are usually 64 bits wide, you can be generous with the range size. Leave the netmask and broadcast address fields blank. You may want MAAS to manage DHCP and DNS, but it’s not required. Machines do not need a DHCP server at all for IPv6; MAAS configures static IPv6 addresses on a machine’s network interface while deploying it. A DHCPv6 server can provide addresses for containers or virtual machines running on the machines, as well as devices on the network that are not managed by MAAS. The machines themselves do not need DHCPv6.
MAAS will not be aware of any addresses issued by DHCP, and cannot guarantee that they will stay unchanged. In IPv6, clients do not discover routes through DHCP. Routers make themselves known on their networks by sending out router advertisements. These RAs also contain other configuration items:
- Switches that allow clients to configure their unique IP addresses statelessly, based on MAC addresses.
- Switches that enable them to request stateless configuration from a DHCP server.
- Switches that allow them to request a stateful IP address from a DHCP server.

Since a network interface can have any number of IPv6 addresses even on a single subnet, several of these address assignment mechanisms can be combined. However, when MAAS configures IPv6 networking on a machine, it does not rely on RAs. It statically configures a machine’s default IPv6 route to use the router that is set on the cluster interface, so that each machine will know its default gateway. Machines do not need DHCP and will not autoconfigure global addresses. You may be planning to operate DHCPv6 clients as well, for example, on machines not managed by MAAS, or on virtual machines hosted by MAAS machines. If this is the case, you may want to configure RAs, so that those clients obtain configuration over DHCP. If you need RAs but your gateway does not send them, install and configure radvd somewhere on the network to advertise its route.
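To make the bracketed-URL form mentioned at the top of this section concrete, here is a sketch; the example address, the profile name, and the use of MAAS's default port 5240 are assumptions to adapt to your own region controller:

```bash
# Web UI over IPv6 on the local machine (::1):
#   http://[::1]:5240/MAAS/

# CLI login against a region controller at an example IPv6 address:
maas login admin "http://[2001:db8::1]:5240/MAAS/" "$API_KEY"
```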
In this conclusion to a four-part series on building an online bookstore application in Ruby-on-Rails, you'll learn how to edit and delete authors, and more. This article is excerpted from chapter two of the book Practical Rails Projects, written by Eldon Alameda (Apress; ISBN: 1590597818).

Managing Authors in an Online Bookstore (Page 1 of 4)

Viewing an Author

As you might have noticed, creating the Author model also created a file called authors.yml in test/fixtures. This is called a fixture file. Fixtures are mock data that can be used to populate the database with consistent data before each test method. Since the test database is purged before every test method, you know that all the data that exists in the database at that point came from the fixture files. It would be handy to have a few authors in the database for testing our view functionality, so we go ahead and create a couple of author fixtures in authors.yml:

joel_spolsky:
  id: 1
  first_name: Joel
  last_name: Spolsky

jeremy_keith:
  id: 2
  first_name: Jeremy
  last_name: Keith

Putting the line fixtures :authors in the beginning of our functional test class makes Rails load the author fixtures automatically before every test method inside that class:

class Admin::AuthorControllerTest < Test::Unit::TestCase
  fixtures :authors
  ...
end

Now we can rest assured that when we start testing viewing an author, we have two items in our authors table. We’ll keep the show author page very simple. We just want to make sure that we’re fed the right template and that the author is the one we’re expecting. Add the following test case to the bottom of author_controller_test.rb:

def test_show
  get :show, :id => 1
  assert_template 'admin/author/show'
  assert_equal 'Joel', assigns(:author).first_name
  assert_equal 'Spolsky', assigns(:author).last_name
end

Here, we simply request the show page for one of our fixture authors and check that we get the correct template.
Then we use the assigns helper to check that the author instance variable assigned in the action is the one it should be. assigns is a test helper method that can be used to access all the instance variables set in the last requested action. Here, we expect that the show action assigns a variable @author and that the variable responds to the methods first_name and last_name, returning “Joel” and “Spolsky,” respectively. The controller code for the show action is simple. We fetch the author from the database and set the page title to the author’s name.

def show
  @author = Author.find(params[:id])
  @page_title = @author.name
end

Now let’s open the view file, app/views/admin/author/show.rhtml, and add the template code:
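The book's actual template listing is not reproduced in this excerpt; a minimal show.rhtml consistent with the fields tested above (the link target is an assumption) could look like:

```erb
<p>
  <strong>First name:</strong> <%=h @author.first_name %><br />
  <strong>Last name:</strong> <%=h @author.last_name %>
</p>

<%= link_to 'Back to authors', :action => 'list' %>
```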
Your expected return, during a run of 2016 blocks (about 3 days), is:

expected_return = 50 LTC/block * 24 blocks/hour * your_hashrate / network_hashrate

To be clear, this network hashrate is the software's estimate of the network hashrate over the course of the previous 2016-block run. It doesn't actually store that though; it stores "nBits", which is commonly reported as a floating-point difficulty number, such that:

network_hashrate = difficulty * 2^32 / 150s

so you can simplify that first equation down to:

expected_litecoins_per_hour = your_hashrate_in_kH_per_s / difficulty * 0.0419

or, to within 1%,

expected_litecoins_per_day = your_hashrate_in_kH_per_s / difficulty

Again, this difficulty is nailed down at the start of each 2016-block run, and the equation holds for the rest of the run, regardless of changes to your hashrate during the run. So written this way, the trick is predicting future difficulty changes. The most direct way is to keep an eye on the block generation rate and run through the difficulty recalculation algorithm yourself, which is quite simple. At the end of each 2016-block streak, it looks at the time it took to generate all those blocks, and divides 2016 by it to work out the actual block generation rate, then adjusts the old difficulty linearly to compensate for the error. Effectively it picks a difficulty which would have made the previous 2016-block run complete in the right amount of time. For example, take the 153-difficulty sequence which ended this morning:

Last block time: 2013-04-06 01:10:35
Last block of previous run: 2013-04-03 10:39:05
Total time for 2016 blocks: 3751.5 minutes
Average time per block: 1.86 minutes
Drift factor: 2.5 / 1.86 = 1.34
New difficulty: 1.34 * 153 = 205

If you're running this calculation yourself before the end of a run, e.g. using the last 100 blocks as a guide, then make sure the sequence of blocks you use all have the same difficulty value (i.e. they don't cross a 2016-block boundary).
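A quick way to check both calculations above; the 2.5-minute target and the 153 → 205 worked retarget come from this answer, while the hashrate/difficulty inputs at the bottom are made-up examples:

```python
TARGET_MINUTES = 2.5     # Litecoin's target block interval (150 s)
BLOCKS_PER_RUN = 2016

def expected_ltc_per_hour(hashrate_kh, difficulty):
    # 50 LTC/block * 24 blocks/hour * your share of the network hashrate,
    # with network_hashrate = difficulty * 2^32 / 150 s
    network_hashrate = difficulty * 2**32 / 150.0      # in H/s
    return 50 * 24 * (hashrate_kh * 1000) / network_hashrate

def next_difficulty(old_difficulty, run_minutes):
    # Pick the difficulty that would have made the last run take exactly
    # BLOCKS_PER_RUN * TARGET_MINUTES (the "drift factor" adjustment).
    drift = (BLOCKS_PER_RUN * TARGET_MINUTES) / run_minutes
    return old_difficulty * drift

# Worked retarget from above: 2016 blocks in 3751.5 minutes at difficulty 153
print(round(next_difficulty(153, 3751.5), 1))   # ≈ 205.6 (the answer rounds to 205)

# Consistency with the 0.0419 rule of thumb: 1000 kH/s at difficulty 100
print(expected_ltc_per_hour(1000, 100))          # ≈ 0.419 LTC/hour
```

The small discrepancy (205.6 vs 205) comes from rounding the drift factor to 1.34 before multiplying.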
You can use this calculation to get unstable short-term readings or more stable medium-term readings, depending on how many blocks you look at. The best estimate for the next difficulty will come from using all of the blocks since the start of the current 2016-block run. In the end, though, statistics can help you find trends in past data, but guessing the future is up to the economists. Economics is about models, not truth, and there's no one correct model. So this is about as far as you can go with strict equations. The future network hash rate fluctuates according to the profitability of mining; also bear in mind the time it takes a new miner to get equipped, which will cause some lag in the hash rate's response to exchange price fluctuations, for example. There's also a varying degree to which miners pay attention to the true costs of their operations, e.g. power costs, whether it's worth running on obsolete hardware, whether they have the ready cash to upgrade, etc. And of course there's the predicted flood of bitcoin miners who may or may not switch to litecoin if ASICs lead to diminishing returns for them in the bitcoin market, and if litecoin's exchange rate stabilizes high. Then, what happens if the litecoin rate drops - will people stop mining? Or will they leave their rigs on anyway, now that they're set up, hoping for a future rise? There are lots of things you can model, and you really need to think about it before deciding which factors to include.
Learn more about
- Adjoint AD Tool Support for Numerical Patterns in Finance
- Exact First- and Second-Order Greeks by Algorithmic Differentiation
- Tool-Based Approach to Algorithmic Differentiation of Adjoint Methods - Video clip
- Adjoint Algorithmic Differentiation of a GPU Accelerated Application
- Prize winning student paper: AD in optimizing a LIBOR Market Model
- Adjoint Methods in Computational Finance

NAG has a range of AD services and solutions to enable organizations to use this technique. NAG is continuously developing solutions and tools for AD with increasing focus on addressing users' specific needs. Significant contributions have come from Vehicle Engineering and Financial Services, as well as Ocean and Climate Modelling. The NAG Numerical Services team is engaged by clients to advise on, evaluate, specify, write and support custom AD solutions.

What is AD

Algorithmic Differentiation, also sometimes called Automatic Differentiation or Computational Differentiation, is a technique for augmenting numerical simulation programs with the ability to compute first- and higher-order mathematical derivatives. In sharp contrast with classical numerical differentiation by finite differences, AD delivers gradients, Jacobians, and Hessians with machine accuracy by avoiding truncation - it uses analytic differentiation of individual statements within an arbitrarily complex simulation. AD has been applied in particular to optimization, parameter identification, nonlinear equation solving, the numerical integration of differential equations, and combinations of these. AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, derivatives of arbitrary order can be computed automatically, and accurate to working precision.
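That chain-rule mechanism can be illustrated with a tiny forward-mode AD type. This is a generic sketch, not NAG's dco library (which also provides the adjoint mode discussed below):

```python
import math

class Dual:
    """Forward-mode AD: carry a value and its derivative together."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule applied to a single elementary operation
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

# elementary functions propagate derivatives via the chain rule
def d_sin(x): return Dual(math.sin(x.val), math.cos(x.val) * x.dot)
def d_exp(x): return Dual(math.exp(x.val), math.exp(x.val) * x.dot)

def f(x):
    # an arbitrary composite "simulation": f(x) = x * exp(sin(x))
    return x * d_exp(d_sin(x))

x = Dual(1.2, 1.0)   # seed dx/dx = 1
y = f(x)
# y.val is f(1.2); y.dot is f'(1.2), exact to working precision,
# with no finite-difference truncation error
```

Operator-overloading tools like dco apply the same idea to entire C++ or Fortran codes by changing the numeric type, which is what the "Operator overloading" footnote below refers to.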
The adjoint (also reverse) mode of AD is of particular interest in the context of large-scale sensitivity analysis and nonlinear optimization. Calculations that take years can be done in hours - for problems where adjoint AD techniques are available they can dramatically speed up gradient calculations.

What does NAG provide?

NAG provides expert help, support, training and consulting services that deliver substantial insight into this powerful technique in order to achieve robustness and efficiency for your specific AD application. You can rely on The Numerical Algorithms Group to advise, develop, implement and maintain the best AD solution for your problem type, software environment and hardware platform. NAG collaborates with RWTH Aachen to deliver AD solutions to clients. AD projects are usually a combination of services covering training, applying AD tools to clients' own codes (C, C++ or Fortran) and implementing AD versions of numerical functions.

NAG Software Tools

The dco (derivative code by overloading) Library is a run time AD library that uses operator and function overloading techniques*, offered by C++ as well as Fortran, to implement the transformation of a given simulation program into first and higher derivative code. This Library is particularly efficient with a very flexible interface structure.

NAG Fortran Compiler for AD

For simulation programs written in Fortran, a version of the NAG Fortran compiler has been extended to serve as a pre-processor to dco. The seamless integration of AD into a complex build system is facilitated. Hence, the amount of modifications to be made by the user in the original source code can be minimized or even be eliminated entirely.

* Operator overloading: The operator overloading features of a programming language allow mathematical operators ( +,-,/,* ) to be changed so that they compute derivative values along with their usual elementary computation function.
Fig: Illustration from GPU Accelerated Application paper

- The Art of Differentiating Computer Programs. An Introduction to AD.
- Adjoint Parameter Estimation in Computational Finance

Differentiation Enabled Fortran Compiler Technology

The CompAD (Compiler for AD) research project is investigating the integration of AD capabilities into the NAG Fortran Compiler. This collaboration with computer scientists at the University of Hertfordshire in Hatfield and at RWTH Aachen University in Germany is funded by EPSRC.

Investigators and Research Associates at the University of Hertfordshire (UH) and RWTH Aachen University (RWTH):
Professor Bruce Christianson (Principal Investigator, UH)
Professor Uwe Naumann (Co-Investigator, RWTH and UH)
Jan Riehme (Research Associate, UH)
Dmitrij Gendler (Research Associate, UH)
TransUnion Consumer Interactive Hackathon 2017
UI/UX design, research + ideation processes, information architecture, visualization concepts, wireframing, high fidelity mockups
Pen + Paper, Adobe Illustrator

Design and implement engaging experiences/dashboards within the domain of credit and identity monitoring, utilizing the data provided by TransUnion (database with fictitious credit user data). Build a Credit Fitness App that empowers users to keep a healthy credit score by enriching the experience. The goal is to engage and entice users to regularly engage with the product and receive feedback on their choices. The aim is to accomplish essentially the same thing as an app like "Mapmyrun" does: enrich an experience, make tracking progress more engaging, empowering and motivating users to keep a "healthy" credit score with an informative, engaging, and useful application.

UI/UX designer with a team of 1 developer, 2 marketers, and 1 other designer. I led the research and ideation processes, including creating personas, task flows and customer journey maps for specific use cases, visualization concepts, low-fidelity prototypes for needs validation, and the user interface design of the mobile application.

research + customer journey

We first pondered the question of "what challenges are we trying to solve here?" We came up with 4 main goals —
- Presenting credit scores as more approachable, inviting, and trustworthy.
- Active engagement + retention of users in a digital playground.
- Credit Fitness awareness and community referrals/involvement

After, we jumped into creating personas, a way to better understand our target audiences and how each demographic would benefit from TransMission.
- 18 year old, beginner level with no experience with credit scores. His goal is to learn the ins and outs of credit scores and eventually achieve good credit.
- 50 year old mother who has established decent credit but would like to keep checking up on it.
- 28 year old with a low credit score due to some poor decisions in the past who is trying to learn how to improve.

After becoming clear on the task flows and the customer journey map, we were able to pinpoint specific questions that would inform our design strategy and guide our design intentions. What can we do to achieve our goal of user engagement and high retention rates? What are we trying to help the user achieve? We decided it was personal empowerment and the ability to make smarter decisions, which would ultimately lead to a brighter, more secure future.

We took advantage of the team setting and spent an hour jotting down all our ideas, no matter how crazy or convoluted they were. The idea was rapid brainstorming; just laying out all of our ideas on the table. Once we got the ball rolling, one idea led to the next, and the next.

Pulling left on hamburger menu:

View our pitch and live demo here.

CONSTRAINTS + CHALLENGES

I realized we didn't fully understand TransUnion's business goals, and as a result, our app's focus was more on the game aspect than on credit score fitness. Part of the process is learning and adapting to what we uncover along the way. Since we could only explore the problem and develop our solution within a 12 hour frame, our failure to conform to the requirements and provide a valuable application to TransUnion is something we all learned from immensely. Additionally, we were unable to perform any user interviews or usability tests within the constricted time frame. We spent more time ideating and developing the education aspect of our application than designing the visual user interface. Overall, I'm glad about what we were able to take away and learn during this hackathon, as everything is a learning experience.
Run As Accounts and Profiles This version of Operations Manager has reached the end of support. We recommend you to upgrade to Operations Manager 2022. Run As accounts define which credentials will be used for certain actions that are carried out by the Operations Manager agent. These accounts are centrally managed through the Operations console and assigned to different Run As profiles. If a Run As profile isn't assigned to a particular action, it will be carried out under the Default Action account. In a low-privilege environment, the default account may not have the required permissions for a particular action, and a Run As profile can be used to provide this authority. Management packs may install Run As profiles and Run As accounts to support required actions. If this is the case, their documentation should be referenced for any required configuration. Default Run As accounts The following table lists the default Run As accounts that are created by Operations Manager during setup. |Domain\ManagementServerActionAccount||This is the user account under which all rules run by default on management servers.||Domain account specified as the Management Server Action account during setup.| |Local System Action Account||Built-in System account used as an action account.||Windows Local System account| |APM Account||Application Performance Monitoring account used to provide keys for encrypting secure information collected from the application during monitoring.||Encrypted binary account| |Data Warehouse Action Account||Used to authenticate with SQL Server hosting the OperationsManagerDW database.||Domain account specified during setup as the Data Warehouse Write account.| |Data Warehouse Report Deployment Account||Used to authenticate between the management server and SQL Server hosting Operations Manager Reporting Services.||Domain account specified during setup as the Data Reader account.| |Local System Windows Account||Built-in SYSTEM account used by the agent action 
account.||Windows Local System account| |Network Service Windows Account||Built-in Network service account.||Windows NetworkService account| Default Run As profiles The following table lists the Run As profiles that are created by Operations Manager during setup. If the Run As account is left blank for a particular profile, the Default Action account (either the Management Server Action account or the Agent Action account depending on the location of the action) will be used. |Name||Description||Run As account| |Active Directory Based Agent Assignment Account||Account used by Active Directory-based agent assignment module to publish assignment settings to Active Directory.||Local System Windows Account| |Automatic Agent Management Account||This account will be used to automatically diagnose agent failures.||None| |Client Monitoring Action Account||If specified, used by Operations Manager to run all client monitoring modules. If not specified, Operations Manager uses the default action account.||None| |Connected Management Group Account||Account used by Operations Manager management pack to monitor connection health to the connected management groups.||None| |Data Warehouse Account||If specified, this account is used to run all Data Warehouse collection and synchronization rules instead of the default action account. 
If this account isn't overridden by the Data Warehouse SQL Server Authentication account, this account is used by collection and synchronization rules to connect to the Data Warehouse databases using Windows integrated authentication.||None| |Data Warehouse Report Deployment Account||This account is used by Data Warehouse report auto-deployment procedures to execute various report deployment-related operations.||Data Warehouse Report Deployment Account| |Data Warehouse SQL Server Authentication Account||If specified, this sign-in name and password is used by collection and synchronization rules to connect to the Data Warehouse databases using SQL Server authentication.||Data Warehouse SQL Server Authentication Account| |MPUpdate Action Account||This account is used by the MPUpdate notifier.||None| |Notification Account||Windows account used by notification rules. Use this account's e-mail address as the e-mail and instant message 'From' address.||None| |Operational Database Account||This account is used to read and write information to the Operations Manager database.||None| |Privileged Monitoring Account||This profile is used for monitoring, which can only be done with a high level of privilege to a system; for example, monitoring that requires Local System or Local Administrator permissions. This profile defaults to Local System unless specifically overridden for a target system.||None| |Reporting SDK SQL Server Authentication Account||If specified, this sign-in name and password is used by SDK Service to connect to the Data Warehouse databases using SQL Server authentication.||Reporting SDK SQL Server Authentication Account| |Reserved||This profile is reserved and must not be used.||None| |Validate Alert Subscription Account||Account used by the validate alert subscription module that validates that notification subscriptions are in scope. 
This profile needs administrator rights.||Local System Windows Account| |SNMP Monitoring Account||This account is used for SNMP monitoring.||None| |SNMPv3 Monitoring Account||This account is used for SNMPv3 monitoring.||None| |UNIX/Linux Action Account||This account is used for low privilege UNIX and Linux access.||None| |UNIX/Linux Agent Maintenance Account||This account is used for privileged maintenance operations for UNIX and Linux agents. Without this account, agent maintenance operations won't work.||None| |UNIX/Linux Privileged Account||This account is used for accessing protected UNIX and Linux resources and actions that require high privileges. Without this account, some rules, diagnostics, and recoveries won't work.||None| |Windows Cluster Action Account||This profile is used for all discovery and monitoring of Windows Cluster components. This profile defaults to the default action account unless specifically populated by the user.||None| |WS-Management Action Account||This profile is used for WS-Management access.||None|

Understanding distribution and targeting

Both Run As account distribution and Run As account targeting must be correctly configured for the Run As profile to work properly. When you configure a Run As profile, you select the Run As accounts you want to associate with the Run As profile. After you create that association, you can specify the class, group, or object against which the Run As account is to be used for running tasks, rules, monitors, and discoveries. Distribution is an attribute of a Run As account, and you can specify which computers will receive the Run As account credentials. You can choose to distribute the Run As account credentials to every agent-managed computer or only to selected computers. Example of Run As account targeting: Physical computer ABC hosts two instances of Microsoft SQL Server: instance X and instance Y. Each instance uses a different set of credentials for the sa account.
You create a Run As account with the sa credentials for instance X, and you create a different Run As account with the sa credentials for instance Y. When you configure the SQL Server Run As profile, you associate both Run As account credentials—for example, X and Y—with the profile and specify that the Run As account instance X credentials are to be used for SQL Server instance X and that the Run As account Y credentials are to be used for SQL Server instance Y. Then you must also configure each set of Run As account credentials to be distributed to physical computer ABC. Example of Run As account distribution: SQL Server1 and SQL Server2 are two different physical computers. SQL Server1 uses the UserName1 and Password1 set of credentials for the SQL sa account. SQL Server2 uses the UserName2 and Password2 set of credentials for the SQL sa account. The SQL management pack has a single SQL Run As profile that is used for all SQL Servers. You can then define one Run As account for UserName1 set of credentials and another Run As account for UserName2 set of credentials. Both of these Run As accounts can be associated with the one SQL Server Run As profile and can be configured to be distributed to the appropriate computers. That is, UserName1 is distributed to SQL Server1, and UserName2 is distributed to SQL Server2. Account information sent between the management server and the designated computer is encrypted. Run As account security In System Center Operations Manager, Run As account credentials are distributed only to computers that you specify (the more secure option). If Operations Manager automatically distributed the Runs As account according to discovery, a security risk would be introduced into your environment as illustrated in the following example. This is why an automatic distribution option wasn't included in Operations Manager. For example, Operations Manager identifies a computer as hosting SQL Server 2016 based on the presence of a registry key. 
It's possible to create that same registry key on a computer that isn't actually running an instance of SQL Server 2016. If Operations Manager were to automatically distribute the credentials to all agent-managed computers identified as SQL Server 2016 computers, the credentials would be sent to the impostor SQL Server, where they would be available to anyone with administrator rights on that server. When you create a Run As account using Operations Manager, you're prompted to choose whether the Run As account should be treated in a Less secure or More secure fashion. "More secure" means that when you associate the Run As account with a Run As profile, you have to provide the specific computer names to which you want the Run As credentials distributed. By positively identifying the destination computers, you can prevent the spoofing scenario described earlier. If you choose the less secure option, you don't have to provide any specific computers, and the credentials are distributed to all agent-managed computers. The credentials you select for the Run As account must have, at a minimum, Log on Locally rights; otherwise, the module will fail.
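The interplay between targeting and distribution can be modeled as two independent mappings that must both be satisfied before credentials are usable. This is an illustrative sketch only (the class and method names are invented, not Operations Manager's API), using the SQL example from the text:

```python
# Illustrative model, not Operations Manager's API: a Run As profile maps
# monitored objects to Run As accounts (targeting), and each account carries
# the set of computers its credentials are distributed to (distribution).
# A lookup only succeeds when BOTH are configured, mirroring the rule that
# "both distribution and targeting must be correctly configured".

class RunAsAccount:
    def __init__(self, name, distributed_to):
        self.name = name
        # "More secure" distribution: an explicit list of computers.
        self.distributed_to = set(distributed_to)

class RunAsProfile:
    def __init__(self):
        self.targeting = {}  # monitored object -> RunAsAccount

    def credentials_for(self, obj, host):
        account = self.targeting.get(obj)
        if account is None:
            return None  # no targeting configured for this object
        if host not in account.distributed_to:
            return None  # targeted, but credentials never distributed to host
        return account.name

# The SQL example from the text: instances X and Y on physical computer ABC.
x_account = RunAsAccount("sa-for-X", distributed_to=["ABC"])
y_account = RunAsAccount("sa-for-Y", distributed_to=["ABC"])
profile = RunAsProfile()
profile.targeting["SQL instance X"] = x_account
profile.targeting["SQL instance Y"] = y_account
```

The spoofing scenario above corresponds to the second `None` branch: an impostor host never appears in `distributed_to`, so no credentials reach it.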
"Infura is an Ethereum node and API service that is used by most of the Ethereum Dapps ecosystem, including MetaMask, 0x Protocol, CryptoKitties, Truffle, Uport; nearly 13 billion JSON-RPC requests per day on the Ethereum network are channeled through Infura infrastructure, and the service has more than 15,000 registered Ethereum developers" - Founded in 2016

"It was at that last company that I met one of my cofounders who, just as a water cooler topic, asked something that was like, how would you verify a piece of data that somebody gave you if you didn't actually trust this person? I joked around with him and said, are you interviewing somewhere? Is this like an interview question you want me to answer for you? He's like, no, no, it's something that's really interesting going on in the Bitcoin space right now. I was aware of Bitcoin, I was following it but not heavily involved. He was the one that convinced me that I should be paying more attention to the space. I was getting a little bit bored at my current role, and I wanted to be closer to the R&D side of the industry, and it really seemed like a lot of people were fascinated and interested in the technology behind Bitcoin and what was starting to be called blockchain. I wanted to be a part of that and figure out what it could be used for. He was already acquainted with Joe Lubin, and he introduced me to Joe over a call. It was very informal. At the end of the call, Joe was saying like, do you want to join us, work at ConsenSys and find a product or project that you're interested in working on? Shortly after joining ConsenSys, I met a couple of other people who were equally interested in infrastructure and what infrastructure for blockchain could mean, and we started working on Infura almost immediately, initially as an internal project for ConsenSys teams and then opening it up as a full-blown public offering."
Launch and ConsenSys connections

- "Securing funding from ConsenSys, Infura arrived as a 'spoke' (start-up) within the ConsenSys family of companies. (in 2016)"
- "INFURA is a foundational part of the ConsenSys family and the emerging decentralized ecosystem. An important challenge faced by Dapp developers and users is the need for Dapps to interface with Ethereum and IPFS nodes. The mission of INFURA is to provide the world with secure, stable, fault tolerant, and scalable Ethereum and IPFS nodes."
- It then became a 'Core Component' of ConsenSys.
- And finally, it was officially acquired by ConsenSys in 10-2019.
- There is one thing: Infura is operated by a single provider (ConsenSys) and relies on cloud servers hosted by Amazon. As such, concerns exist that the service represents a single point of failure for the entire network.

Audits & Exploits

"Infura, a service which many Ethereum applications use to outsource running their own Ethereum nodes, was running the old, buggy version of Geth. This caused applications using Infura to break. Infura is back up at the time of writing, and so was DeFi."

- Whitepaper or docs can be found [insert here].
- Code can be viewed [insert here].
- Built on: Ethereum. "The most well-known portion of the Infura infrastructure is the network of hosted Ethereum clients that spans four Ethereum networks: Mainnet, Ropsten, Rinkeby, and Kovan (by Parity). 'These are load-balanced groups of nodes, that we can scale to meet demand fairly easily, and that we keep up-to-date and secure,' says Cocchiaro. 'We have TLS-enabled APIs including JSON-RPC, REST and websocket endpoints as ways to access our node network as if it was your local node. Infura also has additional features built on top of these endpoints for reliability and added value, like the feature we call Transaction Assurance.'"
- Uses technology built by Parity.
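The hosted JSON-RPC endpoints mentioned above are plain HTTPS. As a rough sketch of the request a client would send, here is a minimal JSON-RPC 2.0 payload builder in Python; the project ID in the URL is a placeholder, and the actual network call is left commented out:

```python
import json

# Endpoint shape for Infura's hosted mainnet nodes (the project ID is a
# placeholder; a real one comes from registering as a developer).
INFURA_URL = "https://mainnet.infura.io/v3/YOUR_PROJECT_ID"

def make_rpc_payload(method, params=None, request_id=1):
    """Build a JSON-RPC 2.0 request body, e.g. for eth_blockNumber."""
    return {
        "jsonrpc": "2.0",
        "method": method,
        "params": params or [],
        "id": request_id,
    }

payload = make_rpc_payload("eth_blockNumber")
body = json.dumps(payload)
# A real call would look something like:
#   requests.post(INFURA_URL, data=body,
#                 headers={"Content-Type": "application/json"})
```

This request shape is what accounts for the billions of daily JSON-RPC calls quoted above: wallets like MetaMask send exactly these bodies instead of talking to a locally run node.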
- Programming language used:

How it works

- Ethereum nodes are only one part of the Infura stack: "We also host IPFS nodes and a public IPFS gateway. We're in the process of building additional decentralized storage products based on both IPFS and Swarm, that we will detail in the near future."

"According to Twitter user @0xdev0, on Monday, Web3 development platform Alchemy and Infura.io blocked remote procedure call (RPC) requests to cryptocurrency mixer Tornado Cash, preventing users from accessing the applications."

- Is planning to launch a decentralized protocol in 2023. The network's working title is the "Decentralized Infrastructure Network."
- Says they run 'dozens of nodes' (30-11-2020).
- "We now have more than 15,000 registered developers, we're serving over 6 billion API requests per day and transferring roughly 1.6 petabytes of data per month"
- After the bug of 11-11-2020 it was clear that Binance and Uniswap both also rely on Infura. Uniswap, however, was still able to process swaps thanks to its decentralized contract.

Projects that use or build on it

- From their website (13-4-2020):
- From their FAQ (13-4-2020): "Which clients does Infura use?
- Geth: Go Ethereum is one of the three original implementations (along with C++ and Python) of the Ethereum protocol. It is written in Go, fully open source and licensed under the GNU LGPL v3.
- Used by Metamask, CryptoKitties, uPort, Cipher Browser, Radar Relay, Status and UJO for scalable blockchain solutions.

Pros and Cons

Infura+ and Centralisation

- Premium Ethereum API subscription service. From the Token Economy newsletter: "Infura is finally monetizing one of the most centralized points of failure in DeFi land. I guess that's good news, meaning that it reduces the risk that they go away, but it would ideally be better to be able to do away with such a centralized model in the first place."
- From a Delphi Digital report on Ethereum (3-2019): "Infura, a ConsenSys spoke, is both an important tool and centralization concern for the Ethereum network. It is Infrastructure-as-a-Service (IaaS) and allows decentralized applications (DApps) to process information on Ethereum without the developers needing to run a full node. It processes more than 10 billion requests per day and serves over 50k developers/DApps. Infura provides an easy way for developers to build on Ethereum without the need to maintain the necessary infrastructure themselves. However, the concern is that Infura is owned and operated by a single company, ConsenSys, while being hosted on AWS. Since many popular Ethereum services/DApps rely on Infura (e.g. MetaMask), it creates a single point of failure for the network. Infura services a disproportionate amount of the network's traffic and accounts for 5%-10% of all nodes. Michael Wuehler, Infura Co-Founder, recently said in an interview: 'If every single DApp in the world is pointed to Infura, and we decided to turn that off, then we could, and the DApps would stop working. That's the concern and that's a valid concern.'"
Main features of Notebook Hardware Control

In the status section you can see the actual status of the notebook's hardware. You can also start, stop and adjust the NHC monitoring function in the status section. The NHC status section will show you the actual:
- cpu clock, cpu load, cpu speed and cpu voltage
- battery charge rate and life time
- cpu, case and hard disk temperature
- graphic card's core and memory clock or system memory information
- System power-on time

The NHC monitoring function allows you to plot the current hardware status. You can select to draw transparent or with background; on the desktop or on top. You can also save the data to the hard disk. You can enable monitoring of the cpu clock, cpu load, cpu voltage, battery charge, battery charge rate, cpu temperature and hard disk temperature.

CPU Speed Control (CPU policy)

This section allows you to change the speed of your processor and change the Windows Power Scheme. The following cpu speed settings are supported:
Keeps the CPU at maximum speed. (e.g. an Intel Pentium Dothan 1600MHz runs at 1600MHz continuously)
Keeps the CPU at the minimum speed. (e.g. an Intel Pentium Dothan 1600MHz runs at 600MHz continuously)
Keeps the CPU at the minimum speed and allows further throttling depending on remaining battery power.
Switches between the minimum and maximum speed according to current CPU utilization. (e.g. an Intel Pentium Dothan 1600MHz switches automatically between 600MHz and 1600MHz)

Custom dynamic switching allows you to easily change the default dynamic switching behavior of your CPU. The ability to change the minimum and maximum multiplier and the minimum and maximum CPU load values gives you full control over the CPU dynamic switching steps. Custom dynamic switching also doesn't have the short voltage drops that default dynamic switching shows when CPU Voltage Control is enabled. To show the custom dynamic switching steps, click on the icons in the cpu speed section.
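The dynamic-switching policy described above boils down to a simple rule: step the multiplier up when CPU load crosses the configured maximum, and step it down when load drops below the configured minimum. Here is an illustrative sketch (the thresholds and multiplier range are invented defaults, not NHC's actual values; on a Pentium M "Dothan" 1600MHz, multipliers 6-16 on a 100MHz base clock correspond to the 600-1600MHz range mentioned in the text):

```python
def next_multiplier(load, current, min_mult=6, max_mult=16,
                    min_load=30, max_load=80):
    """One step of a custom dynamic-switching policy (illustrative only).

    load    -- current CPU load in percent
    current -- current multiplier (e.g. 6 -> 600 MHz on a 100 MHz base clock)
    Returns the multiplier to use next, clamped to [min_mult, max_mult].
    """
    if load > max_load and current < max_mult:
        return current + 1   # busy: step the clock up one notch
    if load < min_load and current > min_mult:
        return current - 1   # idle: step the clock down one notch
    return current           # inside the comfort band: hold steady
```

Adjusting `min_load`/`max_load` and the multiplier bounds is exactly the kind of control the custom dynamic switching section exposes.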
With CPU Voltage Control you can change the default CPU voltages to reduce heat dissipation and power consumption and prolong the battery lifetime. When you set a new voltage, Notebook Hardware Control will run a short CPU stability check to test whether the CPU is stable at this voltage. If the CPU voltage is too low, the system can stop responding and you will have to restart the computer. Even if the CPU passes the short stability check, there is no guarantee that the CPU is stable over a longer period of time. To be safe, it is recommended to use the full NHC CPU stability check or programs like Hot CPU Tester Pro.

With the ACPI Control System you can visualize and control hardware components like LCD brightness, WLAN, notebook temperature or the fan of the notebook. The ACPI Control System uses open source classes, so it can be adapted easily by each user to every notebook with ACPI. If the ACPI Thermal Zone is available on your notebook, NHC will show you all available temperature information like CPU temperature. To avoid overheating you can set the CPU temperature at which the notebook should reduce the CPU clock or shut down the computer.

ACPI control system details

The ACPI Control System option window will show you the actual configuration of the ACPI Control System. The ACPI Control System is controlled by open source classes written in the programming language C#. For each manufacturer and notebook model it is possible to define a new class and adapt it to the notebook hardware easily. You will find more information on how to program the ACPI Control System in the section: Programming the ACPI Control System. If you have some programming skills it is not difficult to write an ACPI Control System class for your system. If you have created a working class for your system, please publish this new class in the
Every user with the same system as yours can then use this new class and will be happy :-). I will also add the source file in the next NHC release.
For example, on the Samsung P40 notebook it is possible to control a lot of hardware components with the ACPI Control System and the open source class "SP40S", such as:
- Change the LCD brightness level (Graphic section)
- Switch on and off the wireless LAN
- Control the fan speed and the turn on/off temperatures

ACPI Thermal Zone information

Advanced Battery settings

In the Advanced Battery settings section you can customize the appearance of the battery icons in the taskbar and system tray. The option Hide the 'Default System Battery Icon' on battery operation will remove the normal Windows battery icon. The option Use alternative function to detect the actual 'System Power Status' is a workaround. Use this option if you have problems with the keyboard or touchpad when using NHC, for example when you type and the keyboard misses some keystrokes. Example of the battery icons in the taskbar and system tray:

LCD Brightness Control

NHC allows you to set different LCD panel brightness levels on AC line operation and battery operation. On battery operation you can define three different brightness levels: the first brightness level from 0 to 33%, the second brightness level from 34 to 66% and the third brightness level from 67 to 100%.

Hard Drive SMART Information

NHC utilizes S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology) technology to monitor your hard drive's internal S.M.A.R.T. attributes and drive temperatures, to predict possible drive failure and prevent data loss. NHC lets you know about a potential disk health problem before you lose valuable data. The Power-On Time Count will show you how long your hard drive has already been switched on. If the reported time is wrong or not plausible, you can correct it with the Correction value and Power-On Time reference.

With the NHC profile editor you can add, move, rename and remove NHC profiles in an easy way. You can set different icons for each profile to differentiate the profiles easily.
If you create a new profile, NHC will copy the settings of the active NHC profile to the new profile. You can open the NHC profile editor by clicking on the edit button in the NHC settings section. The profile editor is only available in the Professional Edition.
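The three battery-operation brightness brackets described earlier (0-33%, 34-66%, 67-100%) map cleanly onto a small lookup function. A sketch, where the returned level numbers are just labels for the three configurable levels:

```python
def battery_brightness_level(charge_percent):
    """Map battery charge (0-100 %) to NHC's three brightness brackets."""
    if not 0 <= charge_percent <= 100:
        raise ValueError("charge must be between 0 and 100")
    if charge_percent <= 33:
        return 1   # first brightness level:  0-33 %
    if charge_percent <= 66:
        return 2   # second brightness level: 34-66 %
    return 3       # third brightness level:  67-100 %
```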
I’ve managed to get the tabbar working in Firefox and every other browser, but in IE8 the tabbar container is being pushed to the right with no tabs. I’ve attached a screenshot of how it looks, and the following are the div statements I’m using to create the tabbar and first tab. Could you give me any ideas of why it is doing this in IE8, please? tabbar in ie.zip (50.6 KB) we need a demo to recreate the problem locally. What do you mean by a demo, because I thought some screenshots would show the problem? There are two things that seem to be happening: Although the tabbar area seems to be rendered completely, it is being offset to the right so that it starts exactly halfway across its parent element (a td), hence its right half disappears. No tabs are being rendered. I hope that helps. If you could let me know what you would want as a demo, I’ll try to put it together for you. By a demo we mean a sample (an html page and all the files that are necessary for this sample). Please check the sample in the tabbar package: I’ve managed to identify the problem with IE. It seems to be a problem with the css text-align attribute. Please see the attached file for an example of the problem. If you display the file as is (you’ll have to adjust the paths to reflect where your tabbar css and js files are located), the tabbar container has no tabs and is positioned with its left edge at the center point of the page. If you delete the text-align attribute from the center-prob style, the tabbar is displayed perfectly. This page displays perfectly in Firefox etc. no matter whether the center attribute is there or not, so it seems to be an IE-only quirk. It also only seems to apply to elements (e.g. divs or tables) that surround the tabbar div and that are centered; e.g. if you delete or re-instate the text-align attribute of the first div, the inside table or the tabbar itself, it doesn’t affect the centering of the tabbar container.
I need to be able to center things at different points in my page, including an element that surrounds a tabbar (which may have elements other than the tabbar in it), and have it be consistent in all browsers, so could you suggest how I can do this, please? centering-problem.zip (584 Bytes) try to set text-align:left for the tabbar container to solve the problem Well, that seems to have worked. Thank you. However, I had to do a bit of tweaking as well to get to the point where the tabbar wasn’t clashing with other parts of my page. What I ended up finding was that I can’t surround my tabbars with another div, because that div seems to adopt the alignment of the tabbar, i.e. being left-justified as well, instead of being centered as per its css style definition. However, if I replace the surrounding div with a table, then it seems to work fine and I can use any kind of alignment in my stylesheet without it being affected by the tabbar div styling. Not sure why this would be the case, but I thought it might help someone else with a similar problem. Alexandra, could you tell me how the tabbar functionality detects which divs to convert into tabs? Does it convert every div it finds inside the tabbar div into a tab, or can I tell it which divs to convert, so I can include some divs of my own inside a tabbar without them being affected by the tab conversion process?
how the tabbar functionality detects what divs to convert into tabs?
Yes, each child node will be converted to a tab
I can include some divs of my own inside a tabbar without them being affected by the tab conversion process?
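For reference, the workaround arrived at in this thread can be summarized in CSS. This is a sketch only (the selector names are invented for illustration): instead of centering the tabbar's ancestors with `text-align:center`, which confuses IE8's rendering of the tabs, center the wrapper with auto margins and keep the tabbar's own text flow left-aligned.

```css
/* Centered wrapper: avoid text-align:center around the tabbar in IE8.
   Center the wrapper itself with auto margins instead. */
.tabbar-wrapper {
    width: 600px;        /* any fixed or percentage width works */
    margin: 0 auto;      /* centers the wrapper without text-align */
}

/* Keep the tabbar's own text flow left-aligned, as suggested above. */
.tabbar-wrapper .my-tabbar {
    text-align: left;
}
```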
using System.Linq;
using System.Text.RegularExpressions;

namespace _01_BookStatistics
{
    public static class WordMatcher
    {
        public static string[] ExtractWords(string text)
        {
            // This regex attempts to match any words and wordlike combinations.
            // Here is a detailed breakdown of what each group does and why:
            //
            // [+-]?\d+[\p{L}’-]*|[\p{L}\d’-]+|[„“][\p{L}\d’-]+[„“]-[\p{L}’-]+
            //
            // [+-]?\d+[\p{L}’-]*
            // This group matches numbers and numberlike words.
            //
            // [+-]?      : Optional sign in front of the number
            // \d+        : The digits constituting the number
            // [\p{L}’-]* : Any letters following the number (e.g. 42nd, 42-nd)
            //
            //
            // [\p{L}\d’-]+
            // This group matches all common words.
            //
            // A word can be any combination of:
            // \p{L} : Letter characters, including all languages and cultures.
            //         Source https://stackoverflow.com/a/48902765/6317168
            // \d    : Digits (e.g. C4, H2O)
            // ’     : Apostrophe for words which include it (e.g. t’aime in French)
            // -     : Hyphen for complex words (e.g. au-revoir)
            //
            //
            // [„“][\p{L}\d’-]+[„“]-[\p{L}’-]+
            // This group matches any combination of words in inverted commas, directly
            // preceding an article. This construct may only apply to Bulgarian phrases,
            // for example "„о“-то".
            //
            // [„“]         : Open inverted commas. Allow both opening and closing
            //                inverted commas to be more flexible.
            // [\p{L}\d’-]+ : The word part between the inverted commas. See previous group.
            // [„“]         : Close inverted commas.
            // -            : Hyphen for the article
            // [\p{L}’-]+   : The article.
            //
            string pattern = @"[+-]?\d+[\p{L}’-]*|[\p{L}\d’-]+|[„“][\p{L}\d’-]+[„“]-[\p{L}’-]+";
            var wordRegex = new Regex(pattern);

            var words = wordRegex.Matches(text)
                .Select(m => m.Value)
                .ToArray();

            return words;
        }
    }
}
Get started with Bitcoin: find a wallet, buy bitcoin, shop with bitcoin, read bitcoin news, and get involved on the forum. The public address and private key will be printed; first do a test to make sure you are able to decrypt the printed wallet.

Follow these five easy steps to learn exactly what to do when getting started with Bitcoin. Please only trade small amounts of money till you trust your trading partner. Finally, wire or EFT your money from the Bitcoin exchange to your bank. The Bitcoin Foundation contracted with BitcoinPaperWallet to design a limited.

Getting started with Bitcoin - WeUseCoins
The Mastercoin protocol layer gains community support to create a new. Bitcoin Vanity address: a custom bitcoin address that you can use to identify yourself. These work similarly to barcodes at the grocery store, and can be scanned with a smartphone to reveal your bitcoin address. Many big companies, including Amazon and Sears, offer gift cards via Gyft, an online marketplace that supports Bitcoin. Entrepreneurs in the cryptocurrency movement may be wise to.

How to Pay with Bitcoin | BitPay Documentation
This should compile vanitygen and allow you to create custom bitcoin addresses.

How do I create a bitcoin payment request? - Stack Overflow
Create your free digital asset wallet today at Blockchain.info. You may need to wait 10-20 minutes for a confirmation, but if you did everything correctly you should now see the small amount of bitcoin you sent in your personal wallet. Much like the Internet, bitcoin is pseudonymous and somewhat trackable.

Bitcoin Mining Guide - Getting started with Bitcoin mining.
Bitcoins are sent to your Bitcoin wallet by using a unique address that only belongs to you. Bitcoin.org is a community funded project; donations are appreciated and used to improve the website.

Bitcoinker - The Best Bitcoin Faucet, Claim Every 5 Minutes!
A simple python program to create bitcoin burn addresses. burn-btc: create a bitcoin burn address, by James C. While paper wallets are highly hacker resistant, they are cumbersome when it comes time to spend your bitcoins. A wallet in the realm of bitcoins is equivalent to a bank account.

How to Import Your Bitcoin Private Key | Vircurvault
How to Use Bitcoin. for example) on a custom computer that turns math equations into bitcoin. Create a public Bitcoin address. Creating a Bitcoin address to receive Bitcoin payments is done in a single click. No doubt the Bitcoin protocol is pretty simple and smart, but the Bitcoin address.

Create your own Faucet Rotator | Bitcoin Barrel
In this article I will show you how to easily create and start using your first Bitcoin wallet. A public bitcoin address will be a long string of seemingly random letters and digits like 16BPS8xb5k36MeNLWmfZ1zpjCqbDhgyaHg.

Is it possible to make fake bitcoins? - Quora
Bitcoin is the first digital currency to eliminate the middleman. Creating an account at an exchange is a similar process to opening a new bank account: you will likely need to give them your real name, contact information and send them money.

NBitcoin: The most complete Bitcoin port (Part 1: Crypto)
Watch our guide below to learn how to get started with bitcoin payments to BitPay. Five Ways to Lose Money with Bitcoin Change Addresses. A Bitcoin address can be thought of as the digital. I am in a project that needs auto Bitcoin payout among wallets using a custom.
Think of a public address like an email address, in that you can share it with anyone you want to send you email or, in this case, Bitcoin. Different countries and currencies have different Bitcoin exchanges that are best to use in each geography. Many cities around the world offer a bitcoin ATM where you can trade cash for bitcoin. Carefully copy the address and exact bitcoin amount from the invoice to. An easy way to increase privacy is to create a new Bitcoin address each time you conduct a. Any address you create here will remain associated with your Coinbase account forever.
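As a rough illustration of where those "seemingly random letters and digits" come from, a legacy (P2PKH) address is a Base58Check encoding of a version byte plus a 20-byte key hash. This is an educational sketch only, not wallet-grade code; feeding it an all-zero key hash produces a "burn" address of the kind mentioned above, for which nobody plausibly holds the private key:

```python
import hashlib

B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58check(version, payload):
    """Base58Check-encode version byte + payload (legacy P2PKH address form)."""
    data = bytes([version]) + payload
    # Checksum: first 4 bytes of double SHA-256 over version + payload.
    checksum = hashlib.sha256(hashlib.sha256(data).digest()).digest()[:4]
    full = data + checksum
    # Convert the whole thing to a big integer, then to base-58 digits.
    n = int.from_bytes(full, "big")
    digits = ""
    while n > 0:
        n, rem = divmod(n, 58)
        digits = B58_ALPHABET[rem] + digits
    # Each leading zero byte is encoded as a literal '1'.
    pad = len(full) - len(full.lstrip(b"\x00"))
    return "1" * pad + digits

# A "burn" address: the hash160 is all zero bytes (version 0x00 = mainnet P2PKH).
burn = base58check(0x00, b"\x00" * 20)
```

The long run of leading `1` characters is the telltale sign of a burn address: each `1` encodes a zero byte in the underlying payload.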
Planning in a Rut?

Organizations have invested in ERP solutions that are good at tracking activity, and in supporting Business Intelligence (BI) solutions that can tell "what happened". The need to plan in a more agile manner has increased due to the exponentially faster pace of business today, and has many asking, "Is there a more modern way to do planning?"

Modern Planning in your BI solution: Microsoft Power BI with Visual Planning.

Over the last few years many leading companies have started to actually plan in their enterprise BI platforms. Yes, you read that right: plan, inside their BI platform. Power ON has seen a new trend where many Fortune 500 companies not only moved to Power BI as their standard Business Intelligence platform, but then added Planning and Forecasting processes directly inside of Power BI, aligning all data collection, planning & reporting on one platform. Some of these converts have saved hundreds of hours and millions of dollars compared with their previous planning process.

Who Benefits from Planning in BI?

What types of companies have benefited from moving to Planning in Power BI? Here are just a few we know of personally.
- 4 Largest Oil & Gas Companies
- Largest Semiconductor Chip Maker
- Largest Computer Hardware Manufacturer
- Largest Beverage Company
- Space Launch Company with eyes on Mars

Why Plan in Power BI?

Here are the Top 5 reasons we've found that organizations chose to Plan in Power BI:
- Unified Platform for Visual Analytics and Planning
- Holistic Business Rules and Security Roles
- Collaboration and simplicity of one User Interface (UI)
- Benefit from Microsoft's Investment and Innovation in Power BI and Azure
- Secure Hosting in Private Cloud Tenant

Learn from their takeaways and motivations

Below we will dig deeper into the rationale Fortune 500 companies put into moving Planning and Forecasting processes into Power BI.
Whether you are a small firm, medium-sized, or closing in on the big guys, we are providing these tips to help your organization succeed with Planning in Power BI. You will learn from years of experience, best practices, and these large companies' thoughts on Power BI.
1. Unified Platform for Visual Analytics and Planning
2. Holistic Business Rules and Security Roles
3. Collaboration, and Simplicity of One User Interface (UI)
4. Benefit From Microsoft's Investment and Innovation in Power BI and Azure
5. Secure Hosting in Their Own Environment

We at Power ON are enjoying all we do, and thus provide a 100% Money Back Guarantee if any of our customers don't like our software. To further encourage you and your company to try us out, we are sharing some additional features and facts from our implementations.

Customers are looking for enhanced Commentary Functionality

Our now-largest Enterprise Deployment, at a medical device company, started its journey with Power ON for Text Commenting. Three years ago, an employee of said company visited our booth at a Microsoft conference. She was introduced to text commenting on data points with Power ON Visual Planning, and she burst out: "I need that. NOW!" It was implemented a week later, and now, in 2021, they have thousands of users planning billions of dollars across Sales, Inventory, Services, Employees, Supply Chain, Bonuses, and Project Management. You should try it.

Power ON supports writeback to 200 different data sources. If your corporation has implemented Snowflake, HANA, Oracle, Databricks, Dynamics, or Salesforce, we can write to those as well. We enable write-back to 200+ different databases directly from Microsoft Power BI. We are looking forward to learning about your Power BI story and potentially extending the reach of Power BI in your organization.
Hello everyone. I hope everyone is doing okay. So I have a friend who has been wanting to start his own small company. I decided to help him bring up a NethServer to run his website / mail and some other items that are made possible by NethServer. We have decided to stick with NethServer 7.9 for the time being, as we want certain things in NS8 to mature before we try to use it for what we are using it for. That said, we have been challenged with getting WordPress to use some plugins, because he is only running PHP 7.3. It tells us that we need to upgrade PHP to 7.4 or above. Really, all we want is to upgrade PHP to at least 7.4, or to 8.0 / 8.1 / 8.2 / 8.3 if possible. This will allow us to move forward with him building his website to his spec. I have tried to look through the forums to find out how to upgrade PHP 7.3 to 7.4 and above. I have found many articles about challenges other people have had, but nothing about just installing PHP and activating the new version of PHP for WordPress to use. I found the following - PHP by software collections (it says: "This module won't be updated; the official way now is cockpit with the virtualhosts panel"). So when I finalized this and got WordPress running, I then noticed I needed to update PHP to at least 7.4 to allow newer plugins to stay secure and to have the plugins work. When trying to do so, I found out that the official way to update it now is in Cockpit through the virtualhosts panel. When using the steps in the NethServer wiki for WordPress (blog), it doesn't seem to be set up in a vhost. Is there a way to transfer the existing WordPress install to a newly created vhost, or to adjust a newly created vhost to point to WordPress, so I can use/change the PHP version in the vhost? Or do I have to start from scratch and make a vhost first and then load WordPress into it somehow? Is there a post or a wiki that goes into more detail on how to do this that you can share with me?
I did some searching for the current location of the WordPress install, but I am uncertain where WordPress is installed at the moment. I looked under /var/www/html/ and /var/lib/nethserver/vhost/, but the WordPress install doesn't seem to be there. Oops. Sorry @stephdl. I didn't realize you were on vacation. My bad. Get back to relaxing and family. Don't worry about this. I do appreciate you chiming in. I will look into what you relayed to me. Thanks again. Understood. I still appreciate all you do. Glad you got a breather from all the work. As stated somewhere above, it's always good to get away and have some time to reflect, relax and spend time with family. Also glad to have you back.
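For anyone hunting for an existing WordPress install the same way: every install keeps a `wp-config.php` at its root, so a recursive find turns up the install directories. A small sketch (the helper name is invented; the two search roots are the typical NethServer 7 locations already mentioned in this thread):

```shell
# Hypothetical helper: list WordPress install directories under a root
# directory by finding their wp-config.php files.
find_wp_installs() {
  find "$1" -name wp-config.php 2>/dev/null | while read -r f; do
    dirname "$f"
  done
}

# On NethServer 7 the usual places to look would be:
#   find_wp_installs /var/www
#   find_wp_installs /var/lib/nethserver/vhost
```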
Add WalletState -> SyncingMempool?

After: https://github.com/nopara73/HBitcoin/issues/6
Related: https://github.com/nopara73/HBitcoin/issues/11

Right now the Wallet has 3 states: doing nothing, syncing and synced. Probably it'd be a better idea to have 4: doing nothing, syncing blocks, syncing mempool, synced.

If you look at the pooler, blocks and trx are syncing all the time, so the wallet doesn't need to be in a state; the tracker just needs to handle new blocks and new trx (trx concurrently) and manage a reorg when a new block is pushed which is not referencing the previous one.

GitHub repository search doesn't find a match for the word "pooler" in either the StratisBitcoinFullNode or the Breeze repository.

I'll answer the other question here, since they are related. At that issue the question was what my mempool, more specifically MemPoolJob, does: until blocks are synced, mempool-level transactions should not be coming in; they should not use bandwidth at that point. This is what the "StallMempool" issue (https://github.com/nopara73/HBitcoin/issues/11) was created for. I implement this logic, but I lose 1-2 seconds between when the last block is downloaded and when the mempool starts syncing.

So what does it do?
- Until blocks are fully synced: nothing.
- Keeping an up-to-date list of all transaction hashes the nodes I am connected to have in their mempool.
- Raising an event when a new transaction arrives (not only its hash).

I don't see how you can avoid having a "syncing mempool" state, since you have to initially download/go through all the transactions from the mempool, and that's not small data.
In fact there are 3 mempool-specific states I defined in the ApiSpecification:
- mempoolStalling
- syncingMempool
- synced

The mempoolStalling state is unrelated to issue https://github.com/nopara73/HBitcoin/issues/11 ; in that issue the mempool is stalling because headers are changing, whereas the mempoolStalling state in the ApiSpecification is API-user-indicated mempool stopping, via GET /wallet/mempool/?allow=[true/false] - Allows or disallows mempool syncing. In fact it might be better to call it mempoolStopped or something like that; anyway, this is not implemented yet, but not a big deal. The other 2 states are self-evident.

Ah oops, a typo: I meant the block poller. I don't think you need to sync mempools from peers; this is heavy and unnecessary. If anything, we only need to do that once on start. Once we are connected to nodes we will start to listen for broadcast transactions from peers (this is done using a NodeBehavior and a TransactionSignaled object); when a trx is received we immediately check if it's "ours", then keep it and notify the GUI of a pending trx. If it's not ours we just discard it (or keep the hash so we don't check again). Then when a block is received the trx is marked as confirmed.

I think, actually, that as opposed to a full node we need to listen for trx even if the node is not synced, as you might be syncing and also receive a trx to your wallet. It's very likely you open the wallet expecting a trx but the node is still catching up.

What? "I don't think you need to sync mempools from peers, this is heavy and unnecessary." && "I think actually, as opposed to a full node, we need to listen for trx even if the node is not synced... It's very likely you open the wallet expecting a trx but the node is still catching up." This is a contradiction: if you don't sync the full mempool, you'll not see any transaction that was propagated before the user opened the wallet and is not yet confirmed in a block. When?
if anything then we only need to do that once on start.... Yes, this would resolve that. The question is when? (1) Before the blocks are synced. (2) After the blocks are synced. You proposed (1), before the blocks are synced, which can lead to unexpected behaviours and slow down the block syncing too; as you put it, "this is heavy". Basically I aim to follow an order which reflects the states and with that make sure things don't go wrong: not started -> header download -> block download -> mempool stopped -> mempool syncing -> mempool synced. You can always hack and break things later, so I'd propose to keep the order of things and go with (2), after the blocks are synced.

Missing

There is one thing that's missing from your proposed scheme and I don't see how to resolve it; you might. I want an up-to-date list of mempool transactions. Right now I only store the hashes for performance, but I could easily store all transactions if I wanted to. I will most likely need this later; however, today I need it in only one place. The Tracker is tracking mempool transactions too. If a tx is malleated or never confirms (falls out of the mempool), then it should not be tracked anymore. Right now this is how I solve this:

foreach (var tx in trackedMemPoolTransactions)
{
    // If we are tracking a tx that is malleated or fell out of the mempool
    // (took too long to confirm) then stop tracking it
    if (!MemPoolJob.Transactions.Contains(tx.GetHash()))
    {
        Tracker.TrackedTransactions.TryRemove(tx);
        Debug.WriteLine($"Transaction fell out of MemPool: {tx.GetHash()}");
    }
}

As you see, I use the up-to-date transaction hashes of the mempools of the connected nodes. One sloppy solution would be to do this check only at the initial mempool syncing; however, (1) it is sloppy: what if, for example, someone runs our software forever? It would never detect the falling-out transactions described above.
(2) it would not have an updated list of transactions in the mempool, therefore limiting my future possibilities, possibly TumbleBit integration.

Your current approach of keeping a mirrored mempool of peers and continuously syncing with peers to stay up to date is not what I had in mind. If we ask peers to relay trx, then we don't need to ask for the mempool (only once at start); after that we will keep getting notifications when new trx appear on the network. When a transaction "falls out", it's up to each node to decide whether it wants to keep that trx in memory anymore. To be honest, once a trx not from the node's wallet is relayed, and the node is not a mining node, I don't see why it needs to keep transactions (maybe for compact blocks or inv requests). https://github.com/nopara73/HBitcoin/commit/987dfc6dc753708ee8e249bb4d44f039713d7fa6
Shiying Xiong is a Postdoctoral Researcher working with Prof. Bo Zhu in the Visual Computing Lab at Dartmouth College. He received a Ph.D. in Fluid Mechanics at Peking University in 2019, advised by Prof. Yue Yang. Before starting his Ph.D., he obtained a Bachelor's degree in Physics at Jilin University in 2014. His research interests include Computational Physics, Vortex Dynamics, and Scientific Machine Learning. He has published papers at SIGGRAPH, ICLR, NeurIPS, JFM, JCP, PoF, etc. More information is available at: https://shiyingxiong.github.io/ Talk title: A Clebsch Method for Free-Surface Vortical Flow Simulation Abstract: We propose a novel Clebsch method to simulate free-surface vortical flows. At the center of our approach lies a level-set method enhanced by a wave-function correction scheme and a wave-function extrapolation algorithm to tackle the Clebsch method's numerical instabilities near a dynamic interface. By combining the Clebsch wave function's expressiveness in representing vortical structures and the level-set function's ability to track interfacial dynamics, we can model complex vortex-interface interaction problems that exhibit rich free-surface flow details on a Cartesian grid. We showcase the efficacy of our approach by simulating a wide range of new free-surface flow phenomena that were impractical for previous methods, including horseshoe vortices, sink vortices, bubble rings, and free-surface wake vortices. University of Utah Dr. Yin Yang is currently an Associate Professor with the School of Computing at the University of Utah. Before joining Utah, he was a faculty member at the University of New Mexico and Clemson University. He received his Ph.D. in Computer Science from The University of Texas at Dallas in 2013 (as the awardee of the David Daniel Fellowship Prize). He was a Research/Teaching Assistant at UT Dallas as well as UT Southwestern Medical Center.
His research mainly focuses on real-time physics-based computer graphics, animation, and simulation, with a strong emphasis on interdisciplinarity. He was a Research Intern at Microsoft Research Asia in 2012. He received the NSF CRII (2015) and CAREER (2019) awards. Dr. Yang has published over 70 conference/journal articles in the areas of computer graphics, animation, machine learning, computer-aided design, and medical imaging. He serves as a TPC member for many international conferences and as a reviewer for almost all the top journals/conferences in computer graphics and animation. Talk title: Accelerate Penetration-free Simulation Abstract: Incremental potential contact (IPC) is an emerging numerical solution to handle collisions and contacts between virtual objects. While demonstrating excellent robustness and accuracy, the computational overhead of IPC remains significant. In this talk, I would like to share with you a few of our recent projects aiming to accelerate IPC-involved simulations. With a dedicated reduced-order model, IPC can be sped up by orders of magnitude. Stiff-object simulation can also greatly benefit from reduced IPC simulation. As IPC relaxes complementary slackness, penetration-free variational optimization becomes compatible with other optimization and simulation algorithms. We demonstrate one such possibility using projective dynamics. He is now the chief Professor of the State Key Laboratory of CAD&CG, College of Computer Science and Technology, Zhejiang University, and a Changjiang Scholar of the Ministry of Education. He was a postdoctoral fellow at Ritsumeikan University, a researcher in the Network Graphics Group of Microsoft Research Asia, and a Distinguished Professor of Qianjiang Scholars of Zhejiang Province at Hangzhou Normal University. His research interests lie in computer graphics, including 3D reconstruction, deep learning, physical simulation, and 3D printing.
He has published more than 80 papers in high-level academic conferences and journals nationally and internationally, including more than 40 CCF-A papers in venues such as ACM Transactions on Graphics, IEEE TVCG, IEEE CVPR, AAAI, etc. He holds 15 patents granted in China and the United States. The 3D registration and reconstruction techniques he developed have been applied in high-precision scanners and 3D human body reconstruction systems. In 2014 he was supported by the National Natural Science Foundation's grant for Excellent Youth; he has presided over one key project of the National Natural Science Foundation and won the second prize for natural science of Zhejiang Province.
Can people feel the low heat radiation from very cold surfaces? Here's a thought experiment about the way that heat is transferred through radiation. Humans can physically feel when a hot object radiates heat on them, such as a campfire or an infrared-based space heater. But can humans feel cold objects the same way? Say that a scientist is working at a research station at the South Pole in winter, when the outside temperatures reach -80 Celsius. (Feel free to also imagine it as 0.1 K for the sake of the experiment.) Say that the research station is a big building with multiple floors and rooms within each floor. It has a strong environmental control system that keeps the air at a pleasant 25 degrees all over the building. However, the outside walls of the building are still very cold, say -40 Celsius. The internal walls are at room temperature. Say that our scientist is walking from an internal room in the building to a room that has one or several external walls. Say that his body is surrounded by an equal 50-50 mix of external walls and internal walls. The air temperature is still 25 degrees. Will our scientist feel the coldness of the wall? Will the wall feel like it's radiating cold on the person? Follow-up question: Assuming the answers to the previous questions are yes. Say that we place a person in a closed room, where the walls on all sides, as well as the floor and ceiling, are at a temperature of 0 K. The air in the room is kept at 25 degrees Celsius. The person is not touching the walls or the floor (say they have incredibly insulating shoes). Will that person freeze to death in minutes as they would in space? You might be interested in reading about Pictet's experiment, which shows an apparent (and puzzling) reflection of cold. https://en.wikipedia.org/wiki/Pictet%27s_experiment Regarding feeling the coldness: Sure, humans can feel when their skin emits more IR radiation than it receives.
You can test this easily by opening a freezer and standing some ways away from it (avoiding the cold air itself). For the follow up question, we can calculate it: Assuming the person is naked, their outer surface is about 33°C = 306 K. If they are standing up, surface area is roughly 1.8 m². The total emitted black body radiation can be calculated as: $$P_\text{net} = A \sigma \varepsilon \left( T^4 - T_0^4 \right)$$ Where $\sigma$ is the Stefan–Boltzmann constant and $\varepsilon$ is the emissivity, which is close to 1. With $T$ = 306K and $T_0$ = 0K, this gives 895 W. The human body internally produces about 200 W of heat in mild exercise or when shivering. This gives net transfer of 700 W. The cooling effect of the emitted radiation is balanced by the heating effect of the air once skin temperature drops below 25°C. Based on a rough value of convection in otherwise still air of 20 W/m²K, the balance point is skin temperature of 12°C. This is too low for long term survival, so we can be sure that eventually the person will die. In the initial state the conduction to air would further cool the skin at 288 W, but skin temperature will quickly drop to near the air temperature. How fast will this happen? Based on thermal capacity of water and a body weight of 80 kg, with 700 W net heat emission, body temperature would drop by 8°C every hour. Hypothermia would begin in about 15 minutes and loss of consciousness in an hour. Human body's mechanisms to reduce blood flow to extremities will give a little more time by reducing internal heat conduction. Compared to vacuum, cooling down would be slower because of less evaporation, but otherwise the situation is similar. Of course in vacuum you would suffocate long before freezing. Even a thin layer of clothing would greatly prolong the survival, and thick enough clothing would prevent freezing altogether. Complicating factors: Crouching down to a ball shape would reduce surface area by about half. 
It is not realistic to keep air at 25°C close to the 0 K walls without significant air flow. Air flow would increase initial cooling, but bring in heat once skin temperature drops below 25°C. Heavy exercise would temporarily increase body heat production and also air flow. Emissivity of the walls matters too. If the walls were made of uncoated metal, they would reflect back some of the IR radiation. Yes, they will feel the coldness, but it's the lack of heat radiation, not radiating coldness. Essentially, the scientist's body is radiating more heat than it is receiving, and it is this difference that they are feeling. If the scientist doesn't know how heat and infrared radiation work, they may be hard pressed to tell that it's lack of heat radiation and not cold radiation just from what they can feel. But will this actually be felt? It seems the power of the IR emissions from a body-temperature object is so low that human senses will be unable to notice the imbalance between body radiation and irradiation by the cold object. @Ruslan Most definitely yes. On a clear, windless night it is easily felt and measured with an IR thermometer. The atmosphere radiates with effective temperatures of about -40 °C, even if the temperature of the air and the ground is about 0 °C. The top of your hand will feel much colder than the bottom, which is irradiated by the ground, similar to (but weaker than, of course) standing close to a warm fireplace. @Martin'Kvík'Baláž What an IR thermometer measures is irrelevant when talking about the sensitivity of human skin. Near the ground there's air that can conduct heat, especially from bottom to top due to convection, so what you described doesn't really prove that at these temperatures the radiation is powerful enough to be sensed. While I have not done the numbers for your very particular follow-up question (never mind that it's impossible to reach 0 K), I have every reason to believe the person will survive. Consider the night sky.
It can be as cold as -40C. The difference in radiation between -40C and -273C is pretty minimal, so that's basically the same situation for a 37C human. We regularly survive with air temperatures far worse than 25C, so we have every reason to believe a human would survive in the situation you lay out. Will the wall feel like it's radiating cold on the person? Nothing "radiates cold". Everything radiates some amount of infra-red radiation and hotter things radiate more IR. If you are surrounded by a mixture of warm and cold walls with the ambient (air) temperature maintained at a constant value, then you might be able to notice the directional difference in the IR radiation reaching your exposed skin - although I think the difference in temperature between the warm and cold walls would have to be quite high for this to be noticeable. More likely you would notice directional convection currents in the surrounding air. If you are surrounded by cold walls but the ambient temperature is kept constant then you will not freeze to death no matter how cold the walls are (as long as you are not in contact with the cold surfaces). Otherwise we would all freeze to death every night as soon as the sun sets.
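The Stefan–Boltzmann arithmetic in the calculated answer above can be checked numerically. This is a rough sketch using that answer's own stated assumptions (1.8 m² surface, emissivity 1, 0 K walls, 20 W/m²K convection, a body modelled as 80 kg of water); the bisection routine and variable names are mine.

```python
# Rough numerical check of the radiative-cooling estimates in the answer above.
# Assumptions (from the answer): A = 1.8 m^2, emissivity = 1, walls at 0 K,
# air at 25 C with h = 20 W/(m^2 K), body modelled as 80 kg of water.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
A, EPS = 1.8, 1.0
T_SKIN, T_WALL, T_AIR = 306.0, 0.0, 298.0
H_CONV = 20.0      # convective heat transfer coefficient, W/(m^2 K)

# Net radiated power at the initial skin temperature (~895 W in the answer)
p_rad = A * SIGMA * EPS * (T_SKIN**4 - T_WALL**4)

# Balance: radiation out = 200 W metabolic + convection in from the warmer air
def net_loss(t_skin):
    return A * SIGMA * EPS * t_skin**4 - 200.0 - H_CONV * A * (T_AIR - t_skin)

# Bisection for the equilibrium skin temperature (net_loss is increasing)
lo, hi = 250.0, 306.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if net_loss(mid) < 0 else (lo, mid)
t_eq = (lo + hi) / 2   # ~285 K, i.e. roughly 12 C as the answer states

# Cooling rate at 700 W net loss for 80 kg of water (~8 C per hour)
rate_per_hour = 700.0 * 3600 / (80.0 * 4186)

print(round(p_rad), round(t_eq - 273.15, 1), round(rate_per_hour, 1))
```

The numbers reproduce the answer's estimates: about 895 W radiated initially, an equilibrium skin temperature near 12 °C, and a core-temperature drop of roughly 8 °C per hour.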
It's been 10 years since I joined Kexi and thus the KDE community. I think writing down some history and a summary makes sense. 2003-03-28: first touch on Kexi sources for porting. It all started at a technology fair in Warsaw in 2003. I wasn't too keen to go but got free tickets and free time. I met a founder of OpenOffice Polska LLC (later renamed to OpenOffice Software) from Warsaw, presenting its adaptation of a deeply localized, nicely prebuilt office suite based on OpenOffice.org. The office suite had been open-sourced from StarOffice by SUN over two years before, and back then localizations or user handbooks basically did not exist. During the meeting, among other topics, we also discussed an apparent missing bit in the OpenOffice.org suite: a rival to MS Access. I proposed to perform some research on how the app could be added. I got hired and engaged full-time from March 2003. The business model was largely similar to what is known from server Linux tools or support subscriptions: offer the tools for free, with the source code, and build products and services (such as support) on top. I was already confident that my adventures with open source would start soon, I just wasn't sure which project to join. My initial attempt should have been obvious: say hello to OpenOffice.org to start an MS Access clone project within it. That was a nice theory but it never worked out, since the OpenOffice.org project wasn't even semi-openly governed. Talking to OO.org meant talking to SUN Microsystems. A small company is rarely a partner in such relations. So another solution was to start from scratch or join an existing open source project that shared our goals. By that time a great smart guy, Lucijan Busch from Austria, had already started his work on Kexi, which he launched as a summer project in 2002. (A hint for all of you who think you're too young to start doing some KDE Junior Job: you are wrong, Lucijan was only 16 years old when he started Kexi!)
Based on the business model of OpenOffice Polska, Kexi had to run natively on Windows to integrate well with the OS. So a side effect of my project was the KDE on Windows initiative, which resulted in another general-purpose target for KDE software and is now a subproject on its own. The initial selection of features (such as larger parts of KDElibs) was closely related to the needs of the Kexi Windows port. I owe big credit to my employer for allowing me to contribute in a sane way instead of just accepting code forks. Naturally, for some time I was the only hacker using MSVC for actual KDE code. I had the Linux KDE Desktop around all the time, but for about the first three years my Kexi development was happening in the MSVC IDE. The primary reason for that at the time was the poor availability of debugging tools for Linux and the slow gcc compiler. And Kexi was already a really complex layered app for me, developed with limited head count and time pressure. With the arrival of powerful hardware and the Qt Creator and KDevelop tools, that reasoning more or less disappeared. Even though I still see certain teams working this way, this is no longer happening in the Kexi project. The investment resulted in a Windows version of Kexi dynamically linked with commercial Qt, as a GPL version of Qt was not available for Windows back then. The software on the KDE side was fully LGPL, so that option was all legal. Just as with our OpenOffice offerings, our Kexi customers were coming from Windows. To make everybody's lives easier I prepared a combined installer offering the OpenOffice apps and Kexi bundled side by side on a single CD. Another nice credit to my employer could be that there were no unpleasant "code drops" once a year or so. The full code of Kexi always immediately landed in the KDE repository, which was a result of development happening directly within the KDE infrastructure.
And for the direct benefit of KDE users on Linux, the Kexi code had a Linux/Windows multiplatform nature inspired by Qt itself, with Mac versions available too. Since leaving OpenOffice Software LLC, I have been following this track for any development within Kexi, even while its LGPL licence would permit different strategies. As a result, Kexi was relatively popular on Windows among people using the company's OpenOffice flavour, especially outside of Poland (both English and Polish localizations were supported). By "relatively" I mean that Kexi is a more specialized tool than general-purpose office apps such as a word processor. And as expected, not many paying customers were interested in non-Windows versions. Summing up, given the amount of freedom, the employment at OpenOffice Software felt a bit more like a sponsorship. It lasted until late 2008, and my full-time development of Kexi lasted until late 2007. In summer 2004 Lucijan left the Kexi project and I took over the maintainership. After six months or so I realized I had some thoughts to share with my new KDE friends, to give back to those from whom I was learning new things. Now I see that a majority of my 132 blog entries were devoted to Kexi, KOffice/Calligra, or KDE on Windows. (By the way, I'd like to encourage those of you that are "just coding" to start blogging too. It's not necessary to be a fan of philosophy, sociology, or psychology like Aaron :) But I can see many of my contributions that were inspired by one or another blog or talk, or just a comment. So maybe this process is recursive? When you work remotely, why wouldn't you let others better understand your difficulties and your reasons for certain decisions, and learn about theirs in return? It will never hurt.) Coding is only a part of the story. During that time, like many others, I gave a few presentations on Kexi at conferences and meetings devoted to education or open source, including KDE Akademy. Kexi was a topic at numerous KOffice/Calligra developer meetings.
During the development, thousands of lines of IRC discussion have been exchanged, dozens of design documents (wikis) created, and tutorials written, as well as two handbooks. I exchanged hundreds of emails with contributors and users. Recently we started to advertise the dedicated forum. We managed to push many releases. It is always interesting to realize how diverse the needs of Kexi users are. Supporting a wide spectrum of use cases can give extra satisfaction. When I had been working on Kexi for two years or more, I noticed SUN had finally started its OpenOffice.org Base project, truly the only OO.org app that wasn't migrated from the StarOffice suite. I quickly noticed that SUN's Java-and-servers mindset left its stamp on Base. Being based on a Java database back-end, it was too resource-hungry for the desktop computers available at the time. Secondly, Base was internally based more on Writer than built as an independent app. It apparently followed the habit of OpenOffice of mimicking MS Office's look and feel. It was clearly a huge and risky effort for a new app. Thirdly, Base's local storage was based on compressed XML (to me, another sign of SUN's Java-and-servers mindset), which is useful for documents, systems integration, and standards such as ODF, but which is also nonsense to anyone accustomed to the physics of relational databases (including MS Access' Jet back-end). After realizing this I was rather happy I had had no opportunity to join the project. My employer shared this opinion. The Kexi project operates in a specialized area within a hard market filled with proprietary "enterprise" solutions. In the meantime, while Kexi was evolving, a number of respected and already powerful open source competitors passed away. Rekall was open-sourced in 2003 but faded when the company behind it disappeared, back in KDE 3 times. The same applies to Knoda. As of now, the Glom database tool for GNOME is actively maintained.
And while OpenOffice/LibreOffice Base is of course maintained, these do not seem to be as actively developed as other apps from their respective office suites. That said, primary competitors such as FileMaker and Microsoft Access are big and enjoy diversified funding. They are apparently trying to secure their market share by using closed document formats, something especially important for this data-oriented kind of software. Despite all its "interoperability buzz" around the proprietary DOC, XLS, and PPT formats, MS never agreed to open up its MDB format. This way, programs and databases based on MDB remain the most closed ones; realistically, you cannot switch a compiler or database engine. Through its history a number of talented developers joined the Kexi project and contributed their valuable time. Two of them stayed for an especially long time: Sebastian Sauer and Adam Pigg. Adam is still a contributor to Kexi, and Sebastian is active in Calligra too. Of course, one big contributor is also the Calligra and wider KDE community as a whole, because Kexi shares the common development infrastructure and various initiatives. In 2012 new developers joined and are already contributing. There is also pretty high interest in Google Summer of Code tasks related to Kexi, even while at most one slot is available every year. The question of the developer workforce needed to generate a snowball effect appears constantly. Rational users know that the most complicated improvements would need sponsored developers and a coordinated effort. Perhaps a dedicated Kexi Foundation accepting donations worldwide would be the answer? That would be similar to what Krita, a sister Calligra project, did just last year.
10 years. So yes, I am that old. A software architect who still remembers how to program and actually still programs. All that wouldn't be possible without the supportive people one can meet on the way. And never to forget, special thanks go to my lovely wife and the whole family. For the curious, here is a visual timeline presenting a large part of computing history. The history of Kexi and KOffice, Calligra, and KDE has been combined into a single graph completed with a bit of historical context, even with the competing OO.org and MS Office projects. From the "Lines of code" column (counted using the SLOCCount tool) you can note that the size of Kexi's code base did not increase much after 2009. This means the activity leans towards removing defects from the app and improving its usability and existing features rather than developing new ones. This makes a difference for the users, given that the available resources are limited. Even while users are hungry for cool new features, only a few simpler features have been added during this period.
'use strict'

const test = require('mvt')
const tp = require('./index.js')

const q = [
  'ary=a',
  'ary[]=b',
  'boolval=false',
  'first=andrew',
  'zip=37615',
  'last=carpenter',
  'zip=37601&q=%3F',
  'number=22.55',
  'nil=undefined',
  'flag',
  'amp=%26',
  'eq=%3D',
  'ary=c',
  'encodedAry%5B%5D=1',
  '#we=are&totally=done'
].join('&')

const p = tp(`http://localhost:80/base/path/resource?${q}`)

test('First name matches', (assert) => {
  assert.is(p.first, 'andrew')
})

test('Last name matches', (assert) => {
  assert.is(p.last, 'carpenter')
})

test('Encoded question mark is correctly parsed', (assert) => {
  assert.is(p.q, '?')
})

test('Encoded ampersand is correctly parsed', (assert) => {
  assert.is(p.amp, '&')
})

test('Encoded equal sign is correctly parsed', (assert) => {
  assert.is(p.eq, '=')
})

test('Numeric value parses as such', (assert) => {
  assert.is(p.number, 22.55)
})

test('Null and undefined strings parse as null', (assert) => {
  assert.is(p.nil, null)
})

test('Boolean value parses as such', (assert) => {
  assert.is(p.boolval, false)
})

test('Flag (key with no value) parses to true', (assert) => {
  assert.is(p.flag, true)
})

test('Zip code array is correctly parsed', (assert) => {
  assert.is(JSON.stringify(p.zip), '[37615,37601]')
})

test('Array values are correctly parsed', (assert) => {
  assert.is(JSON.stringify(p.ary), '["a","b","c"]')
})

test('Encoded array values are correctly parsed', (assert) => {
  assert.is(JSON.stringify(p.encodedAry), '[1]')
})

test('It ignores things after #', (assert) => {
  assert.falsy(p.we || p.totally)
})

test('Parse undefined', (assert) => {
  assert.notThrows(() => tp())
})

test('Parse empty string', (assert) => {
  assert.notThrows(() => tp(''))
})

test('Parse only host', (assert) => {
  assert.notThrows(() => tp('http://localhost:80'))
})

test('Parse trailing slash', (assert) => {
  assert.notThrows(() => tp('http://localhost:80/'))
})

test('Parse trailing question mark', (assert) => {
  assert.notThrows(() => tp('http://localhost:80?'))
})

test('Parse trailing ampersand', (assert) => {
  assert.notThrows(() => tp('http://localhost:80?key=value&'))
})

test('Parse trailing ampersand again', (assert) => {
  assert.notThrows(() => tp('http://localhost:80&'))
})

test('Parse trailing question mark and ampersand', (assert) => {
  assert.notThrows(() => tp('http://localhost:80?&'))
})
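For orientation, here is a hypothetical mini-parser sketching the behaviour these tests exercise (value coercion, bare-key flags, repeated keys merging into arrays, the fragment ignored). The real implementation lives in `./index.js`; this sketch only approximates it and is not the module under test.

```javascript
// Hypothetical sketch of the parsing behaviour the test suite above expects.
function parseQuery (url = '') {
  const out = {}
  const qIdx = url.indexOf('?')
  if (qIdx === -1) return out
  // take only the query string, dropping everything after '#'
  const query = url.slice(qIdx + 1).split('#')[0]
  for (const pair of query.split('&')) {
    if (!pair) continue // tolerate trailing '&'
    const eq = pair.indexOf('=')
    let key = decodeURIComponent(eq === -1 ? pair : pair.slice(0, eq))
    const isArr = /\[\]$/.test(key) // 'ary[]' and 'ary' share one key
    key = key.replace(/\[\]$/, '')
    let val
    if (eq === -1) {
      val = true // bare flag
    } else {
      const raw = decodeURIComponent(pair.slice(eq + 1))
      if (raw === 'true' || raw === 'false') val = raw === 'true'
      else if (raw === 'null' || raw === 'undefined') val = null
      else if (raw !== '' && !isNaN(Number(raw))) val = Number(raw)
      else val = raw
    }
    // repeated keys accumulate into an array
    if (key in out) out[key] = [].concat(out[key], val)
    else out[key] = isArr ? [val] : val
  }
  return out
}
```

A usage example: `parseQuery('http://h/?zip=37615&zip=37601')` yields `{ zip: [37615, 37601] }`, matching the zip-array expectation above.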
import math

import torch
import torch.nn as nn
import torch.nn.functional as F

from models.commons.initializer import init_rnn_wt, init_linear_wt


class Attention(nn.Module):
    def __init__(self, H, method='general'):
        super(Attention, self).__init__()
        self.method = method
        if self.method == 'general':
            self.W = nn.Linear(H, H)
            init_linear_wt(self.W)
        elif self.method == 'concat':
            self.W = nn.Linear(H * 2, H)
            self.v = nn.Parameter(torch.FloatTensor(1, H))
            init_linear_wt(self.W)
            # scale by the hidden size H (v has shape [1, H], so size(1), not size(0))
            stdv = 1. / math.sqrt(self.v.size(1))
            self.v.data.normal_(mean=0, std=stdv)
        self.W_c = nn.Linear(H * 2, H)
        init_linear_wt(self.W_c)

    def forward(self, K, V, Q):
        # K: Keys   -> [B, L, H]
        # V: Values -> [B, L, H]
        # Q: Query  -> [B, 1, H]
        # ======================
        # E: Energy -> [B, 1, L]
        # returns a [B, 1, H] tensor of V weighted by E
        # Calculate attention energies for each encoder output
        # and normalize the energies to weights in range 0 to 1
        e = F.softmax(self.score(K, Q), dim=2)  # [B, 1, L]
        # re-weight values with the energies
        c = torch.bmm(e, V)  # [B, 1, H]
        h = torch.tanh(self.W_c(torch.cat((c, Q), dim=2)))  # [B, 1, H]
        return h, e

    def score(self, K, Q):
        if self.method == 'dot':
            # bmm between [B, 1, H] and [B, H, L] => [B, 1, L]
            return torch.bmm(Q, K.transpose(1, 2))
        elif self.method == 'general':
            return torch.bmm(self.W(Q), K.transpose(1, 2))
        elif self.method == 'concat':
            # Luong-style concat attention
            B, L, _ = K.shape
            E = self.W(torch.cat((K, Q.repeat(1, L, 1)), dim=2))  # [B, L, 2H] -> [B, L, H]
            return torch.bmm(self.v.repeat(B, 1, 1), E.transpose(1, 2))  # [B, 1, H] x [B, H, L]
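As a plain-Python illustration of what the `'dot'` branch of this module computes (query–key dot scores, a softmax, then a score-weighted sum of the values), here is a dependency-free sketch for a single query. The function name and example tensors are mine, and the sketch covers only the scoring and re-weighting steps, not the final `tanh(W_c[c; Q])` projection.

```python
import math

def dot_attention(K, V, q):
    """Single-query dot-product attention over L key/value vectors.

    K, V: lists of L vectors (each of length H); q: one vector of length H.
    Returns (context, weights): the softmax-weighted sum of V and the weights.
    """
    # e_i = q . k_i  -- the 'dot' scoring branch of Attention.score
    scores = [sum(qj * kj for qj, kj in zip(q, k)) for k in K]
    # softmax over the L scores (the F.softmax(..., dim=2) step)
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    # context = sum_i w_i * v_i  (the torch.bmm(e, V) step)
    H = len(V[0])
    context = [sum(w * v[h] for w, v in zip(weights, V)) for h in range(H)]
    return context, weights

# Tiny example: the query points along the first key, so the first
# value vector dominates the returned context.
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0]]
ctx, w = dot_attention(K, V, [2.0, 0.0])
```

With these inputs the scores are [2, 0], so the first weight is the larger one and the context leans toward the first value vector.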
Support for KLEE in Debile

Name: Marko Dimjašević
Email: firstname.lastname@example.org (PGP Key ID: 1503F0AA)
- IRC nick: mdim (on oftc.net and freenode.net)
IM (Jabber): email@example.com

Background: I am a 4th-year computer science PhD student at the University of Utah, USA (the UTC-6 timezone). I also did research at NASA. My research is in software verification in general and symbolic execution in particular (list of publications and conference talks). I'm fluent in C (15 years), Java (4 years), and Python (3 years), but I speak Bash (4 years) and C++ (15 years) too. For the last 10 years I've been a free software enthusiast and for the last 3 years a Debian user. Last year I started contributing to Debian by packaging software verification tools. I have contributed code to several free software projects, and in non-coding ways to a few as well, including KDE. I have some system administration skills, as I host my own email, web, and cloud services, and I've also worked on packaging software verification tools, as can be seen on Alioth and mentors.d.n. A research project I've been working on applies the KLEE verification tool to real-world software. In 2012 and 2013 I successfully participated in Google Summer of Code as a student. With all that in mind, I believe I am a great candidate for this project, which I proposed to Debian. A list of my current and recent projects can be found on my website.

Project title: Support for KLEE in Debile

Project details: Debile is a Debian package analysis infrastructure. It can invoke a software analysis tool - among a few supported tools - that automatically analyzes a Debian package against a coding style or for common software errors such as a null pointer dereference. KLEE is a software analysis tool that performs symbolic execution of a target C program. The goal of this project is to add support for KLEE in Debile.
A beneficial side effect of this project is that packages with C code can get automatically generated, high-code-coverage test suites thanks to KLEE, as it explores as many execution paths in a program as possible.

Synopsis: Writing and maintaining software is challenging. Software is complex and keeping it bug-free remains an unsolved problem. Debian Stretch, the next Debian stable release, comprises 850 million lines of software source code. Code written in the C programming language accounts for 41% of the lines, which is more than in any other programming language. Debile, Debian's package analysis platform, so far includes the CppCheck, Clang Static Analyzer, and Coccinelle tools for packages with C, C++, and Objective-C code. However, all of these tools perform static analyses, which can result in false positives. Too many reported false positives hinder adoption of the tools. To address this issue, I propose to integrate the KLEE tool into Debile. KLEE performs dynamic analysis on C programs and reports only true positives, i.e. real bugs. It does so by systematically exploring as many execution paths in a given program as possible. Errors that KLEE reports exist only along feasible execution paths, i.e. they correspond to real bugs in the program.

Benefits to Debian: With Debian Stretch, the next stable release of Debian, having around 350 million lines of code written in C, a sound analysis tool for C programs such as KLEE integrated into Debian's package build infrastructure would be very appealing to Debian, as KLEE can automatically detect some categories of errors that evade a compiler's analysis. For every error KLEE detects it also generates a corresponding witness test case. Even if it works for only a fraction of all the C lines, it will still be very beneficial, as it will make Debian more stable, robust, and secure.

Deliverables: The deliverables are a patch to the Debile system that adds support for KLEE.
Project schedule: I expect the project to last until the end of Google Summer of Code 2016. As I hope to become a Debian Developer eventually, I plan to continue working on Debile even after GSoC is over. Modifications need to be made both to KLEE (such as implementing the Firehose file format) and to Debile (adding wrapping code to work with KLEE). I have already been working on the project: I have set up a Debile instance on my server, created initial versions of packages for KLEE and its so-far-unpackaged dependency, had two patches merged upstream into Debile's code base, and built a prototype tool flow for analyzing a Debian package with KLEE. Roughly, this is the timeline:
- Get more familiar with Debile's and KLEE's code bases by May 23,
- Implement the Firehose format in KLEE by June 10,
- Integrate KLEE into Debile by July 22,
- Test KLEE's operation in Debile and fix glitches until the end of the GSoC program.
Exams and other commitments: No, I am not going to have any course exams, as I passed all of them in my PhD program two years ago. Nevertheless, I plan to present my dissertation proposal at some point during the summer.
Other summer plans: My summer plan is to work on this project, which aligns perfectly with my research project at the university: analyzing Debian packages with KLEE. I might take a short vacation, but I will make sure to coordinate and communicate this with the GSoC project mentors.
Why Debian?: Debian is a free software community project. I am a strong proponent of free software. To the best of my knowledge, no other free software operating system has such a positive, big, vibrant, and independent community, not steered by any for-profit company's interests. Debian is also known for its stability, which is why I chose it as the operating system for both my laptop and servers.
After being only a Debian user for 3 years, I felt like contributing back to Debian.
My previous Debian contributions:
- Two patches to Debile
- A work-in-progress Debian package for the Simple Theorem Prover within the Debian Science Team (it was in the NEW queue once): https://anonscm.debian.org/git/debian-science/packages/stp.git
- A work-in-progress Debian package for KLEE: http://mentors.debian.net/package/klee
Are you applying for other projects in SoC? No.
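To give an idea of the Firehose work mentioned in the schedule, below is a sketch of how a KLEE-detected error might be serialized. It follows the general structure of the Firehose interchange format (analysis/metadata/results), but the specific message text, path, and line numbers are made-up illustrations, not actual KLEE output:

```xml
<analysis>
  <metadata>
    <generator name="klee"/>
  </metadata>
  <results>
    <issue>
      <!-- Hypothetical example of an error KLEE could report -->
      <message>memory error: out of bound pointer</message>
      <location>
        <file given-path="src/main.c"/>
        <point line="42" column="7"/>
      </location>
    </issue>
  </results>
</analysis>
```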
SPQuery, Dates and Regional Settings First, some setup: My Site is set to Eastern Time, and the server is located in Eastern Time. I'm trying to run a query against a list using the created date. I have tried using all variations I can think of and just can't get the results I want. My CAML query has a date specified as 2012-08-25T06:01:49Z which is in ISO8601 format. It is a UTC date. But when I execute my query, it goes to the database as 2012-08-25 06:01:49. So, this works as expected and I get the results I want. Now, if I change my user regional settings to Pacific. The same query sends to the database with 2012-08-25 09:01:49 and I don't get the results I expect. Since I provided the date in the CAML as UTC, is there any way to get the query to execute with the exact date I passed in instead of translating it using Regional settings? I've torn my hair out enough over this and was hoping someone might have some insight? Ok, so I figured this out (Edit: No I Didn't). It turns out, that there is a StorageTZ attribute on the Field element for CAML. You can set this value to UTC and then it will use the date as you pass it in instead of converting it to the users local time. For the Field element: http://msdn.microsoft.com/en-us/library/dd588183(v=office.11).aspx And this is where I first saw the StorageTZ attribute: http://msdn.microsoft.com/en-us/library/ms197282.aspx (found this link via the http://sladescross.wordpress.com/2012/05/28/spquery-iso-datetime/ link that @C. Marius provided, so thank you!) Ok, I may have spoken too soon. Even with this option the query is still sent to the DB with an automatic adjustment based on my regional settings... Oh SharePoint, how I hate thee sometimes. If you solved the problem stated in your original question, you should mark this answer as accepted answer to help future visitors. Find here (http://sladescross.wordpress.com/2012/05/28/spquery-iso-datetime/) an explanation on how the ISODate gets stored and used. 
In simple words, DateTime values are stored as UTC and used as such, except when column values are obtained via an indexer, which reads date-time values according to the Regional Settings for the site. Beyond that, in code, always convert back to ISO DateTime like this: SPUtility.CreateISO8601DateTimeFromSystemDateTime(DateTime.UtcNow). See here for more conversion examples: http://prasanjitmandal.blogspot.ch/2010/06/sharepoint-datetime-format-conversions.html
Thanks for the reply and information. My biggest problem isn't getting things into or out of UTC. The problem is that when I provide a UTC date to an SPQuery, it sends that date to the DB as the user's local time instead of keeping it as UTC. Since I'm doing stuff that depends highly on date/time, this is causing results to be left out that should be included.
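For reference, here is a sketch of the kind of CAML fragment discussed in this thread, with StorageTZ set to tell SharePoint the supplied value is already UTC (the field name and date are just examples; and note that, per the thread above, the value may still be adjusted by regional settings in some cases):

```xml
<Where>
  <Geq>
    <FieldRef Name="Created" />
    <!-- StorageTZ="TRUE": treat the value as UTC rather than the user's local time -->
    <Value Type="DateTime" IncludeTimeValue="TRUE" StorageTZ="TRUE">2012-08-25T06:01:49Z</Value>
  </Geq>
</Where>
```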
Jul 2, 2018: Very well structured for a refresher course. Thank you Professor Ghrist for your effort in putting this course together. A little additional outside research was required but well worth the effort.
Feb 9, 2021: Excellent introduction to Calculus. I wanted to review the material to tutor my child, but I am very happy that I learned a whole new way of looking at Calculus. Thank you so much Prof. Ghrist.
Oct 9, 2016: Wonderful class, and I am looking forward to the next part.
by Roy G • Sep 19, 2020: A great course. The homework is challenging and rewarding.
by Γιάννης Π • Jun 26, 2020: Easiest way to understand difficult mathematical concepts.
Sep 24, 2019: It is pretty instructional and the professor is also nice.
by Xavier C • Aug 26, 2019: Very interesting & difficult to me (I'm just a programmer).
by RAJATH G K • Jun 11, 2017: It's a very good course. I learnt a lot from this course...
by Andrew M • Mar 26, 2016: Difficult but rewarding, explains some really lovely math!
by Deleted A • Feb 29, 2016: Simply put, this is the best course on Coursera, part one.
by Paul F • Mar 10, 2020: Great course. Thank you for making this available online.
by xiangdong z • May 27, 2017: Great course, teaches me through clear and detailed clues.
by Tun L A • Apr 29, 2016: This course helped me so much, so I would like to thank you.
by Reema T • May 15, 2020: Videos should be more helpful for completing exercises.
by martin p • Dec 25, 2017: Excellent approach to calculus with top-notch material.
by Wenyuan D • Jul 12, 2017: The teaching material is comprehensible and super fun.
by Gadiel R • Jan 10, 2017: Great lessons, challenging material, and good support.
by Aleksander S • Jan 5, 2016: Definitely one of the best courses on Coursera so far.
by Emir H • Jun 9, 2017: Really nice, well explained and entertaining course!
by Nirmala R • Sep 25, 2016: The visualizations were fantastic!!
Loved the course.
by augustin m k • Nov 12, 2018: The course was good for me; I understood it very well.
by Dmitri M • May 14, 2017: A useful refresher of my university courses! Thanks.
by Snehal P • May 24, 2020: It was too difficult for me, but I finally got to enjoy it.
by Aurora M G • Feb 28, 2016: It is very interesting, but I miss a hard due date.
by Thiago C C • Jul 27, 2021: Great professor. Clear explanations. Fun lectures.
Dec 5, 2020: I'm very glad, but I couldn't get my certificate.
May 16, 2016: Very intuitive; I think 1.5x speed is more appropriate.
Get Started with .NET PDF Library Using VB.NET
Help VB.NET users quickly evaluate the .NET PDF SDK with simple sample code for creating a blank page in a PDF in VB.NET.
Looking for an HTML5 PDF editor? EdgePDF: ASP.NET PDF Editor is an HTML5 PDF editor and ASP.NET PDF viewer based on XDoc.PDF, jQuery, and HTML5, for WebForms projects.
As a professional third-party SDK supplier in the field of image and document management, RasterEdge is devoted to providing effective and fully-functional imaging solutions for developers working on .NET applications. Using this PDF SDK for VB.NET, you can easily and quickly complete PDF document creating and loading, PDF document conversion, PDF content redaction, PDF document annotation, PDF document protection, and more in any 32-bit or 64-bit .NET application, including ASP.NET web services and Windows Forms, for any .NET Framework version from 2.0 to 4.6.
This page is designed to help users get started with the standalone RasterEdge XDoc.PDF SDK for VB.NET after downloading and installing the RasterEdge .NET Imaging SDK on your PC. It starts with how to create a VB.NET console application, then how to create a blank page in a PDF document.
Create a VB.NET Console Application
Open Visual Studio and click "New" from the toolbar. Note: Visual Studio 2005 and above are supported. Choose "VB Language" and "Console Application" respectively to create a project.
How to Create a Blank Page in a PDF in VB.NET
Are you looking for an easy PDF creating and generating tool that allows you to create a new PDF document with blank pages?
If so, you can achieve this by using the RasterEdge PDF document creating component within a VB.NET web or Windows application. Add the necessary XDoc.PDF DLL libraries into your VB.NET application as references. Use namespace "RasterEdge.Imaging.Basic"; use namespace "RasterEdge.XDoc.PDF".
Note: If you get the error "Could not load file or assembly 'RasterEdge.Imaging.Basic' or any other assembly or one of its dependencies. An attempt was made to load a program with an incorrect format", please check your configuration as follows: if you are using x64 libraries/DLLs, right-click the project -> Properties -> Build -> Platform target: x64. If using x86, the platform target should be x86.
Copy the following VB.NET sample code into your application.
Dim outputFile As String = "C:\output.pdf"
' Create a new PDF document object with 2 blank pages
Dim doc As PDFDocument = PDFDocument.Create(2)
' Save the newly created PDF document to file
doc.Save(outputFile)
The demand for qualified individuals in cloud computing is on the rise, and it doesn't look to be slowing down. As more industries transition to a cloud model, the demand for cloud skills will continue to grow. Global public cloud revenue is expected to grow to over $300B in 2021. Now is a great time to learn a new skill and familiarize yourself with this rapidly changing field. But where should you start? In this article we'll start with the basics and learn all about cloud computing fundamentals.
The term cloud basically refers to a data center, or multiple data centers connected to each other, made available to users through the internet. Instead of having to manage computer resources directly, users can access them via the cloud on demand. Users do not have to directly manage the hardware or software; they can simply click to access these computing resources.
To better understand what cloud computing is, let's first look at what a typical workflow might have looked like for a company before cloud computing became so popular. Let's say a company wanted to host a website. They might have a system administrator responsible for the configuration and operation of the server that hosts the website. The sysadmin might be responsible for helping to:
- Evaluate and choose the data center or hosting provider
- Determine and provide server requirements to the data center / hosting provider
- Negotiate final services
Any issues or problems that arise require the sysadmin's attention. Let's have a closer look at the two main methods of acquiring servers or computer resources.
Before cloud computing became so popular, companies often dealt directly with data centers. A data center can house thousands of servers owned by private companies or individuals. Data centers also might rent physical servers to companies who don't provide their own.
- Dedicated space designed to house computer systems and any associated technology
- Data centers can contain redundant systems such as power backups and environmental controls
- Data centers can help provide physical security for the servers
- Any additional server specifications require a technician to physically implement them, e.g., someone physically installs additional RAM
- While the data center might have redundant systems and protections, if the entire facility fails, so does the server
- The negotiation and pricing process can take some time. Negotiating and pricing require a bit more effort with data centers, as the server configuration and associated components such as internet connection, power, and cooling need to be determined. When you're dealing with larger projects this process can take a long time, and if rushed it can end up costing a lot of money.
Generally with a hosting provider, instead of buying or renting a physical server in a data center, you pay a monthly fee and get a dedicated server; the hosting provider takes care of the rest. You choose from the available servers that the hosting provider offers, plus any additional configuration. The hosting provider deals with all the physical requirements of the server. A hosting provider is often just a company that owns servers in a data center and charges a fixed price to rent those managed servers.
- Monthly fixed price for a dedicated server
- Easier to get started and configure
- The hosting provider will work with the data center to implement changes to your server
- Any additional server specifications require a support ticket, e.g., contact the hosting provider to request additional RAM; response time will vary
- Cost can be steep: you pay a fixed price regardless of usage
One of the biggest downsides of a hosting provider, besides the fixed cost, is the reliance on them to implement changes to the server in a timely manner. For example, if you want to increase the RAM, you submit a ticket and then wait for them to respond.
Response times will vary, and priority support can in fact be an additional cost. So far we've looked at two ways to acquire computer resources: data centers, where a physical server is housed, and hosting providers, which charge a fixed monthly fee for a dedicated server. With those options, we either have to have someone physically upgrade our server or, in the case of the hosting provider, create a ticket and wait for them to apply the upgrades. Let's look at how cloud computing helps us avoid those obstacles.
Cloud computing is a model that makes computer resources available as a service. Users can easily launch these services without any human intervention. No more waiting around to hear back from a data center or hosting provider. Here are three of the main characteristics of cloud computing:
- On-demand & self-serviced - a user can provision services without any manual intervention
- Elastic - able to scale up and down at any time
- Measured - only pay for what you use; no fixed cost
Cloud computing providers offer their services via three main models. Cloud providers may offer services in more than one model, or they might specialize in only one area.
In the SaaS model, cloud providers install an application in the cloud and handle all the infrastructure and platforms necessary for the application to run. Users can access the application from cloud clients; this way, users do not need to download or install the application. The application runs in the cloud and can be accessed via the browser.
In the PaaS model, cloud providers set up the necessary infrastructure and platforms for users to develop an application. This can include an operating system, a web server, and other necessary elements. Users can then develop and deploy applications on that platform.
As its name suggests, the IaaS model provides infrastructure as a service. Typically this means things like servers and databases.
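The "measured" characteristic above is worth a quick worked example. The sketch below compares a hosting provider's fixed monthly fee with metered, pay-per-hour cloud billing; both rates are hypothetical illustrations, not any real provider's pricing:

```python
# Metered (pay-per-use) cloud billing vs. a fixed monthly hosting fee.
# All rates below are hypothetical, chosen only to illustrate the model.

FIXED_MONTHLY = 120.00   # hypothetical dedicated-server fee per month
HOURLY_RATE = 0.10       # hypothetical cloud rate for a comparable instance

def cloud_cost(hours_used):
    """Metered billing: you pay only for the hours the instance ran."""
    return round(hours_used * HOURLY_RATE, 2)

# An instance that runs only during business hours (8 h/day, 22 days/month)
part_time = cloud_cost(8 * 22)    # 176 hours
# An instance that runs around the clock for a 30-day month
always_on = cloud_cost(24 * 30)   # 720 hours

print(f"part-time: ${part_time}, always-on: ${always_on}, fixed: ${FIXED_MONTHLY}")
```

With these illustrative numbers, even the always-on cloud instance undercuts the fixed fee, and the part-time workload costs a small fraction of it; in practice the break-even point depends entirely on real provider pricing and usage patterns.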
There is no shortage of choice when looking for a cloud provider, and it can be very easy to spend unnecessary money if you rush to decide. Take your time when deciding; most cloud providers have some sort of free tier if you're interested in trying before buying. Here are a few popular cloud providers:
There are also a number of smaller providers out there. Not all of them offer the comprehensive computing services that AWS or Azure do, but they might be more economical.
In this article we saw that the cloud is really just a data center, or multiple data centers, made available through the internet. We saw how things were often done before the cloud became popular. We learned about the characteristics of cloud computing, looked at different cloud models like SaaS, PaaS, and IaaS, and covered some of the major providers. We also began to look at the architecture of cloud environments. In future articles we'll dive deeper into the infrastructure of the cloud.
- Hosting Provider / Dedicated servers / Cloud - OVH - a well-known hosting provider that also offers cloud services
- Cloud Provider - Digital Ocean
- National Institute of Standards and Technology
- Free for dev - a large list of SaaS, IaaS, and PaaS services, aimed at DevOps/sysadmins