Auto Initialization of local variables

I have the following code snippet:

    int j;
    printf("%d", j);

As expected, I get a garbage value: 32039491. But when I include a loop in the above snippet, like

    int j;
    printf("%d", j);
    while(j);

I get the following output on multiple trials of the program: 0. I always thought local variables were initialized to a garbage value by default, but it looks like variables get auto-initialized when a loop is used. I am using the VC++ 6.0 IDE and compiler, and I get the same result every time.

Garbage values are, by definition, garbage. You can't rely on them having any particular value. A garbage value can be anything, including 0.

The variable has an indeterminate value. It can be anything. Quoting C11 §6.7.9: "If an object that has automatic storage duration is not initialized explicitly, its value is indeterminate. [...]" Automatic local variables, unless initialized explicitly, contain an indeterminate value. If you try to use a variable while it holds an indeterminate value, and it either never has its address taken or its type can have trap representations, the usage leads to undefined behavior.

If the variables in the OP's code have automatic storage duration (which we can't tell), accessing them is undefined behavior, but not because of trap representations. See this. — @Lundin Thanks for the pointer sir, updated my answer, is it better now?

"As expected, I get a garbage value." Then your expectation is unjustifiably hopeful. When you use the indeterminate value of an uninitialized object, you generally get (and for your code snippets alone you do get) undefined behavior. Printing a garbage value is but one of infinitely many possible manifestations. "I always thought local variables are initialized to a garbage value by default, but it looks like variables get auto initialized when a loop is used." You thought wrong, and you're also drawing the wrong conclusion. Both of your code snippets, when standing alone, exhibit undefined behavior. You cannot safely rely on any particular result.

You can't know that it is undefined behavior, because we can't tell the scope from the OP's code. Similarly, you can't assume that the OP is using a system with trap representations. Therefore this is possibly incorrect: "When you use the indeterminate value of an uninitialized object, you get undefined behavior". See this. — @Lundin, thanks for pointing that out. I have qualified the answer a bit to allow myself what I think is sufficient weasel room. Anyone interested in why I do so (i.e. the details of when the behavior is not undefined) can follow the link you kindly provided.
STACK_EXCHANGE
Have you ever shopped on an absolutely awful e-commerce site? Do you remember why you hated it?
- You couldn’t find the item you were searching for.
- Browsing through the site’s inventory was slow and frustrating.
- The checkout process took way too long.
Anyone who’s used an e-commerce site has dealt with these problems. Companies like Edgecase are leveraging AI to create better product recommendations for customers and improve conversion rates. Taking advantage of AI through machine learning (ML) can give you the right tools to solve many types of organizational problems. But first, you have to break the problem down into a form your ML model can understand and solve. That involves framing your problem: understanding the question you want to answer and choosing the right dataset to answer it. Let’s dive into why you need to frame problems correctly and how your dataset affects which questions you’ll ask your ML model to answer.

Framing Your Problem Correctly Ensures Your ML Model Can Solve It

We can understand complex problems and datasets relatively quickly, a skill machine learning does not share. For example, we can drive in a new location based on what we can see, but an AI would require immense amounts of data and training to perform that task, which we consider relatively simple. Because of machine learning’s limitations with complex problems, it’s important to frame your problem in a way that your ML model will understand. To do so, you have to understand what your ML model is capable of. To help illustrate, let’s talk about bananas. Let’s say you need to transport bananas, but because of the large quantities, you need help determining the amount you’ll be shipping. In setting up your machine learning algorithm, you create a training dataset (training datasets are used to teach ML algorithms how to do the job they were created for) that has images of bananas. The pictures look like this.
Will your algorithm be capable of learning how to count these bananas accurately? Not likely. This is because of the limitations of what an ML model can and can’t do. Consider what you see in the image:
- A bunch of bananas, with some in front of the others.
What does an ML model see?
- A few shapes that indicate bananas, along with some yellow patches. (When zoomed in, the bananas start to blend together, even to the human eye.)
You know that the yellow patches behind the main bananas are more bananas, but an ML model can’t tell. It has no concept of object permanence. No matter what annotations your dataset uses, your model will have an extremely difficult time distinguishing those yellow patches as separate bananas. That’s not to say that bananas are your ML model’s kryptonite (that would be bananas). If you focus on bananas that are not obscured by other objects, then your model can identify them. You just need to define what percentage of a banana has to be visible in the image for it to register as a banana. When framing your question, you need to come up with strict definitions. For instance, if you want an ML model to tell when bananas are ripe, you have to decide what ripe bananas look like. Since people’s definitions of ripeness vary, you must fix a strict definition for your model. In the photo above, one person may consider number 5 to be ripe, while another waits until a banana looks like 6 or 7. You also have to determine how specific or general you want your model to be: if you’re identifying bananas, you have to decide whether all bananas count as the same thing or whether you will separate them by variety. Framing your issue calls for an understanding of machine learning’s limitations and for setting specific definitions and parameters relevant to your problem. But there’s more to it than that. Framing your problem also depends on what your dataset looks like.
Framing Your Problem Through Datasets: How It Works Together

Your dataset dictates how your problem is framed. By teaching your ML algorithm what it’s identifying and what the environment looks like, your dataset helps your ML model understand the problem. There are two main ways you can set up your dataset, and each one has a different impact on your ML model.

One: you discover a problem that you want to solve and then build a dataset around that problem.
- Advantages: Because you’re building your dataset around your issue, you know that your images are relevant. You can also find exactly what you need when you’re annotating data.
- Disadvantages: Since you’re focused on what you think you need, your data collection is biased, and you may miss valuable insights. These biases may not be something you’re aware of until you start building your training dataset.

Imagine you’re building an ML model that identifies wedding dresses. What is your definition of a wedding dress?
- Long, lacy white dresses?
- Elaborate, red dresses?
- Colorful, brightly patterned dresses?
Depending on your culture and background, you probably chose only one of these options. However, they’re all considered wedding garments in different parts of the world. Restricting your ML model to your own knowledge of the subject introduces bias, meaning you’ll miss out on certain types of wedding dresses.

Two: you use data you’ve already collected and map a question to it. So if you have a large collection of car pictures, you may try to identify any Jeeps in your pictures.
- Advantages: Your data represents the real world because it is not based on a single question or idea. You also don’t have to build a dataset from scratch.
- Disadvantages: You don’t know what’s in your images or where to find the items you want to identify, which leads to a lot of time spent curating images.
Since you don’t know what’s in your dataset, there’s also a chance you’ll miss important factors. A construction site safety management company ran into this when it tried to build a scene detection AI. It wanted the AI to identify demolition sites from images, but its customers quickly pointed out an issue with the finished product: the company hadn’t considered indoor demolition. If you want to avoid a mistake like this, you have to be intentional when you’re building a dataset. Being intentional means knowing the full range of the problem and ensuring your dataset covers all aspects of the issue. Training data has to be as close as possible to the real-world conditions your ML model will be working in. For instance, your images can’t be overly staged, but they have to be clear enough to be usable. Because your ML model is based on your dataset, you have to curate good data to make the most of your model. While each dataset is different depending on what you want your model to accomplish, there are some basic principles you don’t want to ignore.

The Basics of Building a Good Dataset

What would you do if you needed to speed up production on a new aircraft line? This was the problem Airbus had to solve. As the company started building its new A350 aircraft, it needed to accelerate its production time. The model had to identify disruptions in the process early and match these problems to applicable solutions. Airbus knew what it wanted from its ML model. However, the algorithm wasn’t fully trained from the outset. Airbus collected data on all the problems and actions that took place during the production process. This data was fed into the model to teach it how the building process should run. The result? The ML model was so successful that Airbus has even bigger plans to use machine learning in the future.
Because the training data accurately represented working conditions and was sufficiently comprehensive and specific, Airbus’ model improved its production process. Your training data will determine whether your machine learning model works as intended. The right dataset makes your model as effective as it can be. However, before you can build a good dataset, you have to decide on your question.

Step 0: The Question

Every part of your ML model, including your dataset, depends on your question. The question needs to be as unambiguous as possible. Adding specificity to your question means you’re less likely to leave out important details in your dataset. Going back to the wedding dress example, you may want to answer the question, “Is there a wedding dress in this picture?” However, it’s almost impossible to input every type of wedding dress. Plus, some people wear regular, everyday dresses for their wedding. Restricting your question to a specific type of wedding dress improves your model’s accuracy.

Step 1: Ensuring the Quality and Relevancy of Your Data

Once you have the right question, your main focus shifts to ensuring that your dataset closely represents the environment your model will work in. Your data not only has to reflect real-world conditions, but it also has to take into account all possible variables. If you’re building a dataset from scratch, setting up data collection sites or sending out field workers can help you capture the full range of the subject. Much of the value comes from getting a large sample of data, since it will cover more variables. In contrast, datasets that have already been collected need to be sorted through to find and mark relevant information. You also have to make sure your data relates to the problem you’re solving. For example, self-driving car models use video that people have taken of roads as they drive. This gives the car a large amount of information about the city.
However, this data doesn’t transfer from one place to another. If you record data about San Francisco, the car will be able to navigate fairly well. But move the car to Las Vegas, and suddenly all the data it’s collected is useless. Even though Las Vegas is also a major U.S. city, it’s different enough from San Francisco to require a specialized dataset. Of course, there’s only so much data you can collect before you have to test your model and see if it’s feasible. It’s time for a beta test.

Step 2: Beta Testing

Beta testing shows you whether your ML model is even feasible. By training your ML model as early as possible, possibly with as few as 100 images, you can start to get a sense of its potential. Testing your model also reveals the gaps in your dataset. You can then revisit these areas and fix the annotations or add more information. Finally, testing early gives you a human failsafe in case the model doesn’t work: your employees can step in to fix and refine it as they discover issues. Beta testing is a great way to find out whether your dataset is working for your ML model. Either way, improving your data will help you frame your problem in a way your model can understand and solve.

It’s No Problem

Framing your problems does more than help your machine learning model work correctly. It makes it easier to build the best dataset for your needs, which saves you time and money. Your training data has to suit your problem for it to be effective. At Labelbox, we can give you the tools to create superior datasets. We can also connect you with data labeling teams to speed up your annotation process.
OPCFW_CODE
Consider the following algorithm for placing processes into holes in memory: best fit. Your assignment is to write a simulator that takes processes of varying sizes, loads them into memory according to that rule, and swaps processes out as needed to create a larger hole. The following assumptions are made:

1. Memory size is 128MB.
2. The size of each process will be some integer in the range 1 to 128MB.
3. An initial list of processes and their sizes is loaded from a file. These processes should be loaded into a queue of processes waiting to be loaded into memory.
4. Memory is initially empty. (We will ignore the operating system.)
5. If a process needs to be loaded but there is no hole large enough to accommodate it, then one or several processes should be swapped out, one at a time, until there is a hole large enough to hold the process that needs to be loaded.
6. If processes need to be swapped out, the process that has been “in memory” the longest should be the one selected.
7. When a process has been swapped out, it goes to the end of the queue of processes waiting to be swapped in.
8. Once a process has been swapped out for a third time, we assume that it has run to completion, and it is not re-queued. Note: not all processes will necessarily be swapped out a third time.
9. The simulation terminates when the queue of processes is empty.

The process file will be in the following format: process id <space> process size (an integer). Here is a sample process file:

Your program should do the following:

1. Each time a process is loaded into memory, print a memory map showing all of the memory, filled and empty, and a statistics line according to the following example:

Number of processes in memory = 5
Number of holes = 3
Memory usage = 41%, Cumulative memory = 40%

• Memory usage refers to the percent of memory that is currently occupied by processes.
• Cumulative memory (in percent) gives the average of all the memory usage values up to and including the current process load.

2. When the queue is empty, the following should be printed:

Total processes loaded = 33
Average number of processes in memory = 14.4
Average number of holes = 6.3

This problem can be solved in Java by making a class to represent the process and applying the rules accordingly, as required in the project. With me everything will be done according to the requirements.
OPCFW_CODE
package eu.mikroskeem.shuriken.instrumentation.validate;

import eu.mikroskeem.shuriken.common.Ensure;
import eu.mikroskeem.shuriken.reflect.ClassWrapper;
import eu.mikroskeem.shuriken.reflect.Reflect;
import org.jetbrains.annotations.Contract;
import org.jetbrains.annotations.Nullable;
import org.objectweb.asm.ClassReader;
import org.objectweb.asm.util.CheckClassAdapter;

import java.io.PrintWriter;
import java.io.StringWriter;
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.stream.Stream;

/**
 * Class validation tools
 *
 * @author Mark Vainomaa
 * @version 0.0.1
 */
public final class Validate {
    /** Private constructor, do not use */
    private Validate() {
        throw new RuntimeException("No Validate instance for you!");
    }

    /**
     * Verify class bytecode
     *
     * @param classBytes Class data
     * @return Class data, if it was valid
     * @throws ClassFormatError If class wasn't valid
     */
    @Contract("null -> fail")
    public static byte[] checkGeneratedClass(byte[] classBytes) throws ClassFormatError {
        Ensure.notNull(classBytes, "Class data shouldn't be null!");
        ClassReader cr = new ClassReader(classBytes);
        StringWriter sw = new StringWriter();
        PrintWriter pw = new PrintWriter(sw);
        try {
            CheckClassAdapter.verify(cr, false, pw);
        } catch (Exception ignored) {}
        if (sw.toString().length() > 0) {
            throw new ClassFormatError(sw.toString());
        }
        return classBytes;
    }

    /**
     * Check fields availability in class against info defined in
     * {@link FieldDescriptor} objects
     *
     * @param clazz Class to perform check on
     * @param fields {@link FieldDescriptor} objects
     */
    @Contract("null, _ -> fail")
    public static void checkFields(Class<?> clazz, FieldDescriptor... fields) {
        ClassWrapper<?> cw = Reflect.wrapClass(clazz);
        Stream.of(fields).forEach(fieldDescriptor -> {
            try {
                Ensure.ensurePresent(
                    cw.getField(fieldDescriptor.getFieldName(), fieldDescriptor.getFieldType()),
                    String.format("Field %s %s not found",
                        fieldDescriptor.getFieldType(), fieldDescriptor.getFieldName())
                );
            } catch (Exception e) {
                throw new NullPointerException(e.getLocalizedMessage());
            }
        });
    }

    /**
     * Check fields availability in wrapped class against info defined in
     * {@link FieldDescriptor} objects
     *
     * @param cw Wrapped class to perform check on
     * @param fields {@link FieldDescriptor} objects
     */
    @Contract("null, _ -> fail")
    public static void checkFields(ClassWrapper<?> cw, FieldDescriptor... fields) {
        Ensure.notNull(cw, "ClassWrapper shouldn't be null!");
        Stream.of(fields).forEach(fieldDescriptor -> {
            try {
                Ensure.ensurePresent(
                    cw.getField(fieldDescriptor.getFieldName(), fieldDescriptor.getFieldType()),
                    String.format("Field %s %s not found",
                        fieldDescriptor.getFieldType(), fieldDescriptor.getFieldName())
                );
            } catch (Exception e) {
                throw new NullPointerException(e.getLocalizedMessage());
            }
        });
    }

    /**
     * Check methods availability in class against info defined in
     * {@link MethodDescriptor} objects
     *
     * @param clazz Class to perform check on
     * @param methods {@link MethodDescriptor} objects
     */
    @Contract("null, _ -> fail")
    public static void checkMethods(Class<?> clazz, MethodDescriptor... methods) {
        Stream.of(methods).forEach(methodDescriptor ->
            Ensure.notNull(getMethod(clazz, methodDescriptor.getMethodName(),
                    methodDescriptor.getReturnType(), methodDescriptor.getArguments()),
                String.format("Method %s(%s) not found",
                    methodDescriptor.getMethodName(),
                    Arrays.toString(methodDescriptor.getArguments())
                ))
        );
    }

    /**
     * Check methods availability in wrapped class against info defined in
     * {@link MethodDescriptor} objects
     *
     * @param cw Wrapped class to perform check on
     * @param methods {@link MethodDescriptor} objects
     */
    @Contract("null, _ -> fail")
    public static void checkMethods(ClassWrapper<?> cw, MethodDescriptor... methods) {
        checkMethods(Ensure.notNull(cw, "ClassWrapper shouldn't be null!").getWrappedClass(), methods);
    }

    /**
     * Check constructors availability in class against info defined in
     * {@link ConstructorDescriptor} objects
     *
     * @param clazz Class to perform check on
     * @param constructors {@link ConstructorDescriptor} objects
     */
    @Contract("null, _ -> fail")
    public static void checkConstructors(Class<?> clazz, ConstructorDescriptor... constructors) {
        Ensure.notNull(clazz, "Class shouldn't be null!");
        Stream.of(constructors).forEach(constructorDescriptor -> {
            try {
                Ensure.notNull(clazz.getConstructor(constructorDescriptor.getArguments()),
                    String.format("Constructor (%s) not found",
                        Arrays.toString(constructorDescriptor.getArguments())
                    ));
            } catch (Exception e) {
                throw new NullPointerException(e.getMessage());
            }
        });
    }

    /**
     * Check constructors availability in wrapped class against info defined in
     * {@link ConstructorDescriptor} objects
     *
     * @param cw Wrapped class to perform check on
     * @param constructors {@link ConstructorDescriptor} objects
     */
    @Contract("null, _ -> fail")
    public static void checkConstructors(ClassWrapper<?> cw, ConstructorDescriptor... constructors) {
        checkConstructors(Ensure.notNull(cw, "ClassWrapper shouldn't be null!").getWrappedClass(), constructors);
    }

    /**
     * Checks class extending/implementation against info defined in
     * {@link ClassDescriptor} objects
     *
     * @param classDescriptor {@link ClassDescriptor} objects
     */
    @Contract("null -> fail")
    public static void checkClass(ClassDescriptor classDescriptor) {
        Ensure.notNull(classDescriptor.getDescribedClass(), "Class is null");
        Stream.of(classDescriptor.getExtendingClasses()).forEach(clazz -> {
            if (!clazz.isAssignableFrom(classDescriptor.getDescribedClass())) {
                throw new NullPointerException(String.format("Class doesn't extend %s", clazz.getSimpleName()));
            }
        });
    }

    @Nullable
    @Contract("null, null, null, _ -> fail")
    private static Method getMethod(Class<?> clazz, String method, Class<?> returnType, Class<?>... arguments) {
        Ensure.notNull(clazz, "Class shouldn't be null!");
        Ensure.notNull(method, "Method name shouldn't be null!");
        Ensure.notNull(returnType, "Return type shouldn't be null!");
        try {
            Method m = clazz.getDeclaredMethod(method, arguments);
            m.setAccessible(true);
            if (m.getReturnType() != returnType) throw new NoSuchMethodException();
            return m;
        } catch (NoSuchMethodException e) {
            return null;
        }
    }
}
STACK_EDU
SSH proxy support for create and upgrade

I'm hoping that a "No TCP SSH check" flag could be added to allow for SSH proxy support with docker-machine create. Example ~/.ssh/config:

Host * !192.168.* !10.*
    ProxyCommand ssh -aY bastion 'nc %h %p'

I don't have any experience with Go to write this myself.

Skipping the TCP SSH check

For users in some environments, the only way to access devices may be through a bastion of some kind, leveraging an SSH or SOCKS5 proxy. Currently, machine leverages a WaitForTCP check that calls ssh.go to run the TCP checks, and it will not move forward with provisioning until this test passes. Being able to skip this would allow users on networks whose policies strictly restrict outbound SSH traffic to access their nodes. Note: I have verified that the nodes being spun up are accessible via SSH when accessed via the proxy using the ~/.docker/machine/machines/<SERVER NAME>/id_rsa key.

Name=swarm-master-9f895aa148c950e46cd3f74bee2055721163fe79a2476896d5d35b9d01002f39
Authenticating to Rackspace. Username=<USERNAME>
Creating OpenStack instance... FlavorId=general1-1 ImageId=598a4282-f14b-4e50-af4c-b3e52749d9f9
Creating machine...
Waiting for the OpenStack instance to be ACTIVE... MachineId=33d231d9-e1f9-4289-8a0e-79edf538174b
Looking for the IP address... MachineId=33d231d9-e1f9-4289-8a0e-79edf538174b
IP address found IP=<IP_ADDRESS> MachineId=33d231d9-e1f9-4289-8a0e-79edf538174b
Get status for OpenStack instance... MachineId=33d231d9-e1f9-4289-8a0e-79edf538174b
State for OpenStack instance MachineId=33d231d9-e1f9-4289-8a0e-79edf538174b State=ACTIVE
Getting to WaitForSSH function...
Testing TCP connection to: <IP_ADDRESS>:22
Testing TCP connection to: <IP_ADDRESS>:22
Testing TCP connection to: <IP_ADDRESS>:22
Testing TCP connection to: <IP_ADDRESS>:22
Testing TCP connection to: <IP_ADDRESS>:22
Testing TCP connection to: <IP_ADDRESS>:22
Testing TCP connection to: <IP_ADDRESS>:22
Testing TCP connection to: <IP_ADDRESS>:22

The "Testing TCP connection to: <IP_ADDRESS>:22" lines seem to go on forever; I ctrl+c'd out after 5+ minutes. I have tried using docker-machine upgrade as a temporary workaround, but it fails as well while proxying the SSH connection. Both docker-machine ssh and docker-machine scp work once a machine is up.

docker-machine version:
$ docker-machine --version
docker-machine version 0.3.1 (HEAD)

I'm curious how Machine would continue to provision if it cannot reach the instance on port 22?

Thanks for the response! This is definitely a TL;DR issue :) In this case, I can only talk to things via SSH if I jump through a bastion/jump box. I cannot SSH to things directly due to network policies, which is why the direct TCP connection from my workstation to port 22 is failing. If we are able to skip the TCP connection check, and the provisioning follows my client SSH config to connect to the box, Machine will succeed. Case in point: when I use docker-machine create, I can actually log in via docker-machine ssh once the VM becomes active. However, provisioning fails since it's trying to make a direct TCP connection from my workstation to the VM. Does that make sense?

So if I understand correctly, you can SSH into a bastion box, but direct connections to other hosts are choked off by a firewall?

@nathanleclaire: That's right! Direct connections to port 22 for anything other than the jump host/bastion are explicitly blocked by the corporate firewall. Our SSH traffic has to go through a bastion.
This article is probably the best description of what we have to do to proxy through the jump hosts: https://en.wikibooks.org/wiki/OpenSSH/Cookbook/Proxies_and_Jump_Hosts#Passing_through_a_gateway_using_netcat_mode

If we have an option to skip the TCP check to port 22 within Machine (since that check won't abide by our ~/.ssh/config), I think it will allow us to provision nodes using docker-machine. An example of an SSH config is included in the first comment. docker-machine ssh does work on VMs created using create, even though the actual Docker provisioning never completes because the TCP check on port 22 fails. Here's another project where we had to implement the same thing: https://github.com/test-kitchen/kitchen-rackspace#usage You'll see the no_ssh_tcp_check option. That's something we had to add to get around this problem in corporate network environments where egress traffic is more strictly controlled.

What this looks like from our shell:

Telnet to port 22, rejected by the firewall:

$ telnet <IP_ADDRESS> 22
Trying <IP_ADDRESS>...
telnet: connect to address <IP_ADDRESS>: Connection refused
telnet: Unable to connect to remote host

An SSH connection hops through a bastion/jump box based on our SSH client config and connects successfully:

$ ssh <IP_ADDRESS>
Warning: Permanently added 'bastion' (RSA) to the list of known hosts.
Warning: Permanently added '<IP_ADDRESS>' (RSA) to the list of known hosts.
Last login: Tue Aug 11 22:50:47 2015 from bastion
brint@host:~$

We :heart: running things locally. I'd be happy to jump on a hangout to go into more detail if it would help.

I'd like to see a feature to facilitate this too. In our production AWS environment I cannot use --private-address-only as the WaitForSSH() function never succeeds.

@prologic @brint Would https://github.com/docker/machine/pull/1685 solve it for you? It uses only SSH to check daemon availability without dialing the IP:port directly.
Hi @nathanleclaire - I might be wrong on this one, but it doesn't look like it would be a fix for this SSH issue. It looks like #1685 fixes instances where machine is waiting for the docker daemon to show up as responsive on a port. In reading through the branch, it looks like SSH connections will still try the TCP check of port 22 prior to SSH'ing. Here was the path I followed to jump to this conclusion: https://github.com/nathanleclaire/machine/blob/daemon_wait_over_ssh/libmachine%2Fhost.go#L125 https://github.com/nathanleclaire/machine/blob/daemon_wait_over_ssh/drivers%2Futils.go#L79-L80 https://github.com/nathanleclaire/machine/blob/daemon_wait_over_ssh/drivers%2Futils.go#L66-L69 https://github.com/nathanleclaire/machine/blob/daemon_wait_over_ssh/ssh/ssh.go#L13 Ah, yes, sorry, I had confused that issue with a general direction of removing non-SSH dialing. I think I would definitely like to remove the TCP check. This should now be fixed on master (no more lone TCP dial)
GITHUB_ARCHIVE
Doing some work on an Excel spreadsheet this morning reminded me that there are some great products that enormously enhance the ease of use, flexibility, and general usefulness of Excel. Whilst this blog is not really about promoting Excel products, there is one product that I believe stands head and shoulders above any other out there. I am not a great fan of installing Excel addins; they usually have 200 functions of which I only want 1 or 2. But I have installed this addin, and I don't believe there is a day that I do not use it. Because of this, and because the price is spot on (it is free), I am going to shout the praises of Jan Karel Pieterse's NameManager addin. This tool has been around for a number of years and has been indispensable if you use Excel names extensively (which I do). There is a debate to be had about whether we should use names at all; some swear by them, some swear at them, but that is for another day. Using the names dialog in pre-2007 versions of Excel was painful. I am of course referring to the Insert>Name>Define… dialog, which threw up this incredibly helpful beast. There were a few other concessions to usability (Debra Dalgleish is highlighting one such facility on her blog today), but generally it was hard work. That is, until JKP's addin came along. Suddenly, it was possible to see all of your names in a sensibly structured dialog; there were filtering options; you could evaluate names, see whether they were being used, and much more. Compared with that previous dialog, look at the richness of facilities, the options, but most of all the sensible presentation. When managing names, it is imperative in my view to see as much information as possible, limited by my choice, not by the limitations of the tool. Of course, MS have revamped Excel, and in Excel 2007 they introduced their own version of Name Manager.
With the experience of running the old dialog for many years, the example of a better version to draw upon (JKP's addin), and the fact that they can tap into the heart of Excel, MS were bound to produce the definitive Name Manager. Right? Well, not quite. This is an example of the dialog. It is undoubtedly better than MS' previous attempt. Seeing the names in a resizable dialog, along with the RefersTo, the value, and the scope, is good, but it still falls far short of JKP's NameManager. It is cleaner than JKP's NameManager, but that is because it lacks so much. There is no option to evaluate a name (not all names resolve to a single value), which is incredibly useful; no option to highlight where names are used; no capability to redefine the scope of a name (if you try in the Edit dialog, it tells you that the scope cannot be changed; why?); changing a name's name does not interact with VBA as NameManager does; but worst of all, it seems totally oblivious to hidden names. (BTW, you can add comments to names in Excel 2007. I cannot see where they appear, so I fail to see their usefulness. Does anyone think this is a good addition that they will use?) All in all, the 2007 Name Manager is a big disappointment to me, and JKP's NameManager cannot be retired just yet. If you use names a lot, do yourself a favour and rush out and buy a copy of JKP's NameManager today. You CAN afford it; it is available here. Perhaps JKP should rename it to 'The Real NameManager'.
A few days ago we got yet another reason to leave centralized, a-social networks behind. You probably do not want crazy billionaires serial innovators to reinnovate your virtual neighborhood without mercy. Aside from that, there are the secret timeline algorithms, about which little is known besides that they primarily amplify hatred and biased, extremist opinions for the sake of maximizing impressions, users' interactions, and platform revenue. At the same time one's own personal timeline is being polluted and tampered with by all kinds of paid advertisement, whilst the underlying personal data of each and every user is generously sold in all directions. But wasn't this all supposed to be just about "… connecting with friends and the world around you"? Exactly. But for that we have the Fediverse, and when it comes to microblogging there is Mastodon. As you have probably already heard about those, let's dive directly into organizing your Twitter exodus step by step.

Last week GitHub and its parent company Microsoft announced "GitHub Copilot – their/your new AI pair programmer". E.g. The New Stack, The Verge or CNBC have reported extensively about it. And there is a lot of buzz around this new service, especially within the Open Source and Free Software world, not only among its developers but also among its supporting lawyers and legal experts, although the actual news is not that groundbreaking, because it is not the first of its kind. Similar ML-/AI-based offers like Tabnine, Kite, CodeGuru, and IntelliCode are already out there, which have also been trained with public code. Copilot currently is in "technical preview" and is planned to be offered as a commercial version, according to GitHub. The core of it appears to be OpenAI Codex, a descendant of the famous GPT-3 for natural language processing.
According to its homepage it "[…] has been trained on a selection of English language and source code from publicly available sources, including code in public repositories on GitHub". Update 2021/07/08: GitHub Support appears to have confirmed that all public code at GitHub was used as training data.

Great, what amazing times we are living in! Sounds like with Copilot you no longer need your human co-programmers, who assisted you during the good old times in the form of pair programming or code review. Lucky you, and especially your employer. On top of that you will save precious time, because it will help you to directly fix a bug, write typical functions, or even "[…] learn how to use a new framework without spending most of your time spelunking through the docs or searching the web". Not to forget about copying & pasting useful code fragments from Stack Overflow or other publicly available sources like GitHub.

At the same time, two essential questions arise, in case you care a bit about authorship: Did the training of the AI infringe any copyright of the original authors who actually wrote the code that was used as training data? Will you violate any copyright by including Copilot's code suggestions in your source code? Let's not talk about another aspect that GitHub mentions in their FAQs – personal data: "[…] In some cases, the model will suggest what appears to be personal data – email addresses, phone numbers, access keys, etc. […]"

The results of the Open Source Impact Study commissioned by the European Commission have been widely discussed, mainly because of its numbers. Though announced only now, the study identified for the year 2018 a contribution of 0.4% to the GDP, worth EUR 63 billion, by FOSS, if measured by the increase in commits. 10% more contributors would even raise the GDP of the European Union by 0.6% (EUR 95 billion). The overall cost-benefit ratio is estimated at at least 1:4.
But it gets even more interesting when looking into the results of the accompanying survey covering about 900 stakeholders (mainly companies) from all around Europe. For them, the incentives for using and investing in Open Source have been, sorted by relevance:

- finding technical solutions
- avoiding vendor lock-in
- carrying forward the state of the art of technology

As benefits they have seen:

- support of open standards and interoperability
- access to source code
- independence from proprietary providers of software

Among the participants the cost-benefit ratio has been estimated even at 1:10.

The current circumstances also forced conferences (those gatherings with really large audiences) completely into cyberspace. Some stuck with traditional approaches to stream talks via off-the-shelf videoconferencing applications and built upon the very limited integrated interaction features offered by these poor proprietary tools. Others have gone completely new ways and brought fascinating and well working concepts on how to still successfully connect the crowds, enable lively conversations, and facilitate the exchange of knowledge and experiences in a distant environment.

Let's start with rc3 and its virtual conference venue in the form of rc3 world, implemented with Work Adventure. In a pixel-2D-adventure style you could walk around the area, and as soon as you approached other characters, a live audio and video stream with the humans or other life forms controlling those characters would open. Limited to 4-5 persons at a time, it allowed you to talk directly with each other, face to face. Due to the limitation of participants you were still able to have a working conversation. Somehow you needed to get used to having unexpected and sudden interactions with one another, on live video, but still it brought back the heavily missed opportunity to get in personal touch with other participants who are sharing possibly similar interests.
FOSDEM 2021, the world's biggest conference on Free and Open Source Software, usually taking place in Bruxelles, had for me a very convincing overall concept. The organizers and infrastructure artists have done a tremendous job that allowed for the most impressive conference experience so far and for long. Naturally it was purely based on Free Software, at its core Matrix, Element, and Jitsi. How did it work and what was so great about it?

Presentations of specific areas of interest had been grouped into virtual rooms with a fixed agenda, like in most physical conferences. Participants logged into a chat infrastructure which represented the rooms by group conversations. You would simply join the room(s) that you were interested in and could start texting with each other and the speakers, like on IRC. Talks had been recorded beforehand and were automatically started, by the computer (systemd), at their scheduled time. Their audio and video were streamed right above your chat window. When the talk ended, the Q&A was streamed live for a fixed amount of time within that room, until the next talk started auto-playing according to schedule. During that first part of the Q&A session of a talk, moderators were clarifying upvoted questions and comments from the chat and interacting in realtime with the presenters. Those interested could then continue discussing with the speakers and further extend their conversation by switching to a separate room. So per talk you had a dedicated room for the second part of the Q&A that would open shortly after and even allowed anyone there to interact live via audio and video.

In sum that meant that you could check the schedule for topics you were interested in, connect at the announced time, and be sure to really listen to that talk instead of watching tech staff doing mic checks or heavily delayed earlier talks whilst being unsure about if and when the one you came for would actually start.
In addition, the highly valued Q&A and the following backstage (and off the record) conversations could still take place without interrupting or being interrupted by the subsequent talk. Just impressive and so useful! Thanks a lot to all who made this happen and work that well! These concepts are now here to stay, even when conferences hopefully resume in the physical world soon.

A few days ago the oral hearing of the lawsuit between Oracle and Google was held at the U.S. Supreme Court, after it had been delayed by COVID-19. McCoy Smith shares his observations and interpretation in a detailed post "Oracle/Google" at Lex Pan Law. The litigation is over the copyrightability of certain parts of Java (mainly APIs) that were used within Android and, if they are copyrightable, whether their use constituted infringement. If Oracle wins it will have a significant impact on the whole software world and especially Open Source. Ultimately any API (use) would become subject to copyright.

I started my digital photography life with a Nikon D80 and Lightroom 1.0 quite a while ago (2007). When Adobe stopped selling copies and only provided subscription options, it became very clear that an alternative was needed. Let's not talk about Lightroom CC, its unstable desktop app, and a recent user nightmare. To be independent from the business needs of a company, the only option is to go for an alternative that is licensed under an Open Source license. With that preference in mind, and if it is about RAW processing, you have the choice between digiKam, RawTherapee, and darktable. I had been following darktable for a few years. The 2.x versions had not really been working for me. In contrast, the releases of 3.0 and 3.2 have been milestones in growing darktable into a serious and easy to use, not to say even more mature, alternative to Lightroom, and it was time to do the final switch. Now or never. To share it upfront: I did not get disappointed nor frustrated by this decision.
I am just wondering: why the hell did I not switch earlier?

It has been instantiated for the sole purpose of trademark management (and enforcement?) for Open Source projects, which are said to be not well positioned to care for this by themselves. For a start, Google assimilated their own projects: Angular, Istio, and Gerrit Code Review. Own projects? Oh well, at least for Istio, which was co-developed with IBM, they have now clarified who has ownership of its trademark. In their introduction statement they claim: "[…] Accordingly, a trademark, while managed separately from the code, actually helps project owners ensure their work is used in ways that follow the Open Source Definition by being a clear signal to users that, "This is open source." […]"

Josh Simmons, the president of the Open Source Initiative (OSI), which maintains the referenced definition, has a diplomatic statement on that, which also serves well as a summary: "Of course, OSI is always glad when folks explicitly work to maintain compatibility with the Open Source Definition. What that means here is something we're still figuring out, so OSI is taking a wait-and-see approach."

Or is this yet another project for the Google Cemetery, because the Open Source community is not as into trademarks as corporations are? There are more detailed summaries and discussions:
How to configure the second version of the popular reverse proxy Traefik for Nextcloud in Docker.

Those who run their own Linux server at home and want SSL-protected access to their Nextcloud from the Internet will find Traefik to be a well-functioning and modern reverse proxy. Since the release of version 2.0, the many configuration examples found on the Internet are unfortunately incompatible with the current version. In this article I will show you how to configure your Docker and Traefik containers so that SSL certificates are obtained via TLS Challenge. I have also considered all settings that are necessary for the "HTTP Strict Transport Security" mechanism. In my GitHub repository you can see the complete Docker setup.

The configuration of the Traefik version 2.x container

At this point the general settings of the Traefik container are made and the certificate resolver is configured. It is important to understand that the configuration of the offered services is done on the side of the service container and not in the configuration of the Traefik container.
version: "3.3"
services:
  traefik:
    image: "traefik:latest"
    container_name: "traefik2"
    command:
      #- "--log.level=DEBUG"
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.websecure.address=:443"
      - "--certificatesresolvers.mytlschallenge.acme.tlschallenge=true"
      #- "--certificatesresolvers.mytlschallenge.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory"
      - "--certificatesresolvers.mytlschallenge.acme.email=***youremail@here***"
      - "--certificatesresolvers.mytlschallenge.acme.storage=/letsencrypt/acme.json"
    ports:
      - "443:443"
      - "8080:8080"
    volumes:
      - "./letsencrypt:/letsencrypt"
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
    networks:
      - traefik_proxy
      - default
    logging:
      driver: json-file
      options:
        max-size: '12m'
        max-file: '5'
networks:
  traefik_proxy:
    external:
      name: traefik_proxy
  default:
    driver: bridge

The configuration of the Nextcloud container

The Nextcloud container needs some labels that define which configuration Traefik offers for this container. Specifically, the router and the middleware, which modifies the HTTP headers, are configured here.
labels:
  - "traefik.enable=true"
  - "traefik.port=80"
  - "traefik.http.routers.cloud.entrypoints=websecure"
  - "traefik.http.routers.cloud.rule=Host(`yourhostname`)"
  - "traefik.http.routers.cloud.tls.certresolver=mytlschallenge"
  - "traefik.http.routers.cloud.middlewares=cloud@docker"
  - "traefik.docker.network=traefik_proxy"
  - "traefik.http.middlewares.cloud.headers.customFrameOptionsValue=SAMEORIGIN"
  - "traefik.http.middlewares.cloud.headers.framedeny=true"
  - "traefik.http.middlewares.cloud.headers.sslredirect=true"
  - "traefik.http.middlewares.cloud.headers.stsIncludeSubdomains=true"
  - "traefik.http.middlewares.cloud.headers.stsPreload=true"
  - "traefik.http.middlewares.cloud.headers.stsSeconds=15552000"

Please check my GitHub repository for the complete docker-compose files: https://github.com/bedawi/liberty-server

Step 1: Prepare your Linux server

Install docker and docker-compose. Here is an example of how this works on Fedora 31. (Please make sure you understand what these commands mean – do not copy and paste!)

$ curl -fsSL https://get.docker.com -o get-docker.sh
$ sudo bash get-docker.sh
$ sudo pip install docker-compose
$ sudo dnf -y install git vim
$ sudo usermod -aG docker YOURUSERNAMEHERE
$ sudo systemctl enable docker.socket
$ sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0" --make-default
$ sudo reboot

Step 2: Prepare your folder

Now create a folder where you want to store your containers' configuration and the files of the Nextcloud server.

$ sudo mkdir /SOMEDATAFOLDER
$ sudo chown YOURUSERNAME:YOURGROUPNAME /SOMEDATAFOLDER

After this, cd into the new folder and clone my repository:

$ cd /SOMEDATAFOLDER
$ git clone https://github.com/bedawi/liberty-server.git

Copy the example configurations for traefik, nextcloud, and the database to your data folder:

$ cp -r liberty-server/traefik2 .
$ cp -r liberty-server/nextcloud .
$ cp -r liberty-server/database .
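The database folder you just copied contains its own docker-compose file, which you will edit in Step 4 below. As a rough sketch only, assuming MariaDB and the names used later in this guide (cloud-db, nextcloud_backend), it might look something like this; the real file in the liberty-server repository is authoritative:

```yaml
# Hypothetical sketch of database/docker-compose.yml; check the repository
# for the actual version.
version: "3.3"
services:
  cloud-db:
    image: mariadb:10
    container_name: cloud-db
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=changeme   # set your own password here (Step 4)
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_PASSWORD=changeme
    volumes:
      - ./db:/var/lib/mysql
    networks:
      - nextcloud_backend
networks:
  nextcloud_backend:
    external: true
```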
Now create the needed Docker networks for Nextcloud and Traefik:

$ docker network create nextcloud_backend
$ docker network create traefik_proxy

Step 3: Traefik

Now edit the docker-compose file in the traefik2 folder and set your email address. This is important for the registration of your domain with Let's Encrypt. If you do not provide a valid address here you will not be notified when there is a problem with your certificates! After this, start up Traefik:

$ docker-compose up -d

Step 4: Database

To set up the database, cd into the folder, edit the docker-compose file, and set a password. Bring up the database after this:

$ docker-compose up -d

Step 5: Nextcloud

Next, cd into the nextcloud folder and edit the docker-compose file. Set your machine's hostname and bring the container up. In the following example I also enable port 80 for bypassing Traefik, just in case it does not work for some reason. Initialization takes a while. Wait until you see the following message:

[Sun Jan 26 12:02:34.084218 2020] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'

Now try to access your machine on https://yourmachinename/ … if this does not work, try the fallback on http://yourmachinename/ (without the s in https – this is the port 80 we opened deliberately for debugging).

In the Nextcloud setup dialog select "MySQL/MariaDB" and type in the database name, user, and password from your docker-compose file. The server address is "cloud-db". After your setup is completed, ctrl-c the "docker-compose up" command in your terminal, remove port 80 from the docker-compose file, and delete the created container with:

$ docker rm nextcloud-app

Then start it with:

$ docker-compose up -d

No individual support – sorry!

Dear readers, I know that Docker administration can be confusing and frustrating for beginners. There is so much to learn, and I wish I had the time to sit down with every new Linux container admin for tea and biscuits to go through his/her individual configuration.
Unfortunately this is not possible, because besides my daytime job I have a cat and a partner competing for my attention. Please do not post individual requests for help in the comments. Instead, try to figure out how Docker works on your own. To learn Docker one could, for example, attend an online course on linuxacademy.com.

Found a bug? Everyone makes mistakes and my code is never 100% bug free. If you found an error in my project, please use the issues page on GitHub.
The archivist package is very efficient and advantageous when archived artifacts were created with the chaining code offered by the magrittr package. It is highly useful because the origin of the artifact is archived, which means that the artifact can be easily reproduced and its origin code is stored for future use. Below are examples of creating artifacts with chaining code, which requires using the %>% and %.% operators offered by the magrittr and dplyr packages. Since version 1.5 of the magrittr package changed the functionality of the mentioned pipe operator %>%, we copied (in version 1.3 of archivist) the functionality from version 1.0.1 and added the old operator to the archivist package as %a%.

Let us prepare a Repository where archived artifacts will be stored. Then one might create artifacts like those below. The code lines are ordered in chaining code, which will be used by the asave function to store an artifact and archive its origin code as the name of this artifact. One may see a vast difference in code evaluation when using chaining code. Here is an example of a traditional R call and one that uses the chaining code philosophy.

To simplify the code one can set the path to the Repository globally, using code as below. Now one no longer needs to specify the repoDir parameter with every call. Many various operations can be performed on a single data.frame before one considers archiving these artifacts. Archivist guarantees that all of them will be archived, which means the code alone will no longer need to be stored in a separate file. Also, an artifact may be saved while operations are performed and used in further code evaluations. This can be done when the value argument in asave is specified.
# example 3
aread('MarcinKosinski/Museum/3374db20ecaf2fa0d070d') -> crime.by.state

crime.by.state %a%
  filter(State == "New York", Year == 2005) %a%
  arrange(desc(Count)) %a%
  select(Type.of.Crime, Count) %a%
  mutate(Proportion = Count/sum(Count)) %a%
  asave(exampleRepoDir, value = TRUE) %a%
  group_by(Type.of.Crime) %a%
  summarise(num.types = n(), counts = sum(Count)) %a%
  asave()

Dozens of artifacts may now be stored in one Repository. Every artifact may have an additional Tag specified by the user. This will simplify searching for this artifact in the future.

| 4 | group_by(cut, clarity, color)                                                | 860466a792815080957a34021d04c5c6 |
| 3 | summarize(meancarat = mean(carat, na.rm = TRUE), ndiamonds = length(carat)) | 820c5bf2ce98bbb4b787830fe52d98f3 |
| 1 | asave(userTags = c("tags", "operations on diamonds"))                       | 434d4891ac1569883f80b2ec9fef0b95 |
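The idea of archiving an artifact together with the chaining code that produced it is not R-specific. As a rough illustration in Python (all names below are hypothetical; nothing here comes from the archivist package itself), a pipeline wrapper can carry the "origin code" of each step alongside the value:

```python
class ProvenancePipe:
    """Carry a value through a pipeline while recording each step's label,
    mimicking how archivist stores an artifact's origin code with the artifact.
    This is an illustrative sketch, not archivist's actual mechanism."""

    def __init__(self, value, history=None):
        self.value = value
        self.history = list(history or [])

    def step(self, func, label):
        # Apply one transformation and append its source label to the history.
        return ProvenancePipe(func(self.value), self.history + [label])

    def origin(self):
        # Reconstruct the "origin code" of the current artifact.
        return " %a% ".join(self.history)


# Example: sort a vector, then take the first two elements.
p = (ProvenancePipe([3, 1, 2], ["c(3, 1, 2)"])
     .step(sorted, "sort(x)")
     .step(lambda xs: xs[:2], "head(x, 2)"))
```

Saving `p.value` under the name `p.origin()` would then give the same effect the article describes: the stored name documents exactly how the artifact was produced.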
Library not working fully correctly:

1) Instead of from strong_sort import StrongSORT it's necessary to write from strongsort import StrongSORT.

2) Also, it requires yolov5. I think it's much better to make it independent of the classification+localization model (like the yolo model).

3) The weight should be a Path result, not a string. It was described here as defaulting to str:

class ReIDDetectMultiBackend(nn.Module):
    # ReID models MultiBackend class for python inference on various backends
    def __init__(self, weights="osnet_x0_25_msmt17.pt", device=torch.device("cpu"), fp16=False):

but later you made it incorrect with a string in https://github.com/kadirnar/strongsort-pip/blob/main/strongsort/deep/reid_model_factory.py:

def get_model_name(model):
    for x in __model_types:
        if x in model.name:
            return x
    return None

It should work like this:

StrongSORT(model_weights=Path('resnet50_fc512_dukemtmcreid.pt'), device='cuda', fp16=False)

4) In strongsort\deep\reid_model_factory.py, checkpoint = torch.load(weight_path) should be configurable, otherwise it does not work on Windows.
It should be like this on Windows:

checkpoint = torch.load(weight_path, encoding='latin1')

Hi, if you can send a pull request, I can accept it. I live in Turkey and there was an earthquake here. I don't have time to fix this.
Sure, during this week. Take care.
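The thread above boils down to concrete fixes, one of which is accepting pathlib.Path weights. A minimal sketch of a Path-tolerant name lookup (the model type list below is an illustrative stand-in, not the package's actual __model_types):

```python
from pathlib import Path

# Illustrative subset of ReID model identifiers (a stand-in for the
# package's internal model type list).
MODEL_TYPES = ["osnet_x0_25", "resnet50_fc512"]

def get_model_name(weights):
    """Return the matching model type for a weights file.

    Accepts both str and pathlib.Path, so callers can pass
    Path('resnet50_fc512_dukemtmcreid.pt') as suggested in the issue.
    """
    name = Path(weights).name  # works for str and Path alike
    for model_type in MODEL_TYPES:
        if model_type in name:
            return model_type
    return None
```

With a lookup like this, calls in the style of StrongSORT(model_weights=Path('resnet50_fc512_dukemtmcreid.pt'), device='cuda', fp16=False) would resolve the model type regardless of whether the argument is a string or a Path.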
very helpful, thanks

I graduated with a global business degree 10 years ago but went straight to being self-employed. Never went corporate. I'm so intrigued by taking this new career path. Do you think it's possible?

Do u read off a teleprompter while looking at the camera? I hope you answer, some tips can help me. I'm trying to start a YouTube channel. Please help

Firstly thanks a lot for your efforts, but would you be able to guide me with certain head topics to study? I don't want to go back to Uni (or pay for ridiculous courses) – I have a Masters in B.A. – I believe it's enough – plus the internet's a large place. Would you be able to guide me as in depth as possible as to what subjects I should study please?

If I majored in accounting can I land a position as a data analyst?

I am a physics student with limited programming skills. Is it possible for me to get an entry level job as a data analyst?

you are pronouncing data wrong

Hello, I'm a statistics student in Kenya with experience in R, Java and SQL… I'm really hopeful to join data analytics. I'm enticed by the pay I hear about. Kindly advise, I don't know where to start

Nothing that i don't know…

Hi, my major is international economics. I have studied some statistics programs in my college such as Stata and EViews. Can i become a data analyst? Will those programs help me if i want to become a data analyst?
Thank you

Difference between data analyst and reporting analyst….???

Damn! I don't like maths. But I will become a data scientist one day, for sure.

Hi everyone! I am a senior in high school, I love math and computers, also working with others, this really seems so interesting to me. Does anyone have any advice on what career path I could take to become a Data Analyst?

Hey i'm a college freshman with an undecided major. I've been trying to figure out what i want to do with my life and this seems to intrigue me, but do you think i will be fine if i don't really know anything on the subject currently?

I'm currently working as a financial analyst… and I'm a commerce student… can I do a data analyst course, and which is the best one in Hyd?

This is one of the best presentations offered for a very mysterious subject which eluded me for a long time… Now I think I need to crack on… learning the essentials. Where did you say the links were? Thank you Ben

So much good information! Statistics Essentials for Analytics > Understanding the data > Probability and its uses > Statistical Inference > Data Clustering > Testing the data > Regression Modelling

Math!!! Ok, I'm done….

Hi…! I have experience in SQL/PLSQL and databases and advanced Excel, and also have Java knowledge. Is this enough to go with DA?

I graduated as a math major concentrating on applied mathematics. I only took one semester of data analysis and I know MATLAB, Python, R and C++. Do you think i can survive as a data analyst??

Hi, i am considering a cybersecurity course in a French university in Canada (i checked the job perspectives and I saw: IT data analyst also), but seriously I am not good at maths at all. So does it mean i have no chance for this anymore? Or could you suggest any other ways that i can step into this with ease, please? thank you

Hi Ben, I want to become a data analyst/scientist. My background as a structural engineer is something far away from the IT world. But I made a career change and I found a job in one consulting company that deals mostly with data warehouses. I am at the most junior position, learning SQL at the moment, and plan to start with Python. Do you think a consulting company is a good start for me? What would you recommend me to focus on the most? Cheers mate, thank you!

Do you recommend getting a cert in the tools or languages? I'm interested in starting with getting really good with Excel, SQL, and Tableau/QlikView. Any recommendations?

I have my BA degree in Sociology and a double minor in Asian Studies and International Studies. I haven't found my career job yet but I'm working two jobs, and honestly it's hard because I'm working minimum wage; I'm a care provider and cashier at Walmart. I need your help. I know I could do more but the problem is where I live: it's a small town, and if I want to go to a city like Sacramento it's an hour and a half commute. What should I do??

Which is better for a masters: big data analysis or data science??

I am a fresher and I want to build a career as a data analyst. How should I start and which skills should I start learning?

Great explanation, very helpful. Thanks

i am scared of maths and statistics damn -_-

Currently i am working as a Python developer with around 2 years of experience in the Python automation domain. It is a totally different field from data analyst, but now i am planning to move into a data analyst job. My concern is: after working in a totally different domain, can i switch into data analyst? Won't companies ask for relevant experience during the interview??

Hi Ben, I just saw your video and am highly interested in making a career change from Healthcare Recruitment to Data Analyst. However, i currently don't have a degree of any kind and I'm a bit worried about pursuing this opportunity. I understand the trend is beginning to open up and I want to ride the wave in. What possible advice can you offer, or is it even worth it?
Will a certificate be sufficient to gain entry-level employment? I'm a social studies teacher without a math background. The people at Thinkful say that I won't have any trouble finding a job in the data analysis field after completing their 6-month online course. What do you think, bogus or the real deal?

Hi Ben, thanks for the information. Can anyone tell me what topics we should know in statistics before starting to learn data analysis? I learnt statistics in my college but am just aware of the terms. I have 6 years of experience in programming and knowledge about databases.

Nice video. When you say mathematics… is there anything specific? thanks in advance…

I'm a civil engineering graduate and I'm thinking of applying for data analyst as a temporary job until I get a license. Will it be hard for me? Thanks!!

That was a great video, thanks Ben
Version Control System to perform this test. Steps performed previously:

$ sudo mkdir /var/lib/svn
$ sudo mkdir /var/lib/svn/lamp

(svn = repos, lamp = ProjectName)

$ sudo svnadmin create /var/lib/svn/lamp
$ svn co file:///var/lib/svn/lamp
Checked out revision 0.

$ svn co file://localhost/var/lib/svn/lamp
svn: 'lamp' is already a working copy for a different URL

In order to make things simple I retain ONE ServerName.

$ sudo /etc/init.d/apache2 force-reload
 * Forcing reload of apache 2.0 web server...
Apache/2.0.55 mod_ssl/2.0.55 (Pass Phrase Dialog)
Some of your private key files are encrypted for security reasons.
In order to read them you have to provide us with the pass phrases.
Server lampserver:443 (RSA)
Enter pass phrase:
Ok: Pass Phrase Dialog successful.
[ ok ]

$ svn co http://lampserver/svn
svn: PROPFIND request failed on '/svn'
svn: PROPFIND of '/svn': Could not read status line: connection was closed by server. (http://lampserver)

$ svn co https://lampserver/svn

"https" works but NOT "http". Strange?

Error validating server certificate for 'https://lampserver:443':
 - The certificate is not issued by a trusted authority. Use the fingerprint to validate the certificate manually!
 - The certificate hostname does not match.
 - Hostname: Stephen
 - Valid: from May 3 03:47:58 2008 GMT until May 3 03:47:58 2009 GMT
 - Issuer: IT, Satimis, HK
 - Fingerprint: 0c:3a:2f:08:a6:15:ff:24:28:6b:fb:52:7f:5d:6a:28:20:e9:c9:6e
(R)eject, accept (t)emporarily or accept (p)ermanently?

Because I have no idea what to select, I just cancel it with [Ctrl]+C. It then prompts:

svn: PROPFIND request failed on '/svn'
svn: PROPFIND of '/svn': Server certificate verification failed: certificate issued for a different hostname, issuer is not trusted (https://lampserver)

Please advise how to fix the problem. What shall I select? TIA

What IP address should I enter in /etc/hosts?

Correct, ServerName has to be resolvable to an IP address.
$ cat /etc/hosts # The following lines are desirable for IPv6 capable hosts ::1 ip6-localhost ip6-loopback Noted with thanks. If you have more than one name you want to use to open the same host, you put the primary name inside "ServerName", and the rest of the names inside "ServerAlias". Ex. ServerAlias lamp www.lamp www.lampserver Then all 4 names (lampserver, lamp, www.lamp, www.lampserver) will open the virtual host. Note they will only work if each of them can resolve to the server's IP address. Could you please explain in more detail? Thanks Also note that the /etc/hosts file will only apply to the local machine. If you want other machines to be able to type "lampserver" to get to your virtual host, you need to add it to their hosts files, or add it to a DNS server.
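As a sketch, the ServerName/ServerAlias arrangement described above might look like this in the virtual host configuration (the hostnames follow the thread; the 192.168.1.10 address and DocumentRoot path are illustrative assumptions):

```apache
# /etc/hosts on each client that should reach the virtual host:
#   192.168.1.10   lampserver lamp www.lamp www.lampserver

<VirtualHost *:80>
    # Primary name, plus aliases; each must resolve to the server's IP address
    ServerName lampserver
    ServerAlias lamp www.lamp www.lampserver
    DocumentRoot /var/www/lamp
</VirtualHost>
```

Any of the four names then selects this virtual host, provided name resolution (hosts file or DNS) points them at the server.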
Alibaba Cloud Public DNS supports various types of terminals such as PCs, browsers, and Internet of Things (IoT) devices. Public DNS Free Edition is provided for regular Internet users. Public DNS Commercial Edition is provided for enterprise users. Public DNS can be used in the following scenarios: anti-hijacking and access acceleration for mobile apps, and network security of enterprise users. Access acceleration for regular users Public DNS Free Edition provides a free resolution service for regular users. Target users: all Internet users, especially regular Internet users. Access method: Regular users can change the IP address of their DNS server to one of the IP addresses that are provided by Public DNS. Terminals such as PCs, laptops, and mobile phones are supported. For more information, see Access Alibaba Cloud Public DNS Free Edition as a regular user. Access acceleration: enables users to access the nearest nodes and improves the access speed. Access security: ensures access security and prevents man-in-the-middle hijacking. Service reliability: ensures service reliability based on the global distribution of nodes. Privacy protection for browser vendors Target users: browser vendors. Access method: Browser vendors can change the IP address of their DNS server to one of the IP addresses that are provided by Public DNS. They can also use Public DNS SDKs to access Public DNS. Benefits: accelerates access to domain names on browsers. The DNS Cache feature and multi-node capabilities of Public DNS enable accelerated access to all domain names on browsers. This improves the overall user experience. Privacy protection: Major international browsers such as Google Chrome and Mozilla Firefox support the DNS over HTTPS (DoH) protocol to protect user privacy. DoH encrypts DNS requests to prevent attacks and hijacking by Internet service providers (ISPs) or men-in-the-middle. This improves the data privacy and security of browser users. 
User privacy is ensured in Public DNS. Acceleration and stability for smart terminals Target users: providers of smart terminals and apps, such as smart speakers, smart routers, IoT devices, mobile phones, and mobile apps. Access method: Providers can enable smart terminals to access Public DNS by using API operations, Public DNS SDK for iOS, or Public DNS SDK for Android. More SDKs will be provided in the future. Support for multiple types of terminals: enables terminals to access Public DNS by using Public DNS SDKs. Accelerated access: allows terminals to cache DNS records on edge nodes. This accelerates the access speed of terminals. Access to the nearest nodes with low latency is supported. Privacy protection: provides secure data transmission for terminals. Security protection: prevents domain hijacking and provides basic anti-DDoS capabilities. Network stability: deploys multiple nodes. This ensures service stability in a 4G mobile network that has weak signals.
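The DoH protocol mentioned above carries DNS queries over HTTPS. As an illustrative sketch (the dns.alidns.com endpoint and the JSON wire format shown are assumptions based on the common DoH JSON convention, not details taken from this page), a query URL can be built and a typical JSON answer parsed like this:

```python
import json
from urllib.parse import urlencode

# Hypothetical DoH JSON endpoint for Alibaba Cloud Public DNS (assumption)
DOH_ENDPOINT = "https://dns.alidns.com/resolve"

def doh_query_url(name: str, rtype: str = "A") -> str:
    """Build a DoH JSON query URL (the request itself is not sent here)."""
    return f"{DOH_ENDPOINT}?{urlencode({'name': name, 'type': rtype})}"

def parse_doh_answer(body: str) -> list[str]:
    """Extract record data from a DoH JSON response body."""
    reply = json.loads(body)
    return [rec["data"] for rec in reply.get("Answer", [])]

# A canned response in the usual DoH JSON shape, for illustration only
sample = '{"Status":0,"Answer":[{"name":"example.com","type":1,"TTL":300,"data":"93.184.216.34"}]}'
print(doh_query_url("example.com"))
print(parse_doh_answer(sample))
```

Because the query travels inside an ordinary HTTPS request, an on-path observer or ISP sees only encrypted traffic, which is the hijacking protection the section describes.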
This track shows the primers for the SARS-CoV-2 sequencing protocol, also commonly referred to as Midnight. The primers enable amplification of the genome of SARS-CoV-2. This approach uses multiplexed 1200 base pair (bp) tiled amplicons. Briefly, two PCR reactions are performed for each SARS-CoV-2 positive patient sample to be sequenced. One PCR reaction contains thirty primers that generate the odd-numbered amplicons ("Pool 1"), while the second PCR reaction contains twenty-eight primers that generate the even-numbered amplicons ("Pool 2"). After PCR, the two amplicon pools are combined and can be used for a range of downstream sequencing approaches. Primers were all designed using Primal Scheme and described in Nature Protocols 2017. This primer set results in amplicons that exhibit lower levels of variation in coverage compared to other commonly used primer sets. Display Conventions and Configuration Genomic locations of primers are highlighted. A click on them shows the primer pool. This is one of the few tracks that may be best displayed in "full" mode. RAPID primer sequences were downloaded from the Google Spreadsheet and converted to bigBed. More details are available in the paper referenced below or in the supplemental files on Zenodo. The raw data can be explored interactively with the Table Browser or combined with other datasets in the Data Integrator tool. For automated analysis, the genome annotation is stored in a bigBed file that can be downloaded from the download server. The file can be converted from binary to ASCII text with our command-line tool bigBedToBed. Instructions for downloading this tool can be found on our utilities page. The tool can also be used to obtain features within a given range without downloading the file: bigBedToBed http://hgdownload.soe.ucsc.edu/gbdb/wuhCor1/bbi/rapid.bb -chrom=NC_045512v2 -start=0 -end=29902 stdout Please refer to our mailing list archives for questions, or our Data Access FAQ for more information.
Freed NE, Vlková M, Faisal MB, Silander OK. Rapid and inexpensive whole-genome sequencing of SARS-CoV-2 using 1200 bp tiled amplicons and Oxford Nanopore Rapid Barcoding. Biol Methods Protoc. 2020;5(1):bpaa014. PMID: 33029559; PMC: PMC7454405
Can't make cloudfront accessible for public (as a test) Now that I have figured out my video viewing error, I can't figure out why I can't view my CloudFront. I keep getting: Error CodeAccessDenied/Code RequestID gibberish here /RequestId Hostid long gibberish /HostId /Error I'm copying the various urls out of CloudFront, and trying them through http: and https: (Had to take the <> out because it inserted it as code) TIA Hi, I'm not sure I understand what you mean by "view your CloudFront". It sounds like you were able to view the videos generated by the solution via the CloudFront distribution. Is that correct? If you are trying to use your CloudFront distribution for other purposes other than viewing your video content, I suggest checking out the CloudFront developer guide documentation: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html So my issue is I can put a link in a player (VLC as an example) and it will play the DASH and/or HLS, but actually viewing my videos as a webpage isn't working. lol, I know you didn't include that in the solution. Also, MediaConvert is rotating my .mov (iPhone) videos counterclockwise 270; I'm editing now to add the "Rotate": "AUTO" to the proper buckets and going to recommit to test it. --Felton
As it's outside the scope of the solution, I suggest looking into web video players like Video.js that you can embed into your webpage that has support for HLS and DASH. Hope that points you in the right direction. Good luck! You're great, thank you! Closing this issue.
Incorrect or missed Content-type based on file extension on move from local to Swift What is the problem you are having with rclone? Problem with setting the correct Content-Type on a storage object with the move command from a machine with CentOS 7 to Swift OpenStack. The file extension is mp4, so it should have content-type video/mp4 on storage, but I get application/octet-stream instead in the resulting metadata. Therefore I have problems with video players in the browser; they need to get the correct content-type. What is your rclone version (eg output from rclone -V) rclone: Version "v1.42-056-g9c90b5e7β" Which OS you are using and how many bits (eg Windows 7, 64 bit) CentOS 7, 64 bit Which cloud storage system are you using? (eg Google Drive) Swift OpenStack The command you were trying to run (eg rclone copy /tmp remote:tmp) ["rclone" "move" "/usr/records/st-ot-default_1533044441540_4fa6bbd0-94c7-11e8-a9b1-4108482fa2db.mp4" "swift-1:other-type-server-video-records/default" "--config" "/root/.config/rclone/rclone.conf" "-vv"] A log from the command with the -vv flag (eg output from rclone -vv copy /tmp remote:tmp) 2018/08/01 13:00:25 DEBUG : rclone: Version "v1.42-056-g9c90b5e7β" starting with parameters ["rclone" "move" "/records/st-ot-default_1533117530140_7b18b520-9571-11e8-bf57-e9d2024dba5c.mp4" "swift-1:other-type-server-video-records/default" "--config" "/root/.config/rclone/rclone.conf" "-vv"] 2018/08/01 13:00:25 DEBUG : Using config file from "/root/.config/rclone/rclone.conf" 2018/08/01 13:00:26 DEBUG : st-ot-default_1533117530140_7b18b520-9571-11e8-bf57-e9d2024dba5c.mp4: Couldn't find file - need to transfer 2018/08/01 13:00:26 INFO : st-ot-default_1533117530140_7b18b520-9571-11e8-bf57-e9d2024dba5c.mp4: Copied (new) 2018/08/01 13:00:26 INFO : st-ot-default_1533117530140_7b18b520-9571-11e8-bf57-e9d2024dba5c.mp4: Deleted 2018/08/01 13:00:26 INFO : Transferred: 9.018 MBytes (10.816 MBytes/s) Errors: 0 Checks: 1 Transferred: 1 Elapsed time: 800ms 2018/08/01 13:00:26 DEBUG : 5 go routines active 2018/08/01 13:00:26 DEBUG : rclone: Version "v1.42-056-g9c90b5e7β" finishing with parameters ["rclone" "move" "/records/st-ot-default_1533117530140_7b18b520-9571-11e8-bf57-e9d2024dba5c.mp4" "swift-1:other-type-server-video-records/default" "--config" "/root/.config/rclone/rclone.conf" "-vv"] Wrote 3 days ago at the forum about this using the question label. But now I think this is a real issue. Sorry. Have no idea how to add a label to my issue; looks like not a trivial action. Sorry for the delay in responding, currently at a conference! What do you see if you do this on an .mp4 file? $ rclone lsf --csv -F pm rust.sh rust.sh,text/x-sh; charset=utf-8 rclone uses your system's mime types when reading from local, so these need to be set up correctly. The command returns "application/octet-stream". Thanks for the advice. Tried to search how to change this, but no luck. Any suggestions on how to change this? Here is a walkthrough for how to do it for Red Hat Enterprise which should work on CentOS https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/desktop_migration_and_administration_guide/file_formats Thanks for the information. I have no GNOME installed on the server machine and I found that video/mp4 is a common mime type which is already specified in the default freedesktop.org.xml. I did the "update-mime-database /usr/share/mime" and all files under that folder updated.
I restarted the machine, but "rclone lsf --csv -F pm myfile.mp4" still returns "application/octet-stream". I have the "video/mp4" type in the /usr/share/mime/type file. Tried another mp4 file - same result. Is there any other reason why rclone does not get information from the mime type cache? Rclone uses the Go standard library to look up mime types. The built-in table is small but on Unix it is augmented by the local system's mime.types file(s) if available under one or more of these names: /etc/mime.types /etc/apache2/mime.types /etc/apache/mime.types Do you have one of those files? Does it have mp4 defined in it? Here is what I see on Ubuntu $ grep mp4 /etc/mime.types audio/mp4a-latm video/mp4 mp4 video/mp4v-es I added a new mime.types file found on the internet to /etc and now "rclone lsf --csv -F pm xxx.mp4" returns "video/mp4". I tested the Content-Type returned from OpenStack by HTTP request on the newly copied files - now it is "video/mp4". Problem solved. Thank you very much. Great :-) Another abstract question - why did I have no mime.types file in my default CentOS 7 system... A good question, but not one I can answer! I don't have any experience with CentOS, just Debian & Ubuntu.
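rclone's lookup itself is in Go, but the same idea (a small built-in table, augmented by system files such as /etc/mime.types when present) can be illustrated with Python's stdlib mimetypes module; this is an analogy, not rclone's actual code:

```python
import mimetypes

# Python, like Go's mime package, ships a small built-in table and then
# augments it from system files such as /etc/mime.types when they exist.
mimetypes.init()

def content_type(path: str) -> str:
    """Guess a Content-Type by file extension, as rclone does for local files."""
    guessed, _encoding = mimetypes.guess_type(path)
    # Fall back to the generic binary type for unknown extensions, which is
    # exactly the application/octet-stream symptom described in this issue.
    return guessed or "application/octet-stream"

print(content_type("clip.mp4"))      # video/mp4 (in the built-in table)
print(content_type("mystery.zzz"))   # application/octet-stream
```

The fallback branch is what a missing or incomplete mime.types file produces on the storage side.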
Statsmodels weird behaviour I am using statsmodels in Python to perform quantile regression. I am running quantile 99 and quantile 1 to see the extreme quantiles of my distribution.

from statsmodels.regression.quantile_regression import QuantReg
from sklearn.datasets import load_iris
import pandas as pd

iris = pd.DataFrame(load_iris()['data'])
y = iris[0]
x = iris.iloc[:, [1, 2, 3]]
qt = QuantReg(y, x)
res99 = qt.fit(q=.99)
res01 = qt.fit(q=.01)
print('Quantile 99 description')
print(res99.predict(x).describe())
print('--------------------')
print('Quantile 01 description')
print(res01.predict(x).describe())
print('--------------------')
print('Target description')
print(y.describe())

The output is

Quantile 99 description
count 150.000000
mean 6.713570
std 1.092909
min 3.972447
25% 5.798665
50% 6.745737
75% 7.447919
max 9.906085
dtype: float64
--------------------
Quantile 01 description
count 150.000000
mean 5.196659
std 0.811265
min 3.110156
25% 4.555466
50% 5.270011
75% 5.777980
max 7.588576
dtype: float64
--------------------
Target description
count 150.000000
mean 5.843333
std 0.828066
min 4.300000
25% 5.100000
50% 5.800000
75% 6.400000
max 7.900000
Name: 0, dtype: float64

That is, the distributions of the percentile 1, the target, and the percentile 99 are not that different. Is this expected or am I missing something? You can look at a plot. The target doesn't have a lot of spread. Also, you didn't include a constant in the regressors X. Does the intercept need to be included manually in statsmodels? Yes, unless you use the formula interface.
TLB's port of Automatic1111 is broken for Hypernetwork training since 2.1 Run out of VRAM on a T4. I have to --medvram to get it to work with a BS 1 BUT I get Loss: nan and it really isn't working. This did work in 2.0 but two days later I tried HN again and it went to 2.1 and no longer works. Was reading over on the Automatic1111 issues and someone else had a sort of similar issue where they said to come here because of the way Ben does stuff differently than they do. The problem is with v2.1, not with the colab. Is there any way I can go back to 2.0? Yes, in the model download cell, use this in the "path to huggingface" box: stabilityai/stable-diffusion-2 make sure you insert your huggingface token above it, and choose v2.1-768 in the custom model version. I do not have that box, but here is what I do have: Click twice on the cell and change wherever you find stabilityai/stable-diffusion-2-1 to stabilityai/stable-diffusion-2 So, change code then? It works but how is it that I am forced to use BS=1 (512x512) on a 15.1 gig T4 whereas locally, doing the same thing, they can do BS=2 with 10 gigs? 24 gig is BS=19 so my 15 gig should be 2 to 19 (around a BS of 12-14). Mid training or at start? I have no answers, I'm going to try and replicate. I got it to work but I used the techniques of 6GB card users where you do NOT do images every X amount of epochs as those take VRAM as well. Seriously, there is something wrong with this colab code. What did they do to 2.1 to cause this? Oh, and here is something else: the model used to train with has a completely different hash than the SAI one.
https://github.com/Stability-AI/stablediffusion/commit/c12d960d1ee4f9134c2516862ef991ec52d3f59e No idea. I am just saying that the model it is using on colab is not the same model we use locally. If we train on 2.0, does the .pt file still work in 2.1? Sure does.
I jailbroke my iPad last week to see what everyone was raving about. I promptly crashed the device, and I'm impressed by how well the device and the jailbreak work. Apple have built a device that, if it fails to reboot twice (approx 10-15 minutes on the black screen with the silver Apple), will go into safe mode. I didn't even know that there was a safe mode for the iPad, let alone any iDevices. What had happened is that since I was on a 3G iPad I was looking for software that would flesh out my iPad experience. I wanted an SMS tool so I could send and receive text messages, so I installed a Cydia app called, I believe, "iRealSMS 2.0", which didn't work; it just closes when you start it, so I ignored it, and that was the problem. I then installed Winterboard onto the device, and Winterboard requires a reboot, so when I rebooted I ended up in safe mode and was thinking that the problem was with Winterboard. Also, when I held down my finger on the icon so that it wobbles and could be removed, there was no (x) to remove the app. Eventually I found that the Cydia app has an installed section that allows you to remove installed apps. Since the Apple App Store doesn't have this I didn't even think about it. So I uninstalled Winterboard and rebooted but still ended up in safe mode, so I then looked at what else I had installed, removed the iRealSMS app, rebooted, and hey presto, one fully working iPad. I am so impressed with how the Cydia App Store allows you to uninstall apps. This of course is because not all apps you buy on the Cydia Store are run via an icon. For example there is an app "RetinaPad" available for $2.99; this will take any and all iPhone apps that have not been updated for the iPhone 4 and smooth the rendering for you when you hit the (x2) button. The quality is amazing and I'll quote here "It's how Apple should have done it". Other benefits are Choose Web Browser.
Which allows you to specify what browser to open when you click a link in YouTube. Hopefully it will be updated to allow you to completely override the browser so that if you click a link it opens that browser rather than Safari. One thing I noticed is that most apps are not free, except for the thousands of themes that seem to fill the store, and navigating the store is not great. I guess I just got used to the way that the Apple Store does things. There is a nice list of iPad extensions & products designed for the iPad but there is no More button to see more of these types of apps. Also, when searching, there is nothing that tells you if the app works on an iPad. If the search result has blue text then it's a paid-for app; if it's grey then it's free. I see that there is an iRealSMS 3.0 which is a paid-for app at $12.99, but that's a bit steep since it doesn't give a big description in the store. If anyone has any decent apps that I should check out let me know in the comments.
When your server fails to start, consider the following cases: If you have configured Web Server to run on port 80, then you will need to start the server as the 'root' user on Unix/Linux. However, in Solaris 10 you don't need to run the server as root to bind to port 80 (or any port < 1024). Execute the following commands: # su # /usr/sbin/usermod -K defaultpriv=basic,net_privaddr webservd When you encounter a server startup issue, the server's error log or console output (on UNIX/Linux platforms) will most likely contain the reason for the startup failure. If the Web Server is configured to run in 64-bit and any of the plug-ins mentioned in magnus.conf is 32-bit, then Web Server 7.0 will fail to start up, throwing an error message like wrong ELF class: ELFCLASS32. Similarly, if you see an error message like wrong ELF class: ELFCLASS64, it means that a 32-bit Web Server is trying to load a 64-bit NSAPI plugin, or vice versa. While starting up the Web Server, if you do not see the message info: CORE3274: successful server startup on UNIX/Linux platforms, then it is most likely that the server has startup issues. If you see the error message catastrophe ( 908): Server crash detected (signal SIGSEGV) in the server's error log file, this means that the Web Server's daemon has detected a crash. A Web Server crash during startup can happen for various reasons, including: any of the configured 3rd-party NSAPI plug-ins not following the NSAPI specification, or improper server configuration. Web Server requires at least 512 MB of memory to operate optimally. If your system is running low on swap space then you might get the error shown below: warning: CORE3283: stderr: Error occurred during initialization of VM warning: CORE3283: stderr: Could not reserve enough space for object heap catastrophe: CORE4005: Internal error: unable to create JVM failure: server initialization failed You will have to increase the swap space on your system.
If you are running Web Server 7 under Solaris 10 zones, then you will need to increase the swap space within the global zone. Refer to your operating system document on how to add/increase swap space. Under heavy load condition, Web Server may run out of file descriptors. In such cases you will get an error like the following: [18/Dec/2005:20:01:03] failure ( 3014): HTTP3069: Error accepting connection (PR_PROC_DESC_TABLE_FULL_ERROR: file descriptor table full) Increase the file descriptor limit either per process or per system and restart the system. Linux limits the number of file descriptors that any one process may open; the default limits are 1024 per process. These limits can prevent optimum performance of Web Server. The open file limit is one of the limits that can be tuned with the ulimit command. The command ulimit -aS displays the current limit, and ulimit -aH displays the hard limit (above which the limit cannot be increased without tuning kernel parameters). For setting the limit to hard limit, execute the following command: ulimit -n unlimited
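The descriptor-limit checks above can be sketched as a few shell commands (the 8192 value is an arbitrary example, not a recommendation from this guide):

```shell
# Show the current (soft) and hard open-file limits for this shell
soft=$(ulimit -Sn)
hard=$(ulimit -Hn)
echo "soft=$soft hard=$hard"

# Raise the soft limit for this shell and its children, up to the hard limit;
# going beyond the hard limit requires root or kernel parameter tuning
ulimit -n 8192 2>/dev/null || echo "raising the limit may require root or kernel tuning"
echo "now: $(ulimit -Sn)"
```

A limit raised this way applies only to the current shell session; making it permanent is done through the system's limits configuration (for example, /etc/security/limits.conf on Linux).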
I went through the various materials available on the topic of Data integration and completed the related courses. Nevertheless, with minimum practical experience in the field of data integration, I still feel a bit lost. What I am looking for is some kind of summary of plus and cons of the individual data integration methods, or some kind of decision tree to be able to select the best integration option for the customer. Do you guys have some materials on this topic in a structured form? If not, it'd be great if you just outlined your approach on how do you approach this topic when engaging with client practically. Data integration will be a natural evolution to the discussion and that might be when you introduce the idea of a center of excellence. Start with manual data integration. As Anaplan likes to say, manually loading data will not stop your project but bad data or problems with data integration will. Just have a strategy. Plenty of good best practices out there. @szechovsky let's keep the conversation going. You will find some of the brightest minds are on this Community Site. @jnoone, @ben_speight, @alexpavel, @scott.smith, and @kavinkumar are data integration pros and consistently get me out of a tight spot with data integration. Search on their names to see the articles and posts they've written. @jesse_wilson and @chase.hippen are Python pros. Read their best practice articles when you're ready. When you're ready for a checklist for study let us know. I can drop about 20 links for you to get you going. There's a lot to learn - so please, rely heavily on this Community site for nuances and any challenges you face. It's truly a gift that so many people here are willing to help. On Demand Videos As you begin to practice, pay particular attention to imports and exports. They're nuanced especially when you are required to manually create the chunks. Practice these until you get them right. 
For a benchmark, I probably invested 20-30 hours each really getting the hang of importing and exporting. Try different strategies, like using basic authentication to start, then start using a certificate. The certificate is the right way in my opinion because you don't have to worry about user IDs and expiring passwords. Once you master Anaplan Connect and Postman, it's time to move on to some really fun API work with Python. I would start with this post by @chase.hippen. This has to be one of the best Python posts out there. I pay homage to Chase every day I use Python. The third thing I would recommend is to then follow the Master Anaplanner coursework on data integration. Click on this link and scroll down; you'll find 10 links to the data integration topics that all Master Anaplanners must understand. Some of them use ETL tools that are hard to come by, but inside the user's guide are some amazing tips on how to leverage the APIs. So it's definitely worth it! Lastly, use this Community site. Most data integration topics have already been answered - so you can start by searching this site. But if you're in a hurry, or you just can't find the topic you're looking for, ask! You'll probably get 3-4 answers in the first hour! If you discover anything new or something you think others would benefit from knowing, post it! You'll get tons of kudos. From my experience Anaplan data integration is categorised into four main categories Manual :- This is where users will upload / download data into and from Anaplan manually via a dashboard button or via Anaplan-provided Excel add-ins Anaplan Connect :- It uses Windows batch files for uploading and downloading data, normally automated using Windows scheduler. Third Party Connectors :- Multiple connectors available, like the MuleSoft connector, Informatica connectors, etc. RESTful API :- Anaplan has surfaced API endpoints which can be used to import / export data in and out of Anaplan.
In terms of choosing which method :- It depends on many things like the customer's capabilities, scope, budget, scalability requirements, security policies, etc. For example, if your use case is working with multiple models and lots of data points segregated geographically, things like manual or Anaplan Connect are not the best options. On the other hand, if your use case is simple but your company policy mandates that it has to be through, let's say, Informatica, then you have to use the Informatica connector or the REST API. If you are doing a proof of concept (POC) then it's quicker to do it manually, etc. It is not an easy question to answer, but from my experience I always prefer to go automated rather than manual; I prefer to do most of my data transformations outside Anaplan where applicable, and always try to have an ETL layer before anything goes into or out of Anaplan. Always try to have a single source of truth for data coming in and going out. The API is picking up a lot as it can be used by many tools and even programming scripts in languages like Python. With everything moving to the cloud, use of APIs is becoming very common.
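As a hedged sketch of the RESTful option described above (the /2/0/workspaces endpoint path and basic-auth scheme are assumptions about Anaplan's v2 API, and the credentials are placeholders; certificate auth, as recommended earlier in the thread, would replace the Authorization header):

```python
import base64
import urllib.request

# Placeholder credentials; a real integration would use certificate auth instead
USER, PASSWORD = "alice@example.com", "s3cret"

def anaplan_request(path: str) -> urllib.request.Request:
    """Build (but do not send) an authenticated request against the Anaplan API."""
    token = base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()
    return urllib.request.Request(
        f"https://api.anaplan.com/2/0{path}",
        headers={"Authorization": f"Basic {token}",
                 "Accept": "application/json"},
    )

# Listing workspaces is the usual first call; the network round trip
# (urllib.request.urlopen) is deliberately omitted from this sketch.
req = anaplan_request("/workspaces")
print(req.full_url)
```

The same request pattern underlies Anaplan Connect and the third-party connectors, which is why mastering the raw API pays off regardless of which integration category a project lands in.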
Error 80072EE2 problems include computer crashes, freezes, and possible virus infection. Learn how to fix these Windows 7 runtime errors quickly and easily! Windows Phone 7 – support.microsoft.com – Aug 23, 2016 · Live Tiles on Start in Windows Phone 7.8 Windows Phone 7.8 features. A fresh Start. Resize your Live Tiles—small, medium, or large—for a totally. Today, MSN is mainly a popular Internet portal with mailing services like MSN Hotmail and MSN Messenger now known as Windows Live Hotmail and Windows Live Messenger respectively. Even after being the 17th most visited. A public instant messenger contact. Office Communicator enables you to communicate with instant messaging clients from AOL, Yahoo, MSN, and the Windows Live™ network of Internet. Why do I see the Exchange Connection error. Feb 25, 2014 · How to easily fix Windows Update error 80072ee2 As an system administrator, I know how difficult it can be to solve Windows Update errors. This is particularly the case with Windows Update, because Windows Update. Error 80072EE2 problems include computer crashes, freezes, and possible virus infection. Learn how to fix these Windows Live Messenger runtime errors quickly and easily! The only IM client is Microsoft’s own, which doesn’t support Google Chat or AIM, only Facebook and Windows Live Messenger (good for 14-year-olds. down on the quantity and incomprehensibility of error messages, but as I. Oct 24, 2009. Im running Windows 7 Home Premium 32 bit edition, Microsoft Security Essentials, and windows firewall. I keep getting the 80072EE2 error. Auto-start applications like Windows Live Messenger and Steam came right up. A Windows Live OneCare error message also came right up, but that wasn’t unexpected as the application is currently incompatible with Vista.
Dec 06, 2012 · Hello,I have been experiencing a frustrating problem with my home computer over the last few months.I can not access Windows Live Messenger, it. Error Message 0xf78d2524 db:: 3.32::wireless problem after reinstalling windows from. – Hello everybody, I had some problems today and I decided to reinstall windows xp from the original cd Dell provided me with. I have a dell laptop Inspiron. Dec 15, 2010. Resolved Blue screen error "STOP: 0x0000007B (0xF78D2524, 0xC0000034, Had the same error message and after a Windows Live Messenger 2009 Error 80072efd. Windows 7 Application Compatibility http://social.technet.microsoft.com/Forums/windows/en-US/57b91c7c-0492. Contrary to some published reports, Internet Explorer does not get special treatment in Windows 7 Starter Edition. If you’ve read anything. and a couple of instant messenger windows, you can do it. You won’t see this warning. Troubleshooting Error Code 80072EE2 – Windows Update Web Site. This is not a VBScript problem, but a problem accessing Microsoft's Update service. The Windows Live service — which will be found at www.live.com — includes new versions of the company’s Hotmail and Messenger communications services. “I understand the concern raised by this error in judgment by an MS.
Each year, 8,000+ developers, engineers, software architects, dev teams, managers and executives from 70+ countries gather for DeveloperWeek (conducted Feb 17-19, 2021) to discover the latest in developer technologies, languages, platforms, and tools. “When it comes to technology, there’s incremental change, and then there’s fundamental innovation. Developer technology, from blockchain and artificial intelligence to big data and quantum computing represents fundamental innovation that people can build on for years. We are in the DevTech Age, where developer technologies and tools are now the most disruptive and fundamental technology innovation in the marketplace. When you build tools for developers, you are not just implementing a small incremental use case, you are building platforms, frameworks, and APIs that will enable entirely new web, mobile, and IoT innovation.” Read more about DeveloperWeek here. Deepfactor’s Presentations at DeveloperWeek: Breaking News: DevSecOps Is Broken without RUNTIME Observability - Speakers: Dr. 
Neil Daswani, Stanford Advanced Cyber Security Program, Co-Director; Kiran Kamity, Deepfactor, Founder & CEO; Mike Larkin, Deepfactor, Founder & CTO - Abstract: This panel of RUNTIME observability and security developers and experts will discuss the what, why, and how of Deepfactor’s Continuous Observability platform, which: - Automatically observes more than 170 parameters—across system call, library, network, web, and API behaviors in every thread of every process in every running container of your application—and detects security and compliance risks in your CI pipeline - Detects insecure behaviors that only manifest at runtime and cannot be caught with code scanning or just looking at known CVE databases - Reduces alert volume by prioritizing the findings of your SCA tools with runtime insights from observability tools - Empowers Engineering leadership to accelerate productivity and decrease mean-time-to-remediate (MTTR) security and compliance risks pre-production as their teams ship secure releases on schedule - Takeaways: You’ll leave this session armed with the knowledge to immediately leverage continuous observability to consistently deploy apps with confidence. So You Think You Know the Behavior of Your Containers? Would You Stake Your Job on It? - Speakers: Mike Larkin, Deepfactor, Founder & CTO; John Day, Deepfactor, Customer Success Engineer - Abstract: You’ve developed a fabulous application in a container/Kubernetes Continuous Integration (CI) pipeline. The application works like it should, and the static scans look secure, but is it actually operating securely? Are any 3rd party components you’ve integrated doing something they shouldn’t be doing? How do you know? To be confident about the behavior of your app, active inspection of running binaries within a container, utilizing live telemetry, is key.
Pre-production observability enables this by filling the gaps that static code analysis (SAST) and dynamic external inspections (DAST) don’t cover. During this technical session, you’ll see pre-production observability in action and the benefits the solution delivers to developers and their teams. Mike Larkin, CTO at Deepfactor, and John Day, Customer Success Engineer at Deepfactor, will discuss a straightforward method to obtain this information from any container, extracting metric data with minimal overhead. This information can then be processed to indicate issues that may silently affect the behavior of your container, be it security, performance, or operational intent. - Takeaways: You’ll leave this session armed with the knowledge to immediately leverage pre-production observability to consistently deploy apps with confidence. - Click here to watch a replay. Deepfactor Founder & CEO Kiran Kamity’s Key Takeaways: I attended DeveloperWeek as a speaker, attendee, and booth staffer, which gave me a 360° experience. I focused on the sessions in the DevOps Summit, Containers & Kubernetes & Cloud Security. The overall conference experience with the virtual platform was nice—within the same tab you could attend sessions, ask questions, and speak with the booth staff. But, I certainly missed the physical booth and the face-to-face interactions and relationship building. Sessions were informative and educational for the most part. I noticed that there was a lot of discussion around enabling DevOps in organizations (while we “Silicon Valley types” take it for granted, there are several companies that don’t even have CI yet!). I am always drawn to new technologies – and observability, security, and technologies like fuzzing were certainly among the up-and-coming technologies used in the context of DevOps. Honeycomb’s session, “Observability for Software Teams”, demonstrated using observability for performance troubleshooting.
We [Deepfactor] demonstrated using Continuous Observability for security & compliance insights, and some other startups talked about using fuzzing tools to identify bugs in the web/API layers of apps. The booths were generally busy. Deepfactor’s booth was packed with a lot of attendees—almost 500—throughout Thursday and Friday. Deepfactor Customer Success Engineer John Day’s Key Takeaways: Out of all the virtual events I’ve attended since the beginning of COVID, this was the most interactive of them all. Being an engineer, having a virtual event feel like an in-person event made it that much easier to engage. I attended sessions such as “Recipe for Doing Devops within Your Enterprise with Kubernetes” presented by Salesforce and “GitOps, Kubernetes, and Secret Management” presented by CloudBees. Both sessions reinforced the need for understanding what’s happening inside your containers. Being inside the application allows us [Deepfactor] to observe behaviors with more semantic knowledge than other techniques (sidecars, eBPF programs, etc.). GitHub and GitLab have introduced dependent module vulnerability scanning services as part of their enterprise offerings. But these checks are performed at source code check-in time; what happens if your code is dynamically importing something from a container’s base image or from the base operating system? This is where Deepfactor provides the missing piece with runtime visibility. And, since we have such a strong car analogy that makes the point about the need to test-drive running code, we were able to draw large crowds during our speaking sessions and at the booth to learn more about Deepfactor’s Continuous Observability platform. I think that having a “live” booth where we can chat with visitors in real time is the best way to interact with potential customers in different roles – from developers to AppSec teams to Engineering leadership.
I look forward to participating again next year. Deepfactor gives you peace of mind knowing that you’ve created a framework for the AppSec teams and dev teams to work together in harmony. Engineering teams will be shipping faster with decreased alert fatigue and fewer security risks; across the board, productivity will skyrocket. Deepfactor enables your organization to have a ‘security at the source’ mindset by allowing application security to START left. You no longer need to choose between shipping fast versus secure to production—Deepfactor empowers you to deliver both with confidence.
package rfc6242_test

import (
	"encoding/xml"
	"fmt"
	"io"
	"strings"

	"github.com/andaru/netconf/rfc6242"
)

// This example shows an example NETCONF client processing the beginning of a
// session with a NETCONF server, including processing the server's <hello>
// element and the decision to upgrade to :base:1.1 or "chunked-message"
// framing protocol.
//
// See RFC6242 section 4.1 for further information about this process.
func Example_framingProtocolUpgrade() {
	// serverSessionData is mock data representing the entire contents of the
	// NETCONF server session. With a real server, the server's <hello> message
	// below occurs at the same time the client sends its own <hello> message.
	//
	// Our (implied) client then sent an <rpc> request document with message-id
	// "m-112".
	//
	// The remainder of the server session data is the response to this RPC,
	// encoded in :base:1.1 chunked-framing format because the client and
	// server both offered the :base:1.1 capability in their <hello> messages.
	serverSessionData := `<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <capabilities>
    <capability>urn:ietf:params:netconf:base:1.1</capability>
  </capabilities>
  <session-id>42</session-id>
</hello>
]]>]]>
#1
<
##
#100
rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="m-112">
  <ok></ok>
</rpc-reply>
##
`
	// Create an io.Reader to represent the client's input stream from the
	// server. In real uses this is replaced with a network transport, such
	// as SSH or TLS. This is the raw RFC6242 input, including framing data.
	framedInputStream := strings.NewReader(serverSessionData)

	// serverTransport is the raw server output and will be assigned the SSH or
	// TLS transport writer in a real usage.
	var serverTransport io.Writer

	// Create the output stream the client will use to send to the server transport.
	outputStream := rfc6242.NewEncoder(serverTransport)
	// outputStream implements io.Writer
	var _ io.Writer = outputStream

	// Create a *rfc6242.Decoder (io.Reader and io.WriterTo) to offer the
	// decoded input stream.
	inputStream := rfc6242.NewDecoder(framedInputStream)

	// Decode XML from the input stream
	netconfNS := "urn:ietf:params:xml:ns:netconf:base:1.0"
	nameHello := xml.Name{Space: netconfNS, Local: "hello"}
	nameRPCReply := xml.Name{Space: netconfNS, Local: "rpc-reply"}
	// :base:1.1 protocol capability
	capBase11 := "urn:ietf:params:netconf:base:1.1"

	// *xml.Decoder uses the io.Reader interface of the inputStream
	var _ io.Reader = inputStream

	// Process the NETCONF server session input
	d := xml.NewDecoder(inputStream)
	for {
		token, err := d.Token()
		if err != nil {
			if err != io.EOF {
				fmt.Printf("Token() error: %v\n", err)
			}
			break
		}
		switch token := token.(type) {
		case xml.StartElement:
			switch token.Name {
			case nameHello:
				// <hello>
				type helloElement struct {
					Capabilities []string `xml:"capabilities>capability,omitempty"`
					SessionID    string   `xml:"session-id"`
				}
				hello := helloElement{}
				if err := d.DecodeElement(&hello, &token); err != nil {
					fmt.Printf("DecodeElement() error: %v\n", err)
				} else {
					fmt.Printf("saw <hello>: %v\n", hello)
					for _, capability := range hello.Capabilities {
						if capability == capBase11 {
							rfc6242.SetChunkedFraming(inputStream, outputStream)
							fmt.Println("upgraded to :base:1.1 chunked-message framing")
							break
						}
					}
				}
			case nameRPCReply:
				// <rpc-reply>
				fmt.Println("saw <rpc-reply>")
			}
		}
	}

	// Output:
	// saw <hello>: {[urn:ietf:params:netconf:base:1.1] 42}
	// upgraded to :base:1.1 chunked-message framing
	// saw <rpc-reply>
}
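The chunked framing the example upgrades to is simple to parse by hand. As an illustration only (this sketch is mine and is not part of the rfc6242 Go package above), here is a minimal Python decoder for the RFC 6242 `\n#<len>\n<payload>` chunk format, terminated by `\n##\n`:

```python
def decode_chunked(data: bytes) -> list[bytes]:
    """Decode RFC 6242 chunked framing: each message is a series of
    '\\n#<len>\\n<payload>' chunks terminated by '\\n##\\n'."""
    messages = []
    current = b""
    i = 0
    while i < len(data):
        # Expect a chunk header '\n#<len>\n' or the end-of-message '\n##\n'
        assert data[i:i + 2] == b"\n#", "malformed frame"
        i += 2
        if data[i:i + 2] == b"#\n":          # end-of-message marker
            messages.append(current)
            current = b""
            i += 2
            continue
        end = data.index(b"\n", i)           # end of the chunk-size line
        size = int(data[i:end])
        i = end + 1
        current += data[i:i + size]          # chunk payload
        i += size
    return messages

# One message, sent as two chunks of 1 and 4 bytes:
frames = b"\n#1\n<\n#4\nrpc>\n##\n"
print(decode_chunked(frames))  # → [b'<rpc>']
```

This mirrors what the package's Decoder does for the client after the upgrade: strip the framing and hand the concatenated payload bytes to the XML decoder.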
# Apply a watermark to a PDF file
import os
import shutil
from tempfile import TemporaryDirectory

from looptools import Timer
from pdfconduit.utils import add_suffix, open_window, Receipt, Info
from pdfconduit.modify.draw import WatermarkDraw
from pdfconduit.modify.canvas import CanvasConstructor
from pdfconduit.conduit.lib import IMAGE_DEFAULT, IMAGE_DIRECTORY
from pdfconduit.conduit.encrypt import Encrypt
from pdfconduit.conduit.watermark.add import WatermarkAdd


class Watermark:
    def __init__(self, document, remove_temps=True, move_temps=None, open_file=False,
                 tempdir=None, receipt=None, use_receipt=True,
                 progress_bar_enabled=False, progress_bar='tqdm'):
        """
        Watermark and encrypt a PDF document.

        Manage watermarking processes from single class initialization. This
        class utilizes the draw, add and encrypt modules.

        :param document: str
            PDF document full path
        :param remove_temps: bool
            Remove temporary files after completion
        :param open_file: bool
            Open file after completion
        :param tempdir: function or str
            Temporary directory for file writing
        :param receipt: cls
            Use existing Receipt object if already initiated
        :param use_receipt: bool
            Print receipt information to console and write to file
        """
        self.time = Timer()
        self.document_og = document
        self.document = self.document_og
        self.watermark = None
        self.remove_temps = remove_temps
        self.move_temps = move_temps
        self.open_file = open_file

        if not tempdir:
            self._temp = TemporaryDirectory()
            self.tempdir = self._temp.name
        elif isinstance(tempdir, TemporaryDirectory):
            self._temp = tempdir
            self.tempdir = self._temp.name
        else:
            self.tempdir = tempdir

        self.progress_bar_enabled = progress_bar_enabled
        self.progress_bar = progress_bar

        self.use_receipt = use_receipt
        if use_receipt:
            if isinstance(receipt, Receipt):
                self.receipt = receipt
            else:
                self.receipt = Receipt(use_receipt).set_dst(document)

    def __str__(self):
        return str(self.document)

    def cleanup(self):
        runtime = self.time.end
        if self.use_receipt:
            self.receipt.add('~run time~', runtime)
            self.receipt.dump()
        if self.move_temps:
            if os.path.isdir(self.move_temps):
                shutil.move(self.tempdir, self.move_temps)
        if self.remove_temps:
            if os.path.isdir(self.tempdir):
                shutil.rmtree(self.tempdir)
        else:
            open_window(self.tempdir)
        return self.document

    def draw(self, text1=None, text2=None, copyright=True, image=IMAGE_DEFAULT,
             rotate=30, opacity=0.08, compress=0, flatten=False, add=False):
        """
        Draw watermark PDF file.

        Create watermark using either a reportlabs canvas or a PIL image.

        :param text1: str
            Text line 1
        :param text2: str
            Text line 2
        :param copyright: bool
            Draw copyright and year to canvas
        :param image: str
            Logo image to be used as base watermark
        :param rotate: int
            Degrees to rotate canvas by
        :param opacity: float
            Watermark opacity
        :param compress: bool
            Compress watermark contents (not entire PDF)
        :param flatten: bool
            Draw watermark with multiple layers or a single flattened layer
        :param add: bool
            Add watermark to original document
        :return: str
            Watermark PDF file full path
        """
        im_path = os.path.join(IMAGE_DIRECTORY, image)
        if os.path.isfile(im_path):
            image = im_path

        # Add to receipt
        if self.use_receipt:
            self.receipt.add('Text1', text1)
            self.receipt.add('Text2', text2)
            self.receipt.add('Image', os.path.basename(image))
            self.receipt.add('WM Opacity', str(int(opacity * 100)) + '%')
            self.receipt.add('WM Compression', compress)
            self.receipt.add('WM Flattening', flatten)

        co = CanvasConstructor(text1, text2, copyright, image, rotate, opacity,
                               tempdir=self.tempdir)
        # Run img constructor method if flatten is True
        objects, rotate = co.img() if flatten else co.canvas()

        # Draw watermark to file
        self.watermark = WatermarkDraw(objects, rotate=rotate, compress=compress,
                                       tempdir=self.tempdir,
                                       pagesize=Info(self.document_og).size,
                                       pagescale=True).write()

        if not add:
            return self.watermark
        else:
            self.add()
            return self.cleanup()

    def add(self, document=None, watermark=None, underneath=False, output=None,
            suffix='watermarked', method='pdfrw'):
        """
        Add a watermark file to an existing PDF document.

        Rotate and upscale watermark file as needed to fit existing PDF
        document. Watermark can be overlayed or placed underneath.

        :param document: str
            PDF document full path
        :param watermark: str
            Watermark PDF full path
        :param underneath: bool
            Place watermark either under or over existing PDF document
        :param output: str
            Output file path
        :param suffix: str
            Suffix to append to existing PDF document file name
        :param method: str
            PDF library to be used for watermark adding
        :return: str
            Watermarked PDF document full path
        """
        if self.use_receipt:
            self.receipt.add('WM Placement', 'Overlay')
        if not watermark:
            watermark = self.watermark
        if not document:
            document = self.document

        self.document = str(
            WatermarkAdd(document, watermark, output=output, underneath=underneath,
                         tempdir=self.tempdir, suffix=suffix, method=method))

        if self.use_receipt:
            self.receipt.add('Watermarked PDF', os.path.basename(self.document))
        if self.open_file:
            open_window(self.document)
        return self.document

    def encrypt(self, user_pw='', owner_pw=None, encrypt_128=True,
                allow_printing=True, allow_commenting=False, document=None):
        """
        Encrypt a PDF document to add passwords and restrict permissions.

        Add a user password that must be entered to view the document and an
        owner password that must be entered to alter permissions and security
        settings. Encryption keys are 128 bit when encrypt_128 is True and
        40 bit when False. By default permissions are restricted to print
        only; when set to False, all permissions are allowed.

        TODO: Add additional permission parameters

        :param user_pw: str
            User password required to open and view PDF document
        :param owner_pw: str
            Owner password required to alter security settings and permissions
        :param encrypt_128: bool
            Encrypt PDF document using 128 bit keys
        :param allow_printing: bool
            Restrict permissions to print only
        :return: str
            Encrypted PDF full path
        """
        document = self.document if document is None else document
        if self.use_receipt:
            self.receipt.add('User pw', user_pw)
            self.receipt.add('Owner pw', owner_pw)
            if encrypt_128:
                self.receipt.add('Encryption key size', '128')
            else:
                self.receipt.add('Encryption key size', '40')
            if allow_printing:
                self.receipt.add('Permissions', 'Allow printing')
            else:
                self.receipt.add('Permissions', 'Allow ALL')

        p = str(
            Encrypt(document, user_pw, owner_pw,
                    output=add_suffix(self.document_og, 'secured'),
                    bit128=encrypt_128, allow_printing=allow_printing,
                    allow_commenting=allow_commenting,
                    progress_bar_enabled=self.progress_bar_enabled,
                    progress_bar=self.progress_bar))

        if self.use_receipt:
            self.receipt.add('Secured PDF', os.path.basename(p))
        return p
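The three-way tempdir handling in `__init__` above is worth isolating. A standalone sketch with no pdfconduit dependency (the helper name `resolve_tempdir` is mine, purely illustrative): accept nothing, an existing `TemporaryDirectory`, or a plain path, and return a (handle, path) pair.

```python
from tempfile import TemporaryDirectory


def resolve_tempdir(tempdir=None):
    """Mirror the Watermark.__init__ logic: accept None, a
    TemporaryDirectory, or a plain path; return (handle, path)."""
    if not tempdir:
        temp = TemporaryDirectory()      # create and own a new temp dir
        return temp, temp.name
    if isinstance(tempdir, TemporaryDirectory):
        return tempdir, tempdir.name     # reuse the caller's handle
    return None, tempdir                 # plain string path; caller manages it


print(resolve_tempdir("/some/dir"))  # → (None, '/some/dir')
```

Keeping the `TemporaryDirectory` handle alive on the instance (as `self._temp` does above) matters: once the handle is garbage-collected, the directory is deleted.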
Everyone knows about Linux. It’s arguably the most successful open source project ever. In open source cloud software, OpenStack is following a similar trajectory. Since its launch three years ago, it has attracted a community of corporate, developer and user support so rapidly that it can be called the fastest growing project in the history of open source. But let’s take a step back and look at the state of OpenStack, including the factors that have driven its rapid ascent. What is OpenStack? In simplest terms, OpenStack is open source cloud software with which you can build an infrastructure-as-a-service cloud. It has components for compute, block storage, object storage, networking, dashboard, metering, authentication, VM image management and orchestration. Maturity and feature functionality of each component range from fully baked to fully green. OpenStack follows a six-month release cycle, and the current release is codenamed Grizzly. Who Built It? No single person started OpenStack. In the summer of 2009, NASA contributed the code that became OpenStack Compute and Rackspace contributed OpenStack Object Storage. Rackspace managed the project until the OpenStack Foundation was launched in September 2012. Today, there are more than 9,000 individual Foundation members and 189 corporate supporters from 100 countries. More than 500 developers contributed to the current release, adding 230 new features. Momentum & Production-Readiness According to Ohloh, OpenStack’s primary open source competitors have topped out at 100 contributors, combined. That’s about one-fifth of OpenStack’s total contributors. This means that in the last six-month release cycle, OpenStack added more new contributors than its major competitors did combined in more than three years. Users are lining up, too.
They include next-generation web app companies like LivingSocial, Ubisoft, PayPal and WebEx, and HPC users like Argonne National Laboratory and CERN in Switzerland. These are in addition to service providers like Rackspace, Comcast and AT&T. OpenStack’s software development life cycle has matured as well. The OpenStack community created a sophisticated continuous integration (CI) and testing framework that auto-deploys and tests a complete deployment of OpenStack over 700 times a day. Every time a developer checks in new code, it is “gated” by a full test of that code before it is allowed back into mainline. These “gated tests” have increased code quality dramatically, reduced or eliminated regressions and increased velocity, maintaining OpenStack’s six-month release cycle while increasing the number of projects and developers. No other open source rival comes close to the scope of continuous integration and testing that OpenStack has achieved. Momentum makes it clear that the open source cloud race is over and the maturity of the project is now without question. Simply put, OpenStack has won. Challenges Ahead for OpenStack Although the project has come a long way in three years, some big challenges must be addressed for OpenStack to continue its march toward Linux-dom. Some say that the lack of a Linus Torvalds in the OpenStack community is a weakness. Let’s be honest: there’s only one Linus. OpenStack must succeed with a technical meritocracy driving the development roadmap. I think of it like the early days of the Internet and the IETF. What will help shape the future is simple: rough consensus and running code. Customers will tell us what they need by what they adopt. OpenStack can leverage the fact that it’s the de-facto winner in open source cloud software. There’s immense velocity and corporate support, and the user base is growing rapidly. Increased public cloud compatibility is in the roadmap.
The state of OpenStack is strong, but there’s much work left to be done. Randy Bias is co-founder and chief technology officer of Cloudscaling. Edited by Alisen Downey
Use intl package for plural forms https://github.com/openfoodfacts/smooth-app/blob/c535037ea373d40e5fa3f3acf9b64bcd62f082c3/packages/smooth_app/lib/pages/product/common/product_query_page_helper.dart#L53 These kinds of cases are not so easy to manage and translate, plus some languages need the parameter; can you please take a look here? https://api.flutter.dev/flutter/intl/Intl/plural.html Thanks. Thanks for your issue @yarons, I suppose you mean that in some languages it is not possible to write the time in front, and it would be better to put it as a variable in the translation. Am I right? There are many challenges. In Hebrew we have singular, dual and plural; Arabic and Polish have 6 potential plural forms; in Ukrainian there's a special singular form that applies to 1, 11, 21 etc. (not just 1, like in most languages). Btw I'm not sure we actively support RTL languages for the moment - at least I can't remember reading anything related in the code. @monsieurtanuki, yes you're right, we have not done anything about this. But I think I read that Flutter automatically supports such use cases. I found a possible solution in a publicly shared Flutter document here but can't find anything about this in the documentation.

"nWombats": "{count,plural, =0{no wombats} other{{count} wombats}}",
"@nWombats": {}

nWombats(0) returns "no wombats"
nWombats(5) returns "5 wombats"

In the flutter_localizations README there is also a part about plurals: MaterialLocalizations.of(context).selectedRowCountTitle(yourRowCount) Plural translations can be provided for several quantities: 0, 1, 2, "few", "many", "other". The variations are identified by a resource ID suffix which must be one of "Zero", "One", "Two", "Few", "Many", "Other". The "Other" variation is used when none of the other quantities apply. All plural resources must include a resource with the "Other" suffix.
For example, the English translations ('material_en.arb') for selectedRowCountTitle are:

"selectedRowCountTitleZero": "No items selected",
"selectedRowCountTitleOne": "1 item selected",
"selectedRowCountTitleOther": "$selectedRowCount items selected",

When defining new resources that handle pluralizations, the "One" and the "Other" forms must, at minimum, always be defined in the source English ARB files. The first one would allow much more flexibility for the translators; as @yarons mentioned, some languages have their own forms for more than just none, one and multiple. We would have to test if the first method is implemented, since some of the things in these docs could just be ideas which are not in the final product. OK, I tested it a bit. The first method is working, but it turns out flutter_localizations uses dart-lang/intl under the hood, which only allows zero, one, two, many, few, other. But I am not able to get many and few this way. Or are you using a different syntax?

{count,plural, =0{no wombats} one{a single wombat} many{wow! look at all these wombats} other{{count} wombats}}

Exactly, it's 1 not one, and many is not in use, but yeah. Have a look at #319. Is that what you expected? That's wonderful, but you can keep the static part out instead of repeating it, so this:

"plural_ago_minutes": "{count,plural, =0{Cached results from: less then a minute ago} =1{Cached results from: one minute ago} =2(Cached results from: two minutes ago} other{Cached results from: {count} minutes ago}}",

(Also notice that right after =2 there are parentheses instead of curly brackets, and then => than.)

Would become this:

"plural_ago_minutes": "Cached results from: {count,plural, =0{less than a minute} =1{one minute} =2{two minutes} other{{count} minutes}} ago",

Great job, thank you. @yarons the way you suggested it unfortunately does not work; the "Cached results from:" part will be skipped.
We could also put this string together in the code (from multiple translations), but I think the method we have at this time will make it easier for the translators. Seriously? The ago part as well? OK then, I'm guessing this approach is good enough, weird though…
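For readers unfamiliar with how the `=0`/`=1`/`=2`/`other` selectors in the string above resolve, here is a small sketch of the English case (the function name and lookup table are mine, purely illustrative; real ICU selection also involves locale plural categories, not just exact matches):

```python
def plural_ago_minutes(count: int) -> str:
    """Pick the message variant the way the ICU-style selectors above do:
    exact-number matches (=0, =1, =2) win, otherwise 'other' applies."""
    exact = {
        0: "less than a minute",
        1: "one minute",
        2: "two minutes",
    }
    body = exact.get(count, f"{count} minutes")
    return f"Cached results from: {body} ago"


print(plural_ago_minutes(0))  # → Cached results from: less than a minute ago
print(plural_ago_minutes(7))  # → Cached results from: 7 minutes ago
```

This also shows why keeping the static "Cached results from: … ago" part outside the selector is attractive: only the variable body differs per count.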
Matlab: use of memory I have several "out of memory" problems using MATLAB. I don't understand exactly whether Matlab can use (or not) all the RAM of my computer. This is the problem: my computer has 4 GB of RAM and 2 GB of swap (my OS is Linux/Ubuntu 12.10), but Matlab only uses up to 2.6 GB and then shows the warning "out of memory". Is it possible to fix this and allow Matlab to use all the "available" memory? Thanks. Your OS and other applications use some memory. How do you know that the available memory is more than 2.6 GB? Anyway, typing memory in the Matlab command window may give you more information. Using the System Monitor (in Ubuntu) I obtain all the information I want. The available memory of the system is 3.9 GB, and also when I'm working with Matlab I can check the memory used by the other programs, and that's not the problem; I mean Matlab only uses up to 2.6 GB while it has more memory available. The command "memory" only works on the Windows version of Matlab; in Ubuntu I've tried [http://stackoverflow.com/questions/12350598/how-to-access-memory-information-in-matlab-on-unix-equivalent-of-user-view-max] It sounds like you're running 32-bit Linux and/or 32-bit MATLAB. If you allow for enough swap, a process can take up to its virtual memory address space worth of memory. Generally for 32-bit Linux you're limited to 3 GB of address space for any process (the last GB is kernel memory space). It's entirely possible, depending on usage patterns, that at 2.6 GB the next request for memory can't complete because there isn't enough /contiguous/ memory to satisfy it. This is especially common when growing large arrays. Upgrading to a 64-bit version of Linux/Windows/macOS with 64-bit MATLAB should solve this problem, but even so, using 3 GB+ of virtual address space on a system with 4 GB of RAM is probably going to start making things very slow.
Yes, like you said: I solved the problem in some sense. In this link [http://www.mathworks.com/help/matlab/matlab_prog/resolving-out-of-memory-errors.html] there's an explanation of the memory limits of Matlab; it shows a list of MATLAB-supported operating systems and their process limits. In conclusion, it's true that there is a limit on the available memory for Matlab, and it depends on whether the OS is 32- or 64-bit and on the corresponding MATLAB version. Obviously that limit is going to be less than the total available memory of the computer. Thanks for your answer! Some googling brought up this: MATLAB will use as much memory as your OS lets it use; the only way to increase the amount of memory MATLAB can use is to reduce the amount of memory occupied by other applications or to give the OS more memory to partition out to the applications. So no, there's no easy way to tell Matlab to use more memory. You either have to buy more memory, optimize your code, run your scripts/functions with less output to store at once, or reduce memory usage by other procedures that are running. Here are some helpful links though: memory management functions, and a memory allocation related discussion on the MathWorks forum. I think there's some kind of limit on the available memory for Matlab; it doesn't seem to be true that Matlab can use as much memory as my system allows. With only 500 MB of memory used, I made a little program in C++ to saturate the memory and it worked: I could see how it consumed the whole memory (4 GB). After that I tried to saturate the memory using only Matlab, but again I couldn't (it only used 2.6 GB...). @Javier Gargiulo nevertheless, there's no function to tell Matlab to use more memory from within Matlab.
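The contiguity point in the accepted answer is easy to quantify: growing an array needs the old block and the new, larger block alive at the same time, so in a 3 GB address space the ceiling is hit well before 3 GB is actually in use. A rough back-of-the-envelope sketch (the growth factor and overhead figures are illustrative assumptions, not measurements):

```python
GB = 1024 ** 3
ADDRESS_SPACE = 3 * GB  # typical per-process limit on 32-bit Linux


def can_grow(current_bytes, growth_factor=2.0, overhead=0.2 * GB):
    """Growing an array briefly requires the old copy plus the new,
    larger copy to coexist, on top of fixed process overhead."""
    needed = current_bytes + current_bytes * growth_factor + overhead
    return needed <= ADDRESS_SPACE


print(can_grow(0.8 * GB))  # → True   (0.8 + 1.6 + 0.2 = 2.6 GB fits)
print(can_grow(1.0 * GB))  # → False  (1.0 + 2.0 + 0.2 = 3.2 GB does not)
```

Under these assumptions, an array of only ~1 GB can already fail to grow in a 3 GB address space, which is consistent with Matlab reporting "out of memory" at around 2.6 GB of usage.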
1 EUROPEAN COMMISSION. Directorate-General for Research & Innovation H2020 Programme Guidelines on FAIR data management in Horizon 2020. Version 26 July 2016. History of changes Version Date Change Page the guide was also published as part of the Online Manual all with updated and simplified content This version has been updated in the context of the all extension of the Open Research data Pilot and related data management issues New DMP template included 6. 2. 1. Background Extension of the Open Research data Pilot in Horizon 2020. Please note the distinction between open access to scientific peer-reviewed publications and open access to research data : publications open access is an obligation in Horizon 2020. data the Commission is running a flexible pilot which has been extended and is described below. 2 See also the guideline: Open access to publications and research data in Horizon 2020. This document helps Horizon 2020 beneficiaries make their research data findable, accessible, interoperable and reusable (FAIR), to ensure it is soundly managed. Good research data management is not a goal in itself, but rather the key conduit leading to knowledge discovery and innovation, and to subsequent data and knowledge integration and reuse. Note that these guidelines do not apply to their full extent to actions funded by the ERC. For information and guidance concerning Open Access and the Open Research data Pilot at the ERC, please read the Guidelines on the Implementation of Open Access to Scientific Publications and Research data in projects supported by the European Research Council under Horizon 2020. 3 The Commission is running a flexible pilot under Horizon 2020 called the Open Research data Pilot (ORD pilot). 
The ORD pilot aims to improve and maximise access to and re-use of research data generated by Horizon 2020 projects and takes into account the need to balance openness and protection of scientific information, commercialisation and Intellectual Property Rights (IPR), privacy concerns, security as well as data management and preservation questions. In the 2014-16 work programmes, the ORD pilot included only selected areas of Horizon 2020. Under the revised version of the 2017 work Programme , the Open Research data pilot has been extended to cover all the thematic areas of Horizon 2020. While open access to research data thereby becomes applicable by default in Horizon 2020, the Commission also recognises that there are good reasons to keep some or even all research data generated in a project closed. 4 The Commission therefore provides robust opt-out possibilities at any stage, that is during the application phase during the grant agreement preparation (GAP) phase and after the signature of the grant agreement. The ORD pilot applies primarily to the data needed to validate the results presented in scientific publications. Other data can also be provided by the beneficiaries on a voluntary basis, as stated in their data management Plans. Costs associated with open access to research data , can be claimed as eligible costs of any Horizon 2020. grant. 3. Participation in the ORD pilot is not part of the evaluation of proposals. In other words, proposals are not evaluated more favourably because they are part of the pilot and are not penalised for opting out of the pilot. For more on open access to research data , please also consult the H2020 Annotated Model Grant Agreement. 5 Participating in the ORD Pilot does not necessarily mean opening up all your research data . Rather, the ORD pilot follows the principle "as open as possible, as closed as necessary" and focuses on encouraging sound data management as an essential part of research best practice. 2. 
Data Management Plan: general definition

Data Management Plans (DMPs) are a key element of good data management. A DMP describes the data management life cycle for the data to be collected, processed and/or generated by a Horizon 2020 project. As part of making research data findable, accessible, interoperable and re-usable (FAIR), a DMP should include information on: the handling of research data during and after the end of the project; what data will be collected, processed and/or generated; which methodology and standards will be applied; whether data will be shared/made open access, and how; and how data will be curated and preserved (including after the end of the project). A DMP is required for all projects participating in the extended ORD pilot, unless they opt out of the ORD pilot. However, projects that opt out are still encouraged to submit a DMP on a voluntary basis.

3. Proposal, submission & evaluation

Whether a proposed project participates in the ORD pilot or chooses to opt out does not affect the evaluation of that project. In other words, proposals will not be penalised for opting out of the extended ORD pilot. Since participation in the ORD pilot is not an evaluation criterion, the proposal is not expected to contain a fully developed DMP. However, good research data management as such should be addressed under the impact criterion, as relevant to the project. Your application should address the following issues: What standards will be applied? How will data be exploited and/or shared/made accessible for verification and reuse? If data cannot be made available, why? How will data be curated and preserved? Your policy should also: reflect the current state of consortium agreements on data management; and be consistent with exploitation and Intellectual Property Rights (IPR) requirements. You should also ensure resource and budgetary planning for data management, and include a deliverable for an initial DMP at month 6 at the latest into your proposal.

4.
Research data management plans during the project life cycle

Once a project has had its funding approved and has started, you must submit a first version of your DMP (as a deliverable) within the first 6 months of the project. The Commission provides a DMP template in annex, the use of which is recommended but voluntary. The DMP needs to be updated over the course of the project whenever significant changes arise, such as (but not limited to): new data; changes in consortium policies (e.g. new innovation potential, decision to file for a patent); and changes in consortium composition and external factors (e.g. new consortium members joining or old members leaving). The DMP should be updated as a minimum in time with the periodic evaluation/assessment of the project. If there are no other periodic reviews foreseen within the grant agreement, then such an update needs to be made in time for the final review at the latest. Furthermore, the consortium can define a timetable for review in the DMP itself.

Periodic reporting

For general information on periodic reporting, please check the following sections of the online manual: how to fill in reporting tables for publications and deliverables; and the process for continuous reporting in the grant management system.

5. Support

Reimbursement of costs: costs related to open access to research data in Horizon 2020 are eligible for reimbursement during the duration of the project under the conditions defined in the H2020 Grant Agreement, in particular Article 6 and Article , but also other articles relevant for the cost category chosen.

Data Management Plan: a DMP template is provided in Annex 1. While the Commission does not currently offer its own online tool for data management plans, beneficiaries can generate DMPs online, using tools that are compatible with the requirements set out in Annex 1 (see also section 7 of Annex 1).

ANNEX 1. Horizon 2020 FAIR Data Management Plan (DMP) template. Version: 26 July 2016.
Introduction

This Horizon 2020 FAIR DMP template has been designed to be applicable to any Horizon 2020 project that produces, collects or processes research data. You should develop a single DMP for your project to cover its overall approach. However, where there are specific issues for individual datasets (e.g. regarding openness), you should clearly spell this out.

FAIR data management

In general terms, your research data should be 'FAIR', that is findable, accessible, interoperable and re-usable. These principles precede implementation choices and do not necessarily suggest any specific technology, standard, or implementation solution. This template is not intended as a strict technical implementation of the FAIR principles; it is rather inspired by FAIR as a general concept. More information about FAIR: FAIR data principles (FORCE11 discussion forum); FAIR principles (article in Nature).

Structure of the template

The template is a set of questions that you should answer with a level of detail appropriate to the project.
Grails Facebook-Graph Plugin OAuth2

We have been using the Grails facebook-graph plugin for a while now. It had been working perfectly until earlier this month, when FB apparently turned off their old authentication scheme and indirectly forced everybody to use OAuth2 instead. This post from FB, https://developers.facebook.com/blog/post/525/, describes the changes, and the issue in the Grails plugin seems to be that it does not comply with the new standard. The main issue appears to be in the way the active user data is being maintained in the plugin. This is currently based on the FB-provided cookie "fbs", which contains all the necessary session data related to the active user. Unfortunately, this is no longer provided by FB (apparently replaced by an "fbsr" cookie instead). I have searched the FB documentation, and in various forums, for details on how to upgrade the plugin, but unfortunately without luck. Can anyone help with a hint or two on what steps should be performed in order to get the plugin updated?

EDIT: I think the updated version of the plugin (0.14) has been pushed to the public repository. You should try grabbing that one first before reading the rest of my answer.

It looks like the plugin maintainer, Jesus Lanchas, made some updates over the last few days to enable OAuth2 support. It has not been pushed to the plugin repository yet, but I was able to get it working with my project. Here's what I did:

# Install a local copy of the plugin WITHIN my project
mkdir plugins-local
cd plugins-local
git clone git://github.com/chechu/grails-facebook-graph.git
mv grails-facebook-graph facebook-graph

Update BuildConfig.groovy and tell Grails where to load the plugin from.
I put this line before grails.project.dependency.resolution:

grails.plugin.location.'facebook-graph' = "plugins-local/facebook-graph"

Uninstall the existing facebook-graph plugin from my project:

grails uninstall-plugin facebook-graph

This is a temporary solution for me until the official update hits the repo, but it allows me to make sure I'm using the same new code everywhere.

That looks really great. We will try this out during the coming week. Hope the new official version will be on the repo soon. Have just downloaded and installed the new version. Seems to work perfectly. A big thanks to Jesus Lanchas :-)

EDIT: we released our Facebook Grails SDK on GitHub: https://github.com/benorama/facebook-grails-sdk. Currently only tested on Grails 2.0… Any feedback is welcome before we release it officially to Grails.org.

Indeed, it looks like the Grails facebook-graph plugin does not support OAuth2 Facebook authentication (which is required since October 1st 2011). We have already ported the official PHP SDK v3.1.1 to ColdFusion 9 (https://github.com/affinitiz/facebook-cf-sdk). Last month, we started to implement it as a plugin in Grails 2.0. It is currently at an alpha stage, so we have not released it yet, but it is working on our prototype. To connect to the Facebook Graph API, it uses RestFB internally. If you want to give it a try and give us some feedback, let me know, I'll send it to you by email.

Hi Benoit, depending on how your plugin works, it could be an option to upgrade to this instead. My original plan was to update the existing Grails fb-graph plugin and publish it back to the repo, since it might help other Grails users as well. However, if your plugin will work with Grails 1.3.7 it might be a fine solution as well. You are welcome to send me a link on email<EMAIL_ADDRESS>
var cp = new CreasePattern().rectangle(1, 1.618);
var origami = new OrigamiPaper("canvas-cp", cp);

// by default only edges are shown
origami.show.nodes = true;
origami.show.faces = true;
origami.show.sectors = true;

// set all fill colors transparent, only turn on each one upon hover
origami.style.node.fillColor = {alpha:0.0};
origami.style.face.fillColor = {alpha:0.0};
origami.style.sector.fillColors = [{alpha:0.0}, {alpha:0.0}];

// this is required if new components are shown. they didn't yet exist on the canvas
// (todo: this is hard to remember and shouldn't be required. need a work-around)
origami.draw();

origami.patternscale = 1/12 * 2;
// origami.makeTouchPoint(new XY(0, 0.5 - 0.025 * origami.patternscale * 12 ));
// origami.makeTouchPoint(new XY(0, 0.5 + 0.025 * origami.patternscale * 12 ));
// origami.makeTouchPoint(new XY(0.5, 0.5 - origami.patternscale ));
// origami.makeTouchPoint(new XY(0.5, 0.5 + origami.patternscale ));
origami.bounds = cp.bounds();
for(var i = 0; i < 3; i++){
	var dir = (i%2 * 2)-1;
	origami.makeTouchPoint(new XY(origami.bounds.size.width*0.5, origami.bounds.size.height*0.5 + dir*(i+0.5)*0.25 ));
	origami.makeTouchPoint(new XY(origami.bounds.size.width*0.5, origami.bounds.size.height*0.5 - dir*(i+0.5)*0.25 ));
}
origami.makeTouchPoint(new XY(0, origami.bounds.size.height*0.5 - 0.015 ));
origami.makeTouchPoint(new XY(0, origami.bounds.size.height*0.5 + 0.015 ));

origami.redraw = function(){
	this.cp.clear();
	var scale = this.patternscale;
	var gap = 0.025 * scale * 12;
	for(var i = 0; i < 6; i++){
		var dir = (i%2 * 2)-1;
		this.cp.crease(new Line(this.touchPoints[i].position.x, this.touchPoints[i].position.y, 1, dir*.04));
	}
	for(var i = 0; i < this.xs.length-1; i++){
		this.cp.creaseAndReflect( new Line(this.xs[i], this.bounds.size.height*0.5, 0, 1) );
	}
	this.cp.clean();
	this.cp.creaseAndReflect( new Ray(0, this.touchPoints[6].position.y, 1, -0.5) );
	this.cp.creaseAndReflect( new Ray(0, this.touchPoints[7].position.y, 1, 0.5) );
	this.cp.clean();
	origami.draw();
}

origami.reset = function(){
	this.cp.clear();
	var scale = this.patternscale;
	this.xs = [];
	do{
		this.xs = [];
		for(var i = 0; i < 1; i += Math.random() * (scale/2) + (scale/3)){
			this.xs.push(i);
		}
	} while(1 - this.xs[this.xs.length-1] > (scale/4));
	this.xs[this.xs.length-1] = 1;
	// console.log(this.xs.length);
	// this.ys = Array.apply(null, Array(this.xs.length+1)).map(function(){return 0;});
	// var upwards = true;
	// var horizontal = false;
	this.redraw();
}
origami.reset();

origami.onMouseMove = function(event){
	if(this.mouse.isPressed){ this.redraw(); }
	// update() returns all crease lines back to their original color
	origami.update();
	// get the nearest parts of the crease pattern to the mouse point
	var nearest = cp.nearest(event.point);
	// get() will return the paperjs object (line, polygon) that reflects the data model object
	// this paperjs object is what we style
	origami.get(nearest.node).fillColor = this.styles.byrne.darkBlue;
	origami.get(nearest.edge).strokeColor = this.styles.byrne.yellow;
	origami.get(nearest.face).fillColor = this.styles.byrne.red;
	origami.get(nearest.sector).fillColor = this.styles.byrne.blue;
}
Apr 06 · Windows shouldn't change the keyboard layout automatically, but it remembers the individual language settings for every application as long as the application is running. This wikiHow teaches you how to change the language used in your computer's web browser. In this post, we are going to discuss how one can do this. Language is a system that consists of the development, acquisition, maintenance and use of complex systems of communication, particularly the human ability to do so; a language is any specific example of such a system. Questions concerning the philosophy of language, such as whether words can represent experience, have been debated at length. The Windows Mobile Device Center 6.1 for Windows Vista enables you to set up new partnerships, synchronize content, and manage music, pictures and video with Windows Mobile powered devices (Windows Mobile 6 and later). On average, full-stack developers are comfortable coding with 5 to 6 major languages and frameworks (vs. 4 for everyone else). Azure App Service enables you to build mobile back ends and host web apps and RESTful APIs in the programming language of your choice without managing infrastructure. For more information on doing that, see this article for Windows 10 and this article for Windows 7 and 8. In this article, you will find direct download links to Windows 7 SP1 language packs for all available languages. I'm writing a script (PowerShell) for a SQL Server Express install. I live in Thailand, but my old Windows display language was English, so I prefer using English menus in Windows. Mar 15 · I just bought a laptop in Singapore, but I need to use the Chinese version; my Windows 7 is the English version, so how do I install the Chinese version on my laptop? (Original title: how do I.) If you download a Windows 7 DVD from the Internet, most likely it will be downloaded in English. The scientific study of language is called linguistics. Open your Android's Settings. This article compares UML tools.
How to Change the Language in Android. UML tools are software applications which support some functions of the Unified Modeling Language. It is now more common for people to use multiple languages for work and life. Microsoft also allows users to switch display languages on Windows 10. Injecting a language pack into Windows 10 WIM images can be achieved in many different ways. This wikiHow teaches you how to change the default language on your Android phone or tablet, as well as how to change your Android keyboard's input language. 2) Phone: from home. Windows Mobile 6.1 language change. I installed Windows 8 (I had some unrelated problems, so I had to revert to Windows 7, though) but it used Shift-JIS (Japanese). MDT has a module to easily import images. Note: if you want to completely and permanently change Office to a different language, you'll get the best results if you first set that to be your default display language in Windows as well. Change Exchange email password on Android (version 4.x). Is it possible for a Windows Mobile 6.5 PDA to connect up to a Windows 10 PC? Upgrading from Windows 8.1 Home Single Language to Windows 10 Home Single Language. On Windows 8.1 I was able to use Windows Mobile Device Center to link up to the PDA over USB. The problem is: I have to change the OS language (Region and Languages) to fr_FR (French from France) silently. You can change the language used in Google Chrome, Microsoft Edge, Firefox, Internet Explorer and Safari. It offers auto-scaling and high availability, supports both Windows and Linux, and enables automated deployments from GitHub and Azure. (Fixed) How to Download and Install a Windows 10 Language Pack. Before upgrading the PC from Windows 8.1. There has always been a "Language for non-Unicode programs" setting in the "Region and Language" settings in XP, Vista and 7.
Failed to deserialize CompileTimeKeyframe because splitting by @ did not result in 3 tokens! Running the latest code on Unity 5.3.5f1 on OS X 10.11.5, I got this error immediately after installing the tracker in my project and Unity compiled it: Failed to deserialize CompileTimeKeyframe because splitting by @ did not result in 3 tokens! UnityEngine.Debug:LogError(Object) DTCompileTimeTracker.CompileTimeKeyframe:Deserialize(String) (at Assets/CompileTimeTracker/Editor/CompileTimeKeyframe.cs:18) DTCompileTimeTracker.CompileTimeKeyframe:<DeserializeList>m__4(String) (at Assets/CompileTimeTracker/Editor/CompileTimeKeyframe.cs:42) System.Linq.Enumerable:ToList(IEnumerable`1) DTCompileTimeTracker.CompileTimeKeyframe:DeserializeList(String) (at Assets/CompileTimeTracker/Editor/CompileTimeKeyframe.cs:42) DTCompileTimeTracker.CompileTimeTrackerData:Load() (at Assets/CompileTimeTracker/Editor/CompileTimeTrackerData.cs:49) DTCompileTimeTracker.CompileTimeTrackerData:.ctor(String) (at Assets/CompileTimeTracker/Editor/CompileTimeTrackerData.cs:29) DTCompileTimeTracker.CompileTimeTracker:.cctor() UnityEditor.EditorAssemblies:SetLoadedEditorAssemblies(Assembly[]) Subsequent compilations get the same error, but I do also see: Compilation Finished: 22.68s (error) UnityEngine.Debug:Log(Object) DTCompileTimeTracker.CompileTimeTrackerWindow:LogCompileTimeKeyframe(CompileTimeKeyframe) (at Assets/CompileTimeTracker/Editor/CompileTimeTrackerWindow.cs:31) DTCompileTimeTracker.CompileTimeTracker:HandleEditorFinishedCompiling() (at Assets/CompileTimeTracker/Editor/CompileTimeTracker.cs:46) DTCompileTimeTracker.EditorApplicationCompilationUtil:OnEditorUpdate() (at Assets/CompileTimeTracker/Editor/Util/EditorApplicationCompilationUtil.cs:29) UnityEditor.EditorApplication:Internal_CallUpdateFunctions() The time tracker window shows no information. I tried creating an empty Unity project and adding the tracker and it seems to work fine there, no errors and data shows up in the window. 
After going back to my main project, I no longer get the error. However, the compile times from the empty project are showing up in the window with my main project. That's really a separate problem so I'll make a new issue for that.

Did you have the old version of the tracker running in your project? It's possible that you then updated and the serialization was switched, leaving old / differently formatted data in the editor prefs. While you do get that error when failing to parse the tracker information, the next time the tracker saves it should ignore the failed data and the data should be clean again.

I had never had the tracker installed before. The same thing occurred when I first did a build on another machine on the codebase, and then it also went away again. Looks perhaps like a problem with any brand-new install. John
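Judging from the error message and the stack trace, each keyframe is serialized as a single string whose fields are joined by '@' into exactly three tokens, and a list of keyframes is a further join of those strings. Here is a minimal sketch of that skip-on-failure parsing, written in Python even though the plugin is C#; the field names and the list separator are guesses for illustration, not the plugin's actual ones.

```python
# Hypothetical keyframe format: "elapsedMs@timestamp@hadErrors", with keyframes
# joined by ";" into one blob. Field names and separator are illustrative guesses.

def deserialize_keyframe(s):
    tokens = s.split("@")
    if len(tokens) != 3:
        # Mirror the plugin's recovery path: reject a malformed entry (e.g. data
        # written by an older serialization format) instead of crashing, so it
        # is simply dropped the next time the tracker saves.
        return None
    elapsed_ms, timestamp, had_errors = tokens
    return {
        "elapsed_ms": int(elapsed_ms),
        "timestamp": int(timestamp),
        "had_errors": had_errors == "True",
    }

def deserialize_list(blob, sep=";"):
    frames = (deserialize_keyframe(part) for part in blob.split(sep) if part)
    return [f for f in frames if f is not None]

# One well-formed keyframe plus one stale, old-format entry: the bad entry is
# skipped rather than raising, and a single keyframe survives.
frames = deserialize_list("22680@1468892520@False;oldformatdata")
print(len(frames))  # 1
```

This matches the behavior described in the thread: a one-time error on first parse of stale or differently formatted data, which then disappears once a clean save overwrites it.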
My name is Henrique Pereira Coutada Miranda. Currently I am a Post-doc in Physics and Materials Science at the Université catholique de Louvain. Here you will find small scripts, games and programs that I implemented and want to share with everyone. Some of these projects are still under development. 2017-present: Post-doc position in the Université catholique de Louvain in the groups of Prof. Gian-Marco Rignanese and Prof. Geoffroy Hautier 2013-2017: PhD on "Ab initio approaches to Resonant Raman Spectroscopy of Transition Metal Dichalcogenides" under the supervision of Prof. Ludger Wirtz in the Theoretical Solid-State Physics Group in the University of Luxembourg. 2012-2013: Master thesis on “Embedding schemes for treating magnetic impurities and defects in metallic systems” under the supervision of Prof. Matthieu Verstraete in the University of Liège and Prof. Myrta Gruning at the time in the University of Coimbra now at the University of Belfast. 2011-2013: Master in Physics (specialization in Computational Physics) in the Physics Department of the Faculty of Sciences and Technology of the University of Coimbra. 2008-2011: Degree in Physics in the Physics Department of the Faculty of Sciences and Technology of the University of Coimbra. Phonon and exciton visualization - H.P.C. Miranda, S. Reichardt, G. Froehlicher, A. Molina-Sánchez, S. Berciaud, and L. Wirtz, Nano Lett. 4, 17 (2017) - T. Galvani, F. Paleari, H.P.C. Miranda, A. Molina-Sánchez, L. Wirtz, S. Latil, H. Amara, and F. Ducastelle, Phys. Rev. B 94, 125303 (2016) - J. Li, H. P. C. Miranda, Y.-M. Niquet, L. Genovese, I. Duchemin, L. Wirtz, and C. Delerue, Phys. Rev. B 92, 075414 (2015) - M. Endlich, H. P. C. Miranda, A. Molina-Sánchez, L. Wirtz, and J. 
Kröger, Annalen der Physik 526, 372 (2014)

Developer of yambopy
Developer of phononwebsite
Developer of excitonwebsite
Collaborator of yambo

If you have any suggestions or questions, send an e-mail to: miranda dot henrique at gmail

Fonds National de la Recherche Luxembourg (2013-2017): http://www.fnr.lu/
University of Luxembourg (2013-2017): http://wwwen.uni.lu/
When it comes to being a manager, a team leader, there are a lot of dos and don'ts one is told to follow. A manager is a "person"; let us start from there. A person who is given a team to handle and to get the work done. The manager is responsible for enabling a conducive environment in the team and maintaining the productivity levels of the team members. The manager is a person the team members look up to for learning new things and practicing existing ones. Now in between all these roles, there is one more role a manager has to play, and that is of being a go-between for the management and the team members and employees. Yes, I'm talking about an HR manager here! How does a manager motivate the team members? How does a manager bond well with the team members? Does a manager need to bond well with the team members at all? These could be very subjective questions, dependent on the situation and the persona, because behind every manager there is a personality at work. This personality has a great effect on how a manager conducts himself/herself and manages the team. If one has to ask me how I handle my team, I would say this:
- I balance: I pamper and I reprimand.
- I believe in giving freedom and space: lining up the tasks and checking back on the deadline.
- I trust: to the extent that team members know they cannot lie to me; in fact, they need not lie to me!
- I mingle: I become like my team offline, on trips, on small vacations and on eat-outs.
All this with a pinch of professionalism and boundaries. There is a very thin boundary line between a mingling manager and a strictly professional manager. I do not believe in being only a manager to my team. I want to connect at a human level, to know the person behind the role, which helps me in understanding their potential and limitations. For me, being a soft manager works well, as I don't have to reprimand often and the work is done within the stipulated deadlines.
There are times when I've been pointed out as the one who spoils the team rotten by pampering and being too soft. But as long as my team is working on my word and delivers on deadlines without fail, and yet feels light and happy, my job is done! So be it, because I'm a soft manager or whatever! The bottom line is met. That said, there is no fixed formula by which one can manage the team all the time. Some tweaking needs to be done in the way one handles the team, depending on the team members and the situation. What is your idea of a good manager? Do you advocate maintaining distance and not mingling with the team offline? How do you get the team to trust you and follow?
Human-in-the-loop machine learning (HITL) simply means this: keeping humans in the loop in the development of machine learning models. It integrates human-labelled data into the machine learning model and goes through a feedback cycle to teach models to yield the desired output. From a retail customer's perspective, the seamless experience of contactless technology is a dream. Amazon has made it possible with its cashierless Just Walk Out1 technology and smart-carts strategy. Behind the scenes is where the "magic" happens. Every item you pick up and put into your cart is labelled. This also triggers inventory management if an item is running low on the shelves. That's HITL at work. The use of artificial intelligence (AI) has its flaws, particularly in shoplifting detection2, therefore requiring human expertise and guidance. The goal of HITL is to build smarter machine learning systems: enhanced by human-labelled data, these systems increase work productivity and efficiency. In 2020, Google Health's medical AI system, DeepMind, for example, detected breast cancer in more than 2,600 cases3 that a radiologist might have missed. However, rather than depending on AI entirely, machine learning systems still perform best when complemented with human intellect. As for how much humans are involved in machine learning4, developers adopt a variation of the Pareto principle: 80% computer-driven AI, 19% human input, and 1% randomness. Human-in-the-loop machine learning includes three main stages: training, tuning and testing. Let's see below how these stages are applied in the machine learning lifecycle. Training: oftentimes, data can be incomplete or messy. Humans add labels to raw data to provide meaningful context so that machine learning models can learn to produce desired results, identify patterns, and make correct decisions.
Data labelling is a crucial step in building AI models, as properly labelled datasets provide a baseline for further application and development. Tuning: in this stage, humans check the data for overfitting5. While data labelling establishes the foundation for an accurate output, overfitting happens when the model fits the training data too well. When the model memorizes the training dataset, it cannot generalize, making it unable to perform against new data. The remedy is to allow a margin of error, so the model can cope with unpredictability in real-life scenarios. It is also in the tuning stage that humans teach the model about edge cases, or unexpected scenarios. For example, facial recognition6 enables convenience but is susceptible to gender and ethnicity bias when datasets are misrepresented. Testing: lastly, humans evaluate whether the algorithm is overly confident, or lacking, when it reaches an incorrect decision. If the accuracy rate is low, the machine goes through an active learning cycle wherein humans give feedback for the machine to reach the correct result or increase its predictability. For social networking platforms to be a safe space for everyone, large-scale content moderation services are needed. AI optimizes content moderation services to recognize patterns and automatically detect and censor sensitive content. Humans first train the system to check text, usernames, images, and videos for hate speech, cyberbullying, explicit or harmful content, fake news, and spam. The data then goes through fine-tuning and eventually testing, to check contextual and language nuances. A medical imaging model can powerfully and accurately detect malignant and benign cells before recommending treatment. To keep machines up to speed, subject-matter expertise, up-to-date training, and familiarity are key. High-quality medical datasets are difficult to find because of limited healthcare data and patient data protection laws.
Thus, HealthTech companies rely on data labelling services to augment their training data. To develop self-driving cars, massive amounts of image, video, or sensor data are collected and annotated by humans. For a self-driving car to correctly recognize objects in its environment, it requires adequately labelled examples to identify pedestrians, bikes, trees, and more. These details provide insight into various failure scenarios.

Augment Rare Training Data
While there are many open-source datasets available on the web, they generally don't work for specific problems. For such rare use cases, the data needs to be created and curated by humans.

Increase Safety & Precision
There are many instances where you need human-level precision to ensure safety and accuracy. For example, when developing a fleet of autonomous vehicles, it is best to have the system monitored by humans to catch and fix any failure cases.

Incorporate Subject Matter Experts
By having a team of subject matter experts (SMEs) collecting and annotating training data, you can create some very sophisticated applications. For example, a medical image analysis algorithm would require a wealth of meticulously annotated image data from experts in the field.

Artificial intelligence is only as intelligent as the data it's given. Machine learning still requires highly capable data annotation experts to ensure an excellently performing model. It's just a matter of finding the right balance between people and technology. TaskUs sets itself apart with its people-first culture in the fast-paced, tech-dominated industry. We provide data collection, annotation, and evaluation services to power the most innovative AI solutions. Whether you're building computer vision or natural language processing (NLP) applications, we are equipped to handle complex, large-scale data labelling projects.
Our clients include a leading autonomous vehicle company, for which we provide optimized data annotation and labelling services to help them achieve rapid scaling of their operations. Our expert AI Operations solutions resulted in:

Partner with Us in building better-performing machine learning models faster and more efficiently.
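The train / tune / test feedback cycle described above can be sketched as a toy active-learning loop in which an oracle function stands in for the human annotator. Everything in this sketch, the one-feature threshold "model", the names, and the numbers, is illustrative only; it is not TaskUs tooling or a real labelling pipeline.

```python
# Toy human-in-the-loop cycle: a one-feature threshold "model" is repeatedly
# retrained on labels supplied by a human oracle for the examples the model
# is least certain about.

def oracle(x):
    # Stand-in for the human annotator: the true decision boundary is 0.5.
    return x >= 0.5

def query(pool, threshold):
    # Uncertainty sampling: pick the unlabelled example closest to the boundary.
    return min(pool, key=lambda p: abs(p - threshold))

pool = [i / 200 for i in range(200)]   # unlabelled pool of examples
labelled = []                          # (example, human label) pairs
threshold = 0.9                        # deliberately bad starting model

for _ in range(20):                    # the feedback cycle
    x = query(pool, threshold)
    pool.remove(x)
    labelled.append((x, oracle(x)))    # human labels the hardest example
    pos = [e for e, y in labelled if y]
    neg = [e for e, y in labelled if not y]
    # "Retrain": place the boundary midway between the classes seen so far.
    threshold = (min(pos, default=1.0) + max(neg, default=0.0)) / 2

print(threshold)  # ends close to the true boundary of 0.5
```

The point of the sketch is the loop shape, not the model: the machine asks for labels where it is least certain, the human answers, and each answer tightens the model, which is the active-learning cycle the testing stage describes.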
If you're a fan of Linux systems and have a hard drive that is failing, this article covers the 5 tools we think are best for recovering your valuable data. There is hardly a computer user who has not, at some point, found themselves in the awkward position of being unable to read a data storage device because of damaged or failing hardware. Since the panic of data loss in such a situation probably keeps you from thinking clearly while you search desperately for an immediate solution, here are 5 data recovery tools for Linux systems. For Windows systems we have already written several articles, which you can find with a simple search on our website. So let's get started. While Ddrescue is not a pure data recovery tool, it should be the first step on your journey to recovering your files. Ddrescue creates an image of your troublesome drive or partition so you can analyze a copy of your broken disk rather than the disk itself. Always copy your disk to a separate image before you start file recovery operations with the following tools: the more you use the actual damaged drive, the more damage you can cause it. What you see in the image above is some Ddrescue work. The first command copies the entire disk to an image called "backup.img". The second command then copies only the damaged blocks into the same image, passing over each of these blocks three times to try to read them correctly. When you run these commands, always use a logfile. Backups can take hours or days to complete; without a log, any interruption will make you start the process again from scratch. When this process is complete for your disk or partition, you can mount the copied image and use the following utilities to recover files from it. Further use of the following recovery tools for Linux will retrieve data from the "backup.img" created here. Foremost uses the structure of common file types to retrieve data.
You can either retrieve the entire disk image with all its files, or specify the file types that are most interesting to you. What you can see in the photo above are Foremost's results in verbose mode (the -v option). The -t option searches for .jpg file types, and the -i and -o options respectively indicate the input file and the output directory. You can see Foremost analyzing the image created with Ddrescue in the previous step. This image (backup.img) has some JPEG files inside it. Foremost was able to find ten such files and, after pulling them from the image, copied them to the output directory. Scalpel, whose name means a surgeon's knife, is based on Foremost, but aims to be more streamlined in its operation. It makes effective use of multithreading and asynchronous input/output to search within images. In addition, it gives users the ability to specify the headers and footers they want to use to recover files. Users can also specify the file types they want to retrieve by changing the Scalpel configuration file. By default, it searches for all files, even without the verbose option (-v) being enabled. In this photo you can see the results of Scalpel's analysis of the image "backup.img". The basic command (listed at the bottom of the screen) requires only an output folder and an image to analyze. PhotoRec differentiates itself from its competitors by focusing on retrieving photos, videos and text documents. It also works as an interactive utility through its menu-driven interface. The initial PhotoRec command prompts you to specify the desired image (the backup.img file) and the output folder. PhotoRec then displays an interface where it shows the size of the image. On subsequent screens it asks for the type of partition on the disk and whether you want to search the entire image. And finally, we arrive at Grep.
Perhaps this program doesn't seem like the obvious data recovery application, but Grep has the power to find deleted or lost text files by searching for strings that still exist in a block or disk image. There is a file in backup.img called "myfile". It contains only one line of text, "This is the file I will try to recover". Grep uses this string as a starting point for recovering the file. Among other parameters, you can see in this example that it stores the data around the detected string in a new binary file named "foundtext". In particular, you should pay attention to (and adjust) the -C parameter, which controls the amount of context captured around the string you are looking for. The command in this example tells Grep to keep one line of text before and after the string we're looking for. If you pass -C 200, grep will keep 200 lines both before and after the string. This may not seem necessary, but it can be important for larger text files with hundreds of lines. Of course, you need to remember some text from your own files so that grep has a starting point for its search. Grep will produce a binary file as its result. Some parts of it will be readable, such as the desired line of text near the bottom of the photo below. Beyond that, it's your job to search manually for the data you need. It's certainly hard work, but it beats not recovering the file at all. In a nutshell: first copy your drive or partition with Ddrescue, then work on that copy with any of the Linux recovery tools above. Don't be afraid to try more than one tool, especially if the first one didn't find the data you want. Data recovery requires patience and a bit of luck, but it pays off, and before you know it you will have your valuable files back.
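The Grep technique above can be tried end to end without a damaged disk: build a small stand-in image containing a known line of text and carve the surrounding context out of it. In real recovery you would point grep at backup.img instead of this demo file.

```shell
# Create a demo "image" containing the line we will search for.
printf 'junk before\nThis is the file I will try to recover\njunk after\n' > demo.img

# -a treats binary data as text; -C 1 keeps one line of context
# before and after each match. The result lands in "foundtext".
grep -a -C 1 'This is the file I will try to recover' demo.img > foundtext

cat foundtext
```

On a real image the output is mostly binary noise around the readable fragments, which is why a generous -C value and a manual search through "foundtext" are usually needed.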
I Blocks IDEV KIT In-circuit Debug System - Helps solve programming problems quickly and simply - Compatible with ECIO, MIAC - Also ideal for use with third-party and users' own programming systems - Allows you to start, stop, and step through programs, and to see and alter variable values The IDEV can be connected to microcontroller hardware systems to provide a real-time debug facility where it is possible to step through the UC program on the PC and through the program in the hardware at the same time. The system is controlled from within the UC environment, where controls allow users to start, stop, pause and step through their program one icon at a time. Under user control the UC software shows the location of the program in the flow chart and the value of all variables in the program, and allows users to alter variable values while the program is paused. It is compatible with I-blocks, users' own hardware and third-party PIC and AVR programming systems. I Blocks ATmega8 Development Board The ATmega8 Experiment Board is a development board designed for ATmega8 microcontrollers. It is designed to give designers a quick start developing code on these devices. There is a socket on the ATmega8 Experiment Board so the microcontroller can be changed easily, and batch programming is supported over the ISP port. It is easy to handle, and you can build different projects according to your requirements. I Blocks PIC BASE 40 Development Board If you need lots of Analog to Digital channels, or plenty of program space (up to 32K!), this one is for you! Prototype board for 40-pin PIC microcontrollers with power supply circuit, 20MHz crystal oscillator circuit, RS232 port, and ICSP/ICD programming port. I Blocks 8051 Development Board This is a development board for the 8051 microcontroller. The aim of this small board is to be able to run as much existing 8051 software as possible.
The Intel 8051 microcontroller is one of the most popular general-purpose microcontrollers in use today. Since the 8051 allows different memory configurations (Von Neumann or separate code and data spaces), the board was designed in such a way that 8 different memory configurations can be set. - On-board socket for easily changing the MCU, with batch programming supported over the ISP port - Power voltage level 5V - Crystal configurable: on-board 12MHz crystal or an external crystal mounted via the socket - All pins are marked on the board and connected to pin headers for further expansion I Blocks ATmega128RFA1 Development Board This is a breakout board for the ATmega128RFA1. It includes an onboard 3.3V regulator, chip antenna, and DC input jack. The board can be powered with either a 5V or a 9V wall wart. The ATmega128RFA1 is an IEEE 802.15.4 compliant single chip, combining the industry-leading AVR microcontroller and a best-in-class 2.4GHz RF transceiver providing the industry's highest RF performance for single-chip devices, with a link budget of 103.5dBm. I Blocks FPGA Module 6000LE - Designed for educational use - Can be used with block diagrams, VHDL or Verilog - Contains 3K logic elements – 6kLE option available - NIOS core compatible - Can be used as a component in projects This system has been put together from I-blocks to allow rapid development of electronic solutions based on FPGA technology and to provide a superb platform for developing FPGA projects. This FPGA daughter board sits on top of the existing I-blocks CPLD programming board (EB020) to provide 7 full I-blocks ports which can interface to other I-blocks: from simple LED and switch boards through to more complex boards like internet interfaces, IrDA communication systems, internet and Bluetooth boards. The FPGA daughter board itself can be removed from the CPLD board to provide a leaded component that can be used in your own projects (serial memory programmer required).
This solves the difficult issue of handling FPGA packages, which are hard to solder. The FPGA device used is Altera's EP1C6T144C7, which contains 6000 logic elements (the EP1C3T144C7 is the 3000-LE variant). I Blocks Arduino Ethernet Shield The Ethernet Shield allows an Arduino to connect to the internet. It is based on the Wiznet W5100 Ethernet chip, providing a network (IP) stack capable of both TCP and UDP. The Ethernet Shield supports up to four simultaneous socket connections. Use the Ethernet library to write sketches which connect to the internet via a standard RJ45 Ethernet jack using the shield.
Onboard heat hazard event data Hi @floriangallo and @aliebadi22 I think you are both looking at heat (as opposed to temperature) as a hazard. Starting this issue as a place to share your sources! @floriangallo, do you happen to know where to find the lost labour per degree day in Zhang and Shindell (or is it in Zivin and Neidell?) for the vulnerability model? Just if you know off the top of your head, otherwise @mariembouchaala and I can find it! Hey, sorry it took me some time to find it again, I didn't write down the proper citation in my notebook! It's in Neidell et al., 2021 that you have the simplified function, with 2.6 minutes lost / degree day https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0254224 Hi @floriangallo, as discussed, for the 100 km data the next steps are: For labour productivity (WBGT): Create just a first set for on-boarding (say 2050, ssp585?) For the labour supply side (degree days): Compute the degree days above 90F but converted to Celsius Do this for existing sets (historical plus ssp585) Then extend to multiple years and multiple scenarios Broadly, the idea for storage of 'raw' data in S3 is that for each pixel and each model, we store the time series of temperatures. For WBGT this would be temperature plus relative humidity. I shared with @joemoorhouse the Celsius dataset corresponding to what we previously referred to as dd90F. Now calling it hdd32C (so we know that it is degree days ABOVE a threshold). We have it for the historical period (1995-2014 average), 2030 (2021-2040), 2040 (2031-2050), 2050 (2041-2060), averaged over 6 models: ACCESS-CM2 CMCC-ESM2 MPI-ESM1-2-LR NorESM2-MM CNRM-CM6-1 MIROC6 @mariembouchaala we'll have to change the damage function, which is based on Fahrenheit, by multiplying the damage function ratio by 1.8 (Fahrenheit to Celsius ratio). Hi Has the above mentioned temperature data been loaded into the S3 bucket yet? I'm having a look at the file names in the bucket and can't find anything that rings a bell.
Hi @MLevinMazars, Good use of the issue :) Yes, you should now see some curves with names like: redhat-osc-physical-landing-647521352890/hazard/hazard.zarr/chronic_heat/osc/v1/mean_degree_days_above_32c_ssp585_2030 Those are the ones. I have not yet pushed the on-boarding scripts to physrisk, so it is not as transparent as it should be (still working on creating the Mapbox visualizations). As discussed with @floriangallo and @MLevinMazars, a question to ponder: how can we come up with some uncertainty in the value of the WBGT aggregated work loss to apply to a given value (i.e. our secondary uncertainty estimate)? Hi @floriangallo, @MLevinMazars and all, Just FYI, I pushed data to the sandbox (useful to visualize/check values). http://physrisk-ui-latest-sandbox.apps.odh-cl1.apps.os-climate.org/ Regarding the WBGT uncertainty, not a definitive answer but two things: it appears that the vulnerability function used in Zhang and Shindell comes from this paper: https://www.annualreviews.org/doi/pdf/10.1146/annurev-publhealth-032315-021740 even though I can't find it at first look, it might be a case of Russian-doll references hidden within each other, but we might be able to find some uncertainty quantification in there; of course, we'll also have the uncertainty related to the climate models used here; we have only 4 or 5 now but we can fold some kind of hazard-related uncertainty in there too. Complete
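The two steps discussed in this thread, summing degree days above a 32C threshold and rescaling a Fahrenheit-calibrated damage function by 1.8 when it is fed Celsius degree days, can be sketched minimally as below; the function and variable names are illustrative, not taken from physrisk:

```python
def degree_days_above(daily_mean_temps_c, threshold_c=32.0):
    """Sum of (T - threshold) over days whose mean exceeds the threshold."""
    return sum(max(t - threshold_c, 0.0) for t in daily_mean_temps_c)

def loss_from_celsius_dd(dd_celsius, loss_per_f_degree_day):
    """Apply a per-Fahrenheit-degree-day damage rate to Celsius degree days.

    One degree Celsius of exceedance equals 1.8 degrees Fahrenheit,
    hence the 1.8 multiplier mentioned in the thread.
    """
    return dd_celsius * 1.8 * loss_per_f_degree_day

daily_means_c = [30.0, 33.0, 35.5]     # three example daily means in Celsius
dd = degree_days_above(daily_means_c)  # (33-32) + (35.5-32) = 4.5
print(dd)
print(loss_from_celsius_dd(dd, 2.6))   # using the 2.6 minutes / F degree day rate
```

The same rescaling applies whether the rate is minutes of labour lost or any other per-degree-day impact measure.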
Error running MBUnit test from Visual Studio using TestDriven.Net I'm getting the following error trying to execute a unit test from Visual Studio. I've poked around a little, and have re-installed both Gallio and TD.Net, but still get the same error. I'm kind of clueless on where to begin, and searching google turns up next to nothing... Gallio.Loader.LoaderException: Gallio.Loader.LoaderException: Failed to setup the runtime. ---> Gallio.Runtime.RuntimeException: The runtime could not be initialized. ---> Gallio.Runtime.RuntimeException: Could not register component 'TDNetRunner.UI.PlaceholderPreferencePaneProvider' of plugin 'Gallio.TDNetRunner.UI' because it implements service 'Gallio.UI.PreferencePaneProvider' which was not found in the registry. at Gallio.Runtime.Extensibility.PluginCatalog.RegisterComponents(IRegistry registry, IList1 topologicallySortedPlugins, IList1 pluginDescriptors) in c:\Server\Projects\MbUnit v3.3\Work\src\Gallio\Gallio\Runtime\Extensibility\PluginCatalog.cs:line 225 at Gallio.Runtime.Extensibility.PluginCatalog.ApplyTo(IRegistry registry) in c:\Server\Projects\MbUnit v3.3\Work\src\Gallio\Gallio\Runtime\Extensibility\PluginCatalog.cs:line 69 at Gallio.Runtime.DefaultRuntime.RegisterLoadedPlugins() in c:\Server\Projects\MbUnit v3.3\Work\src\Gallio\Gallio\Runtime\DefaultRuntime.cs:line 270 at Gallio.Runtime.DefaultRuntime.Initialize() in c:\Server\Projects\MbUnit v3.3\Work\src\Gallio\Gallio\Runtime\DefaultRuntime.cs:line 170 --- End of inner exception stack trace --- at Gallio.Runtime.DefaultRuntime.Initialize() in c:\Server\Projects\MbUnit v3.3\Work\src\Gallio\Gallio\Runtime\DefaultRuntime.cs:line 197 at Gallio.Runtime.RuntimeBootstrap.Initialize(RuntimeSetup setup, ILogger logger) in c:\Server\Projects\MbUnit v3.3\Work\src\Gallio\Gallio\Runtime\RuntimeBootstrap.cs:line 74 at Gallio.Runtime.Loader.GallioLoaderBootstrap.SetupRuntime(String runtimePath) in c:\Server\Projects\MbUnit 
v3.3\Work\src\Gallio\Gallio\Runtime\Loader\GallioLoaderBootstrap.cs:line 49 --- End of inner exception stack trace --- at Gallio.Loader.LoaderManager.LoaderImpl.SetupRuntime() at Gallio.Loader.Isolation.IsolatedEnvironmentManager.IsolatedInitializer.SetupRuntime() at Gallio.Loader.Isolation.IsolatedEnvironmentManager.IsolatedEnvironment.UnwrapException(Exception ex) at Gallio.Loader.Isolation.IsolatedEnvironmentManager.IsolatedEnvironment.SetupRuntime() at Gallio.Loader.SharedEnvironment.SharedEnvironmentManager.CreateSharedEnvironment() at Gallio.Loader.SharedEnvironment.SharedEnvironmentManager.GetSharedEnvironment() at Gallio.TDNetRunner.Core.LocalProxyTestRunner.CreateRemoteProxyTestRunner() at Gallio.TDNetRunner.Core.LocalProxyTestRunner.RunImpl(IFacadeTestListener testListener, String assemblyPath, String cref, FacadeOptions facadeOptions) at Gallio.TDNetRunner.Core.BaseProxyTestRunner.Run(IFacadeTestListener testListener, String assemblyPath, String cref, FacadeOptions facadeOptions) System.Runtime.Remoting.ServerException: Gallio.Loader.LoaderException: Gallio.Loader.LoaderException: Failed to setup the runtime. ---> Gallio.Runtime.RuntimeException: The runtime could not be initialized. ---> Gallio.Runtime.RuntimeException: Could not register component 'TDNetRunner.UI.PlaceholderPreferencePaneProvider' of plugin 'Gallio.TDNetRunner.UI' because it implements service 'Gallio.UI.PreferencePaneProvider' which was not found in the registry. 
at Gallio.Runtime.Extensibility.PluginCatalog.RegisterComponents(IRegistry registry, IList1 topologicallySortedPlugins, IList1 pluginDescriptors) at Gallio.Runtime.Extensibility.PluginCatalog.ApplyTo(IRegistry registry) at Gallio.Runtime.DefaultRuntime.RegisterLoadedPlugins() at Gallio.Runtime.DefaultRuntime.Initialize() --- End of inner exception stack trace --- at Gallio.Runtime.DefaultRuntime.Initialize() at Gallio.Runtime.RuntimeBootstrap.Initialize(RuntimeSetup setup, ILogger logger) at Gallio.Runtime.Loader.GallioLoaderBootstrap.SetupRuntime(String runtimePath) --- End of inner exception stack trace --- at Gallio.Loader.LoaderManager.LoaderImpl.SetupRuntime() at Gallio.Loader.Isolation.IsolatedEnvironmentManager.IsolatedInitializer.SetupRuntime() at Gallio.Loader.Isolation.IsolatedEnvironmentManager.IsolatedEnvironment.UnwrapException(Exception ex) at Gallio.Loader.Isolation.IsolatedEnvironmentManager.IsolatedEnvironment.SetupRuntime() at Gallio.Loader.SharedEnvironment.SharedEnvironmentManager.CreateSharedEnvironment() at Gallio.Loader.SharedEnvironment.SharedEnvironmentManager.GetSharedEnvironment() at Gallio.TDNetRunner.Core.LocalProxyTestRunner.CreateRemoteProxyTestRunner() at Gallio.TDNetRunner.Core.LocalProxyTestRunner.RunImpl(IFacadeTestListener testListener, String assemblyPath, String cref, FacadeOptions facadeOptions) at Gallio.TDNetRunner.Core.BaseProxyTestRunner.Run(IFacadeTestListener testListener, String assemblyPath, String cref, FacadeOptions facadeOptions) at Gallio.TDNetRunner.Core.BaseProxyTestRunner.Run(IFacadeTestListener testListener, String assemblyPath, String cref, FacadeOptions facadeOptions) at Gallio.TDNetRunner.GallioResidentTestRunner.Run(ITestListener testListener, String assemblyFile, String cref) at TestDriven.TestRunner.AdaptorTestRunner.Run(ITestListener testListener, ITraceListener traceListener, String assemblyPath, String testPath) at TestDriven.TestRunner.ThreadTestRunner.Runner.Run() 0 passed, 1 failed, 0 skipped, took 
4.01 seconds (MbUnit v3.3). I recently encountered a similar issue running Visual Studio 2012 on Windows 8 (32-bit) with JetBrains ReSharper 7.1.10000.900, GallioBundle <IP_ADDRESS> and TestDriven.NET 3.4.2808 installed. Assuming you are running a setup similar to mine above, I was able to resolve this situation by doing the following: Shut down all copies of Visual Studio. Uninstall ReSharper and TD.Net. Re-install Gallio and ReSharper first. Install the latest TestDriven (in my case 3.4.2808 Personal). Start Visual Studio and right-click to run tests.
Angular is easy — Part 1: the theory Angular today: a Progressive Web App in 2 minutes npm install @angular/cli -g ng new myapp --routing ng add @angular/pwa ng build --prod That's it. You have a real web app, working offline. You also have out of the box (non-exhaustive list): - a full development environment, without configuration needed, - an ultra optimized production bundle, without configuration needed, - an e2e & unit tests environment, without configuration needed, - all you need to create an app: components, dependency injection, routing, AJAX, state management, and so on, all in an official and homogeneous API. ng add @angular/material and you now have official and ready-to-use UI components. Still think Angular is too complex? Let's be honest: there were some issues contributing to this idea. As Google chose to keep the same name from version 1, we first had to speak about "Angular 2" to distinguish versions. So now, many people don't understand why Angular 6 is already released. But wait: what is the version of React? 16. Does anyone worry about this? No. It's just how semantic versioning works. So now it's just Angular, and upgrades from Angular 2 to 4, and from Angular 4 to 5, are ultra smooth and usually take just a few minutes, as in any other framework. And since Angular 6, migrations are fully automated. Other frameworks are lying to you The idea that other frameworks are easier than Angular is built on a false promise: they just show you a small part of the whole picture. But you won't get a real app with only what they present to you first. Angular is honest: it includes everything you'll need to build an app. And nothing is superfluous. Vue.js is the best example of this. It's presented as super easy, as if you would just have to do this: Great. Is one component, or even several components, an app? No. You'll need routing, dependency injection, AJAX and so on.
As an app can’t tolerate any error (or it will crash), you’ll also need a tool to avoid errors (like TypeScript). You’ll also have hundred of components, so you’ll have to organize with oriented object programmation. So let’s look at a real world Vue.js example: Looks familiar? Yes, it’s exactly like Angular. In fact, Angular syntax is a little easier. Same goes for React: Is it an app? No. Do you think Facebook is built with React only? No. You’ll need routing, dependency injection, AJAX and so on. And contrary to Vue.js and Angular, they are not included. You’ll have to find other libraries by yourself. You’ll also need state management. And here goes Redux. At that point, React is not easy anymore. In fact, it’s far more complex to manage this problem in React than in Angular. And you’ll have to put together and configure all of this by yourself. So, definitively : no, React is not easier than Angular. Modern and efficient tools in Angular In this real world perspective, I’m even telling that Angular is easier than other frameworks. Why? Because of the modern tooling choices Angular has made: ES6+, TypeScript and RxJS. These tools are all here for a good reason. Welcome to PHP, Java and C# developers Finally, the most important point: because of these tools, Angular is very similar to Java, C# or OOP PHP: It is really easier for PHP / Java / C# developers to switch to Angular than to switch to React. And it’s just the beginning: as software is switching to web, and as more and more websites and native apps are switching to Progressive Web Apps, there will be a huge wave of developers coming from another world. One advantage of other frameworks like React or Vue.js is when you don’t have to create a full app but just some components. Warning and conclusion Like any post looking like “Framework A vs. framework B”, I suppose it will provoke an intense debate and strong reactions from people who prefer another framework, but I won’t answser to such comments. 
This post is not here to tell "X is better than Y". Each tool has a different approach with advantages and disadvantages. Each person contributing to any project does a great job. That's not the point. For a practical demonstration, I've written a second post, "Angular is easy — Part 2: the demonstration", where I show how to build a Progressive Web App with Angular in only 5 minutes and 10 steps.
Use case: dask + redis + progress reporting I have a couple of DAG workflows in dask. These workflows are triggered via user interactions through a web API. I'd like to achieve the following: The web API puts the workflow on a redis queue (RabbitMQ would be nice too, but redis is easier to start with). Run workers the way most job queue frameworks do (rq https://github.com/nvie/rq, arq https://github.com/samuelcolvin/arq, huey https://github.com/coleifer/huey). Store the progress and current status of the workflows (with MultiProgress https://github.com/dask/distributed/blob/master/distributed/diagnostics/progress.py#L137 I guess) as key/values in redis. What would be the correct approach to implement this? What do you do currently? Currently nothing :) I await your advice. Have a client on the same machine as the scheduler? Have the client process read from the redis queue and take whatever action you deem appropriate? Honestly I'm not sure how much help I can be here without diving into your particular problem more deeply (which is probably not likely to occur).
If you have more particular questions about how things work, or thoughts on what features you would need, then please let me know. The part I'm least aware of is the progress monitoring: How can I subscribe to futures prefixed by a key? Should I instantiate a scheduler plugin (like progressbar) per task submission, or per client? Should I prefer long-living Clients? How can I nicely handle (non-critical) task failures? (The frameworks mentioned above put failed tasks on a separate queue.) Minimal error handling would be nice in streams. Actually I would like to use redis as a resilient/persistent layer just before distributed, in case the scheduler dies, there is a network error, etc. If you're looking for a cross-client subscription mechanism then maybe use channels? There are more opportunities for cross-client communication; channels aren't perfect but they can often get the job done. Of course if you can solve your problem without multiple clients then this is probably better. I tend to create a single scheduler plugin. I haven't yet found an application where I needed to create one per client. You may have something else in mind though. Starting a client does take some milliseconds. I guess that this would depend on your performance needs. Yeah, I agree that streams should handle error handling and possibly stop signals. This is on a TODO list somewhere :) How do you think streams should handle exceptions? I really don't know, but there are practices: http://streamparse.readthedocs.io/en/stable/topologies.html#dealing-with-errors
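The approach suggested here, a long-lived client process next to the scheduler that pops workflows off the queue, submits them, and writes status keys, can be sketched with stdlib stand-ins: a queue.Queue in place of the Redis list and a dict in place of the Redis key/value store, so the shape of the loop is visible without a running cluster. All names are illustrative, not dask or redis API:

```python
import queue
from concurrent.futures import ThreadPoolExecutor

job_queue = queue.Queue()   # stand-in for a Redis list (LPUSH / BRPOP)
status_store = {}           # stand-in for Redis status keys

def run_workflow(job_id, n_tasks):
    # Stand-in for executing a dask graph; report progress per completed task.
    for i in range(n_tasks):
        status_store[f"workflow:{job_id}:progress"] = (i + 1) / n_tasks
    status_store[f"workflow:{job_id}:state"] = "finished"

def worker_loop(executor):
    # The long-lived client: pop jobs and submit them until told to stop.
    while True:
        job = job_queue.get()
        if job is None:     # sentinel value: shut the loop down
            break
        job_id, n_tasks = job
        status_store[f"workflow:{job_id}:state"] = "running"
        executor.submit(run_workflow, job_id, n_tasks).result()

job_queue.put(("abc123", 4))
job_queue.put(None)
with ThreadPoolExecutor(max_workers=1) as ex:
    worker_loop(ex)

print(status_store["workflow:abc123:state"])
print(status_store["workflow:abc123:progress"])
```

In the real setup the `executor.submit` call would be a dask `client.submit`/`client.get`, and the dict writes would become Redis `SET` calls, possibly driven by a scheduler plugin such as MultiProgress rather than by the workflow itself.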
13th October 2010, 14:59 #1 Help in installing prerequisites for IE 9 beta I was trying to install IE9 on my PC using Windows Ultimate (32 bit). This requires 3 updates to be installed before going ahead, so I downloaded these files (version x86 from ) and when I start the installation a small window appears which says "searching for updates on this computer...". This process continues indefinitely with no update installed at all, and moreover it cannot be cancelled until the PC is shut down. Please help me install the prerequisites. 13th October 2010, 15:43 #2 What operating system are you using? What is "Windows Ultimate (32 bit)", Vista or Windows 7? How did you try to install IE9? Did you download the FULL installer? I also had problems with IE9, but the FULL installer worked, so try that, as it contains all needed upgrades. IE9-beta Vista-32: Download details: Windows Internet Explorer 9 Beta for Windows Vista and Windows Server 2008 IE9-beta W7-32: Download details: Windows Internet Explorer 9 Beta for Windows 7 Microsoft Download Center, a search for IE9: Microsoft Download Center: Search Results 13th October 2010, 17:14 #3 The IE9 beta installer will automatically identify the required updates and install them before proceeding with its installation. Troubleshoot: Internet Explorer 9 Beta installation problems | The Windows Club may interest you. 13th October 2010, 19:58 #4 Yes Andy, that's correct. However, that didn't work for me, so I downloaded the FULL installer. There are 2 installers: the "MINI" is 2.3MB, the "FULL" is 18.9MB. If you run the MINI, it downloads all the rest that is needed; the FULL is a standalone/complete package. 14th October 2010, 01:11 #5 I am using Windows 7 (32 bit). Even after the Full installer it asks to install the updates manually, and those prerequisites are not installed at all. Please help. 14th October 2010, 06:00 #6 Hello and Welcome!
Go through this article and apply all the prerequisites (Prerequisites for installing Internet Explorer 9 Beta), then try to install IE9 Beta. Hope this helps. 14th October 2010, 07:52 #7 Please don't get angry if I say it is still not installed. 14th October 2010, 08:01 #8 Hi... if you are still not able to install it, first download a fresh installer and try to install it offline. Here is the full guide, with a detailed article explaining how you can do that... Read
Improve Productivity with the Humble ToDo List A week or two ago, I read Stephen Walther's blog post "Scrum in 5 Minutes", and reading his description of the backlog reminded me of a practice that I've been getting a lot of mileage out of lately. My practice, inspired by Kent Beck in his book Test Driven Development By Example, is to keep a simple To-Do list of small development tasks as I work. The parallels here are rather striking if you omit the portions of Scrum that have to do with collaboration and those types of logistics. When starting on a task, I think of the first few things that I'll need to do, and those go on the list. I prioritize them by putting the most important (usually the ones that will block progress on anything else) at the top, but I don't really spend a lot of time on this, opting to revise or refine it if and when I need to. Any new item on the list is yellow, and when done, I turn it green. There are no intermediate states and there is no going back. If I have something like "create mortgage calculator class" and I turn it green when I'm happy with the class, I don't later turn it back to yellow or some other color if the mortgage calculator needs to change. That instead becomes a new task. Generally speaking, I try to limit the number of yellow tasks I have (in kind of a nod to Kanban's WIP limits), though I don't have a hard-and-fast rule for this. I just find that my focus gets cluttered when there are too many outstanding tasks. If I find that a yellow item is taking me a long time, I will delete that item and replace it with several components of it. The aim is always to have my list be a series of tasks that take 5-15 minutes to complete (though they can be less). Items are added both methodically, to complete the task, and as reminders of things that occur to me when I'm doing something else.
For example, if I fire up the application to verify a piece of integration that involves a series of steps and I notice that a button is the wrong color, I won't drop everything and sidetrack myself by changing the button. I'll add it to my queue; I don't want to worry about it now, but I don't want to forget about it either. I never actually decided on any of these 'rules'. They all kind of evolved through an evolutionary process where I kept practices that seemed to help me and dropped ones that didn't. There will probably be more refinement, but this process is really helping me. So, What Are the Benefits? Here is a list of benefits that I see, in no particular order: - Forces you to break a problem into manageable pieces (which usually simplifies it for you). - Helps prevent inadvertent procrastination because a task seems daunting. - Encourages productivity with fast feedback and "wins". - Prevents you from forgetting things. - Extrapolated estimation is easier since you're tracking your work at a more granular level. - Helps you explain sources of complexity later if someone needs to know why you were delayed. - Mitigates interruptions (not as much "alright, what on Earth was I doing?"). Your mileage may vary here, and you might have a better process for all I know (and if you do, please share it!). But I've found this helpful enough that I thought I'd throw it out there in case it helps anyone else too.
Integrate docker reference changes Allows having other parsers which are capable of unambiguously keeping hostname and name separated in a Reference type. This is intended to address comments made for #1777 and lay the foundation for upstreaming changes from docker/docker. Leaving this as WIP so some of those changes can be included. [Current coverage][cc-pull] is 54.72% Merging [#1778][cc-pull] into [master][cc-base-branch] will decrease coverage by 6.89% 3 files (not in diff) in ...istry/storage/driver were modified. more Misses +807 Partials -92 Hits -715 2 files (not in diff) in registry/storage were modified. more Misses +3 Partials +2 Hits -5 2 files (not in diff) in registry/client were modified. more Misses +6 Hits -6 2 files (not in diff) in registry were modified. more Misses +3 Partials +1 Hits -4 File ...ference/reference.go was modified. more Misses +1 Partials 0 Hits -1 @@ master #1778 diff @@ ========================================== Files 120 120 Lines 10455 10490 +35 Methods 0 0 Messages 0 0 Branches 0 0 ========================================== - Hits 6443 5741 -702 - Misses 3417 4244 +827 + Partials 595 505 -90 [][cc-pull] Powered by Codecov. Last updated by [75882f0...7e5543b][cc-compare] [cc-base-branch]: https://codecov.io/gh/docker/distribution/branch/master?src=pr [cc-compare]: https://codecov.io/gh/docker/distribution/compare/75882f079c9a2160af5df26c7b3a429f912b0b1d...7e5543bbe58870ce371d2b285c6793db3ab05afc [cc-pull]: https://codecov.io/gh/docker/distribution/pull/1778?src=pr @dmcgowan This is looking okay. Are there a few demonstrative test cases? For example, does the case from #1777 now pass? @stevvooe adding the downstream functionality now. The test case from #1777 must always fail unless we change the grammar. However, by adding the downstream parsers it should be supported and able to be parsed in an unambiguous way. Added a commit with the functionality needed to replace github.com/docker/docker/reference.
Another pass on the interface and naming should get us close to what we want. Also considering adding an interface which has methods for Hostname() and Path() (or whatever the correct name for "Path" is). This would allow some simplification of methods to split out the values and make the functionality less reliant on type-asserting an unexported type. Ping @stevvooe @aaronlehmann for feedback. Discussed more with Stephen and came to a few decisions:
- Get rid of the normalizer interface; only one implementation of normalize. Normalize takes any name and produces the longest form, i.e. redis -> docker.io/library/redis.
- Introduce a FamiliarName function which always produces a shortened version for UI, called familiar because this is intended to be the name that is familiar to users, not always shortened.
- Rename Hostname to Authority, as the first component of the Name may be a DNS name and does not necessarily represent a host or the hostname of a registry. Rather, the first component of a name is intended to represent the authority for that name for trust, authentication, and registry host resolution.
- Add support for fully resolving string references which may be an ID identifier. When just an identifier is found with no name, a digested type with no name will be returned. Also have a version which could take in a shortened identifier and a digest.Set and return a reference.
- Refer to RemoteName as Path. A Name is then always made up of an Authority and a Path.
- Add all existing helper functions used from docker/docker and docker/engine-api to guarantee that the reference package does not get forked.
I am going to start implementing these changes; we can discuss their individual merit while I get them implemented. Ping @aaronlehmann @RichardScothern I like most of these changes. Rename Hostname to Authority as the first component of the Name may be a DNS name, it does not necessarily represent a host or the hostname of a registry.
Rather the first component of a name is intended to represent the authority for that name for trust, authentication, and registry host resolution.

I'm not sure I like the name Authority, but I'm not sure what would be a better choice.

Add support for fully resolving string references which may be an ID identifier. When just an identifier is found with no name, a digested type with no name will be returned

How will the grammar distinguish between IDs and names that look like IDs?

How will the grammar distinguish between IDs and names that look like IDs?

I am not sure we can have a top-level grammar to represent the rules for ID parsing as well as normalization. I would prefer to keep the grammar as is along with the existing Parse function and just add functions which can parse IDs and perform normalization.

@aaronlehmann Authority works here because it is non-committal to the actual role and differentiates the term from namespace, which is too overloaded.

How will the grammar distinguish between IDs and names that look like IDs?

The role of the grammar is to identify best-effort structure. If a name may be an ID, it's really up to the caller to run it through the ID filter before treating it as a name. The main general drawback to this approach is that a secure process requires knowing the entire ID set.

I would also consider Domain as an alternative to Authority; it has a similar meaning but is slightly more specific to its use.

Domain is fine, but it still has that DNS flavor...

PTAL, this now has additions to the grammar so I want everyone looking at it. Function naming still does not feel right (for example, NormalizedName is confusingly similar to ParseNamed). Also I added helpers from engine-api but may get rid of the SplitName helper; it really does not add anything useful nor save many lines of code. Plus I only see the original version used twice in engine-api.

Changed WithDefaultTag to EnsureTagged and dropped the last commit with changes from engine-api.
I agree those changes are too specific to the api and we don't want them widely used.

LGTM

LGTM

I suggest we hold off on merging this until we go through the exercise of porting Docker Engine to use it, to make sure we got the details right.

@aaronlehmann that is fair, I will link a branch of Docker with the integration soon, no rush on merging this

Made a PR in my fork for reviewing the changes https://github.com/dmcgowan/docker/pull/27. Noticeable changes:

- "Convenience" methods added to RepositoryInfo, which just call the corresponding reference function with the Named value.
- "WithDefaultTag" has different behavior than EnsureTagged. EnsureTagged always adds a tag if not tagged, while WithDefaultTag would not add the tag if the type was canonical. I prefer the more explicit change to only call EnsureTagged on a named-only reference, but if that is not the consensus we can update the behavior.
- ParseAnyReference causes slightly different behavior than the function it was replacing since it does not explicitly return a Named type. While the return type is always Named or Digested, I am not sure that logic translates well; perhaps another helper function would be better here.

Added NormalizedNamed interface to allow referencing without ambiguity. Note that all underlying Named types implement the interface since even non-normalized types may be familiarized. Was thinking of also adding an IsNormalized function which just needs to ensure the domain is not empty. The caller still has the potential to call ParseNamed and type assert to NormalizedNamed, but I don't see that as being a problematic pattern since ParseNamed should only be called on fully normalized values. The docker update will only use ParseNormalizedNamed.

I like the addition of the NormalizedNamed interface. I think it's the right direction because without it, there's no way to distinguish a normalized reference by type.
For example, in the first attempt at porting the Docker Engine codebase to this unified version of the reference library, I believe push/pull functions expected normalized references, but since the types were reference.Named, it would be easy to accidentally pass another type of named reference.

Rebased

Added 2 helper functions to make it easy to integrate into Docker

Whoah, nice

LGTM

LGTM

I think it would be a nice simplification to merge Named and NamedRepository. Named doesn't ever appear to be backed by anything that doesn't implement NamedRepository. Formalizing the fact that Named exposes Domain and Path methods would simplify the code a bit, avoiding some type assertions that use these methods if available and otherwise do parsing. However, this would change the Named interface, and I think it's better not to do that yet. This could be a followup for after the docker and distribution reference packages are merged.
GITHUB_ARCHIVE
What is the reason that SD8 won't run on High Sierra? I'm on an older MacPro system and High Sierra is about as far as I want to go on this old hardware. Thanks…

ps – about the "support for iOS and Safari versions 13 and below" – does that apply only to iOS with Safari 13 and lower, or to macOS with Safari 13 and lower as well? Thanks…

We limit Script Debugger support for macOS versions to keep our testing manageable. We do not have the resources to test Script Debugger under more than the current macOS version - 1 at the time of release. We can provide a registration number for Script Debugger 7 which will run on High Sierra. Just contact me at email@example.com with your Script Debugger 8 order information and I'll get you all set up.

I was trying to find out what versions of macOS Script Debugger version 8 will actually run on, not just which versions it is supported on (perhaps these are the same). In other words, is there something in the version 8 app that will just flat out prevent it from running on older macOS versions? If it's the case that SD8 will indeed try to run on older macOS versions, and we are without support on those older macOS versions, is there any desire to have issues/bugs/questions reported to you or asked on the forum?

So, since Script Debugger 8 was released on May 3, 2021, when the current version of macOS was Big Sur, macOS 11.0, that would imply that Script Debugger is only supported on Catalina and newer macOS versions (so Catalina, Big Sur, Monterey and now Ventura), and Script Debugger 8 customers on anything older than macOS Catalina are not?

ps - I just looked at the requirements for SD8 and it says Mojave is required, so that's two versions back of Big Sur, not the 1 that Mark talked about above. So which is it – minus one or minus two?

The FAQ on our Buy page lists the system requirements: Script Debugger 8 requires macOS X 10.14 (Mojave) or later. Script Debugger 8 fully supports macOS X 11 (Big Sur) and runs natively on M1 Macs.
Script Debugger 8 will not permit itself to run on earlier versions of macOS X.

I'm just trying to understand what the software will work on and be supported on. In your first reply you say "current Mac OS version - 1 at time of release" (minus one, I assume); based on when SD8 was released, which was during Big Sur, that means Catalina to me, not Mojave, yet the system requirements say Mojave. So that would imply it's the current version of macOS when SD8 released minus two, right? And the requirements say "or later", so that means all macOS versions up through Ventura now that it was just released?

Yes, we will support the current version of macOS in our maintenance releases going forward until such time as Script Debugger 8 is superseded by Script Debugger 9. If/when Script Debugger 9 is released, maintenance work on Script Debugger 8 will cease. And before you ask, we have not established a schedule for the release of Script Debugger 9 at this time.
OPCFW_CODE
Tooltip anchor locations are incorrect for small GeoJSON polygons in 1.9.x

Checklist

[X] I've looked at the documentation to make sure the behavior isn't documented and expected.
[X] I'm sure this is an issue with Leaflet, not with my app or other dependencies (Angular, Cordova, React, etc.).
[X] I've searched through the current issues to make sure this hasn't been reported yet.
[X] I agree to follow the Code of Conduct that this project adheres to.

Steps to reproduce

This JSFiddle illustrates the problem well: https://jsfiddle.net/michaelthoreau/ekafvph8/6/

This issue seems to only affect GeoJSON polygons that are geographically small, regardless of pixel size when drawn.

Expected behavior

Both tooltips should appear centered on the polygons.

Current behavior

The smaller polygon's tooltip appears in the wrong position. This behaviour does not appear in 1.8.0.

Minimal example reproducing the issue

https://jsfiddle.net/michaelthoreau/ekafvph8/6/

Environment

Leaflet version: 1.9.3
Browser (with version): 108.0.5359.124 (Official Build) (arm64)
OS/Platform (with version): Mac OS 12.6

Any news about this?

We already have a PR (#8784) for this, but the PR first needs to be merged and then a release is necessary. So this will take a while.

When you say a while, do you mean weeks? months? We can't use Leaflet for our new project due to this...

You can use Leaflet 1.8.0 until the fix is released. I can't say how fast the release will be done. I hope in the next 2 weeks, but it can also take 2 months.

We can't use Leaflet for our new project due to this...

That's your problem, not ours.
Go read the "NO WARRANTIES" bit of the license file, and watch https://www.youtube.com/watch?v=Q187JGeXueA . Pressuring maintainers by means of inducing guilt is not productive: it only burns maintainers out.

Sorry, I really didn't mean that at all 😥! I really appreciate the time/effort/quality of the work. I meant more "oh, that is a pity we can't use the latest version because of this problem", but really not in an aggressive or bad way. Really sorry if I expressed myself badly, as English is not my native language.

I was meaning more "oh that is a pity we can't use the latest version because of this problem" but really not in an aggressive or bad way.

No problem then ;-)

@Falke-Design Thanks for the pull request! I was suspicious of something like rounding, but didn't do a great job of debugging due to unfamiliarity with the project structure. Looking forward to the release but using 1.8.0 until then. BTW we frequently use 7+ decimal places to annotate small objects... works well so far, but hoping we don't run into precision issues if we go much smaller.

click here - this article may solve this issue

Issue fixed by #8784, will be in the next v1 release

This is affecting us too. Thanks for the fix! We're looking forward to the next release.

@foxymiles @michaelthoreau @Mushr0000m we just released 1.9.4 which includes the fix for this.

@Falke-Design Thanks for the update!
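The "something like rounding" suspicion mentioned above can be illustrated in isolation. This is a sketch of the suspected failure mode, not Leaflet's actual code: the projection function and scale factor below are made up, but they show how rounding projected coordinates to integer pixels shifts the computed anchor of a geographically tiny ring while leaving a large one essentially unaffected.

```javascript
// Hypothetical projection: degrees -> pixels at some fixed zoom factor.
const project = ([lat, lng], scale) => [lng * scale, lat * scale];

// Arithmetic centroid of a ring of [x, y] points.
function centroid(points) {
  const [sx, sy] = points.reduce(([a, b], [x, y]) => [a + x, b + y], [0, 0]);
  return [sx / points.length, sy / points.length];
}

// A polygon only ~0.0001 degrees across.
const ring = [[45.0000, -75.0000], [45.0001, -75.0000], [45.0001, -75.0001]];
const scale = 1 << 18; // pixels per degree at a deep zoom (assumption)

const exact = centroid(ring.map(p => project(p, scale)));
const rounded = centroid(ring.map(p => project(p, scale).map(Math.round)));

// Once each vertex is snapped to integer pixels, the centroid (and any
// tooltip anchored at it) no longer matches the true one.
console.log(exact, rounded);
```

At these sizes the per-vertex rounding error is a meaningful fraction of the polygon's own extent, which is why only geographically small polygons are affected.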
GITHUB_ARCHIVE
I am trying to calculate the distance between two zip codes. I have a dataset that contains two zip codes (customer zip & retail zip) as well as one dimension - sales. I'm writing an application that needs to find the distance between two zip codes and return only records that are within a certain radius. I have the formulas I need.

Related snippets on the topic:

- Finding distance between two addresses: you could do the ZIP code distance, and for the person who wants the exact mileage they could do it themselves using …
- Calculate the distance between two ZIP Codes. ZIP Code & Postal Code facts: ZIP Codes are largely responsible for the automation of the United States Post Office.
- Tableau Tip: Calculating the distance between … I had done this calculation in an Excel spreadsheet where you type a zip code in one box and it then calculates …
- Distance between zips calculator: look up a ZIP Code; Distance Between Zip Codes; Distance Calculator. To search for a ZIP Code to use in the Distance Calculator.
- References: st: Calculate distance between two zip codes.
- Dear Smartest Excelers In The World, I have a table with row headers as source zip code and column headers as destination zip codes and I would like to …
- Solved: All, I am trying to figure out the distance between doctors' office locations. I have zipcode and lat/long data.
- In this post you will find out how to calculate the distance between two places using postcodes or zipcodes and also display the travelling time in current traffic by …
- Calculate the distance between any two zip codes or … This tool will allow you to automate the calculation of mileage between any two zip codes or …
- I have a list of zip codes in Excel and I would like to see if there is a method to calculate the distance between two zip codes. I searched and tried out …
- Download Zipdy: Zip Code Distance Calculator for free. Zipdy is a program for calculating the distance between two zip codes and finding all the records in …
- I import information pertaining to a shipment. I have to calculate the driving distance between 2 zip codes. Currently I go to MapQuest, enter the two zip codes, then …
- Zip Codes by Distance: calculate the distance between any zip codes in the US.
- I am very new to using macros for Excel and I was wondering if there was a way to calculate the distance between two zip codes. The beginning zip code is in column A …
- ZipCodeAPI - A RedLine13 Service. Zip Code Distance, Radius and Location API. The easy way to calculate distances, radius, and locations for zip codes.
- I wanted to calculate the distance between two zip codes. Is there any particular function to find the distance? I used this code: Dataframe <- transform(dataframe …
- Calculating the distance between two Zip Codes in C#: there are many classes built into the .NET framework, but a zip code distance calculator isn't included.
- How do these websites (any "nearest you" site) check their databases and return the locations nearest you?
- Deliver a dealer locator/store locator, lead generator, marketing application, or call center solution with this source code to calculate distance between zip codes by …
- Hello all, can anyone point me to some doc/tutorial on how to find the distance between 2 zip codes using the Google Maps API? Thanks a lot.
- The ZIPCITYDISTANCE function returns the geodetic distance in miles between two ZIP code locations. The centroid of each ZIP code is used in the calculation.
- Calculate driving distances between cities based on actual turn-by-turn directions … Driving Distance Calculator … countries, or zip codes to figure out the best route …
- Does anyone know how, via VBA or an advanced formula/function in Excel, to calculate the distance between two zip codes, given the latitude and …
- How to calculate the distance between two zip codes …
- Distance between ZIP Codes: test our free ZIP Code Distance Calculator and download our free scripts that are perfectly compatible with our ZIP Code databases.
- Calculating the distance between zip codes … There are references in the code where you can obtain approximate zip code … Calculate the distance between -90 …
- How do I find the distance between two locations in Excel using Google Maps?
- Find the driving distance between any US ZIP codes. Whether you're planning a trip or just curious, this simple tool was made to be an easy resource for you to find …
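Most of the tools quoted above work the same way: resolve each ZIP code to the latitude/longitude of its centroid (as the ZIPCITYDISTANCE snippet notes), then apply a great-circle formula. A minimal sketch using the haversine formula; the coordinates below are illustrative values, not taken from an authoritative ZIP code database.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in statute miles between two lat/lon pairs."""
    r = 3958.8  # mean Earth radius in miles
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * r * asin(sqrt(a))

# Roughly Manhattan (10001) to central Los Angeles (90012) -- centroid
# coordinates here are approximate, for illustration only:
print(round(haversine_miles(40.7506, -73.9972, 34.0614, -118.2385)))
```

For a radius search ("return only records within X miles"), filter the dataset by `haversine_miles(...) <= X`; note this is straight-line distance, not driving distance.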
OPCFW_CODE
Program will ask you to restart; allow it to do so. My communications to scan mouse have been unanswered. All are on the same hardware, so it's very odd that not everyone is impacted. Certainly, this issue may occur due to lots of reasons, and the solution varies in different situations. However, I wish I could say I am surprised. Run it as Administrator and press Y if it asks whether you want to continue. Rarely do I get through the slide-style presentations without the program presentation stopping altogether. All of the Gmail access logins and passwords keep popping up. We haven't found a group policy or registry fix for this yet. I am running Windows 7 Professional 64-bit on an Intel i7 X58 platform; this problem just started today and I have done nothing I can think of to have caused this myself. If the issue persists, I suspect the Outlook credentials may have been corrupted; please do the following: 1. I would use something like the free Avast instead. I tried the solution to no avail. The problem is the pre-loaded Office applications seem to have been moved to the App store. What I have found is, for whatever reason, every Dell I am presented with recently has Office pre-loaded as a Microsoft Store app. Just using Outlook to connect to an Exchange account, but have the same problem. No solution, just a temporary work-around. I contacted Dell Support and was transferred to Microsoft Support, and 50 minutes later finally landed in the Microsoft Commercial Technical Support Team. After a password change it may take a while before our hybrid setup catches on completely, but that's usually solved in extreme cases inside 3 hrs. In the Run dialog box, type Netplwiz and then press the Enter key. I have tried deleting the saved credentials in the credential manager but that did not work. Messy Windows Context Menus, and How to Clean Them Up: one of the most irritating things about Windows is the context menu clutter that you have to deal with once you install a bunch of applications.
You do not want to post it. If you have feedback for TechNet Subscriber Support, contact. To turn it off, go to Control Panel and click on Mail. Just keep in mind that the keyboard is now completely disabled. Step 4: Scan with Malwarebytes Anti-Rootkit. Please download and save it to your desktop. Thanks for sharing this background! Have a nice day ahead. Before I could do that, the users reported the issue was solved. If it does not work, you can try to add this registry entry. The last few days some of our end users are getting a Windows Security pop-up asking them to enter their password. You will get slammed by spam bot emails. I then immediately tried to go back to Outlook and recreate the account it had failed on not 10 minutes prior - and it worked. At first accepting my credentials, then not at all. If anyone has an idea of what is happening, please let me know. This has been an on-going issue for me ever since I installed Outlook 2016 on my Windows 10 laptop over a year ago now. You're still experiencing this issue? I have tried 'almost' everything. Mozilla Firefox users are recommending AdBlockPlus as a general blocking tool. Thank you Ethan, thanks for the response. For us, and I guess for Nicabus too, the issue was solved in August. How can I get it to stop popping up? Final status: we've completed the validation process and have confirmed that the issue is resolved. I changed the password to the account and I am still receiving the pop-up. I have this problem also, though not always with the flashing. You may want to do it while you are not sleeping, in case a dialog box pops up asking you a question. It should also give you the tools you need to block individual sites. Users are always prompted to provide a user name and password. Checking with our users, the issue seems fixed.
OPCFW_CODE
SQL Query gets a lot of timeouts

I have a large database table (SQL Server 2008) where I have all my forum messages being stored (the table currently has more than 4.5 million entries). This is the table schema:

CREATE TABLE [dbo].[ForumMessage](
    [MessageId] [int] IDENTITY(1,1) NOT FOR REPLICATION NOT NULL,
    [ForumId] [int] NOT NULL,
    [MemberId] [int] NOT NULL,
    [Type] [tinyint] NOT NULL,
    [Status] [tinyint] NOT NULL,
    [Subject] [nvarchar](500) NOT NULL,
    [Body] [text] NOT NULL,
    [Posted] [datetime] NOT NULL,
    [Confirmed] [datetime] NULL,
    [ReplyToMessage] [int] NOT NULL,
    [TotalAnswers] [int] NOT NULL,
    [AvgRateing] [decimal](18, 2) NOT NULL,
    [TotalRated] [int] NOT NULL,
    [ReadCounter] [int] NOT NULL,
    CONSTRAINT [PK_GroupMessage] PRIMARY KEY CLUSTERED
    (
        [MessageId] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]

One issue that I see keep coming back is that when I run my stored procedure that selects a message and all its replies, I sometimes get time-out errors from the SQL Server. This is my stored procedure:

select fm1.[MessageId]
    ,fm1.[ForumId]
    ,fm1.[MemberId]
    ,fm1.[Type]
    ,fm1.[Status]
    ,fm1.[Subject]
    ,fm1.[Body]
    ,fm1.[Posted]
    ,fm1.[Confirmed]
    ,fm1.[ReplyToMessage]
    ,fm1.[TotalAnswers]
    ,fm1.[AvgRateing]
    ,fm1.[TotalRated]
    ,fm1.[ReadCounter]
    ,Member.NickName AS MemberNickName
    ,Forum.Name as ForumName
from ForumMessage fm1
LEFT OUTER JOIN Member ON fm1.MemberId = Member.MemberId
INNER JOIN Forum On fm1.ForumId = Forum.ForumId
where MessageId = @MessageId or ReplyToMessage = @MessageId
order by MessageId

The error that I get looks like this: "Timeout expired.
The timeout period elapsed prior to completion of the operation or the server is not responding."

I was looking at the execution plan, and the only thing that looks suspicious is that the query has a cost of about 75%-87% (it varies) on the key lookup in the ForumMessage table (which I don't understand, because I set it up as clustered, so I was hoping it would be much more efficient). I was always under the assumption that when you search on a clustered index, the query should be very efficient. Does anyone have any idea how I can improve this issue and this query to get a message and its replies? Thanks.

Can you include the execution plan in your question?

Why the left join, don't all messages have to be from members?

Create an index on ReplyToMessage:

CREATE INDEX IX_ForumMessage_ReplyToMessage ON ForumMessage (ReplyToMessage)

This will most probably result in two index seeks (over the PRIMARY KEY on MessageId and over the index on ReplyToMessage) concatenated with a merge or hash concatenation, rather than the full table scan you are having now.

@OrA: this is not obvious from your script. Please post the query plan: run SET SHOWPLAN_TEXT ON then run your query in the same session.

I was actually running your script and for whatever reason, it took the 87% down to 14%!!! That's much better! Should I try to take it even lower, or is that a "respectable" number? Also, now it shows me that the clustered index scan on the Member table (which has a clustered index on the MemberId column, which is also identity) takes 37%. Is there any way to improve that too?

@ora: could we please start with posting the query plan? My first comment explains how to do that.

Why are you doing ORDER BY MessageId? Is the ordering really necessary?

Try to refactor your SELECT to SELECT FROM Forum, then join Member, and finally LEFT JOIN ForumMessage.
So order the tables from small to large.

Yes, I must return them in the right order to make sure that the data is parsed correctly.

I'm just wondering how you are using a message id; I believe any logic should NOT refer to this Id.

Two suggestions come to my mind:

- Remove the ugly OR and add a UNION for the condition (code below)
- You must have a non-clustered index on ReplyToMessage

As a last resort, create a non-clustered index and put MessageId AND ReplyToMessage in there. (See my answer to another question here: Why does this Sql Statement (with 2 table joins) takes 5 mins to complete?)

CODE:

select fm1.[MessageId]
    ,fm1.[ForumId]
    ,fm1.[MemberId]
    ,fm1.[Type]
    ,fm1.[Status]
    ,fm1.[Subject]
    ,fm1.[Body]
    ,fm1.[Posted]
    ,fm1.[Confirmed]
    ,fm1.[ReplyToMessage]
    ,fm1.[TotalAnswers]
    ,fm1.[AvgRateing]
    ,fm1.[TotalRated]
    ,fm1.[ReadCounter]
    ,Member.NickName AS MemberNickName
    ,Forum.Name as ForumName
from ForumMessage fm1
LEFT OUTER JOIN Member ON fm1.MemberId = Member.MemberId
INNER JOIN Forum On fm1.ForumId = Forum.ForumId
where MessageId = @MessageId
UNION
select fm1.[MessageId]
    ,fm1.[ForumId]
    ,fm1.[MemberId]
    ,fm1.[Type]
    ,fm1.[Status]
    ,fm1.[Subject]
    ,fm1.[Body]
    ,fm1.[Posted]
    ,fm1.[Confirmed]
    ,fm1.[ReplyToMessage]
    ,fm1.[TotalAnswers]
    ,fm1.[AvgRateing]
    ,fm1.[TotalRated]
    ,fm1.[ReadCounter]
    ,Member.NickName AS MemberNickName
    ,Forum.Name as ForumName
from ForumMessage fm1
LEFT OUTER JOIN Member ON fm1.MemberId = Member.MemberId
INNER JOIN Forum On fm1.ForumId = Forum.ForumId
where ReplyToMessage = @MessageId
order by MessageId

Why is the OR clause so "ugly"?

Why do you find 'OR' ugly? It seems a perfectly standard approach to me, am I wrong? Really interesting, I always use the same 'OR' for such simple constraints. BTW, your query will obviously perform the table scan twice because of the two SELECTs!

I actually had the union before and the footprint was even worse. Are you sure the union would be better here?
I'm also getting (when using the union): 'The text data type cannot be selected as DISTINCT because it is not comparable.'

Avoid OR like the plague! There are limited cases where we have to use it, and there are other cases where using OR is computationally acceptable, but generally try avoiding it.

@OrA do you have a non-clustered index on ReplyToMessage?

@Aliostad: the OP's query is one of the limited cases when using OR is fine.

@Quassnoi I agree, although I would have to run it to see the difference.

Depending on the version of MS SQL Server you're running, you could also try recreating the table using partitioned tables to enhance SELECT performance.

If I add partitions, would this impact the performance of retrieving old messages?

It shouldn't, but it really depends on your partitioning scheme. Remember that you'll need to rebuild that table. If you look at the documentation from MSDN, I believe they show dropping any constraints on the table, renaming the table, creating the new partitioned table, then moving the data over, then recreating the constraints, then finally dropping the old table.
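Regarding the 'text data type cannot be selected as DISTINCT' error: UNION implies a DISTINCT over the combined rows, and SQL Server cannot compare text columns, so the UNION fails. UNION ALL skips the duplicate-elimination step entirely. A sketch of the rewrite (column list abbreviated; assuming no row ever has ReplyToMessage equal to its own MessageId, the two branches are disjoint and the combined result is unchanged):

```sql
SELECT fm1.[MessageId], fm1.[Subject], fm1.[Body],   -- remaining columns as in the original query
       Member.NickName AS MemberNickName, Forum.Name AS ForumName
FROM ForumMessage fm1
LEFT OUTER JOIN Member ON fm1.MemberId = Member.MemberId
INNER JOIN Forum ON fm1.ForumId = Forum.ForumId
WHERE fm1.MessageId = @MessageId
UNION ALL
SELECT fm1.[MessageId], fm1.[Subject], fm1.[Body],
       Member.NickName AS MemberNickName, Forum.Name AS ForumName
FROM ForumMessage fm1
LEFT OUTER JOIN Member ON fm1.MemberId = Member.MemberId
INNER JOIN Forum ON fm1.ForumId = Forum.ForumId
WHERE fm1.ReplyToMessage = @MessageId
ORDER BY MessageId;
```

UNION ALL is also cheaper than UNION, since no sort or hash is needed to remove duplicates.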
STACK_EXCHANGE
We have a couple of Windows 7 development machines and decided to upgrade one of them to Windows 10 on our KVM host. Our KVM host is currently running Linux CentOS 6.x. It turns out it's not simple and straightforward to upgrade Windows 7 to Windows 10 in Linux KVM.

First, it's advised to use the Windows Media Creation Tools to upgrade a Windows 7 machine to Windows 10 instead of the pop-up that appears on your taskbar. Earlier, we ran the upgrade from the taskbar; it didn't return an error and literally just hung there. After we tried upgrading using the Windows Media Creation Tools, we found that the upgrade was stuck with a SAFE_OS error.

Besides the SAFE_OS error, we also saw a CompareExchange128 error message. According to some forums we've been googling, it has something to do with the processor or CPU setting in the Linux KVM qemu configuration.

To fix all the error messages above and make sure the KVM guest is able to upgrade to Windows 10 (from Windows 7 in our example), here are the settings that need to be added to the KVM guest XML file.

Next, we will make changes to our KVM guest XML settings with the command below, with our Windows 10 guest KVM named windows10:

# virsh edit windows10

Next, we need to define a custom CPU setting for the KVM guest by adding the lines below:

<cpu mode='custom' match='exact'>
    <model fallback='allow'>kvm64</model>
    <feature policy='require' name='nx'/>
    <feature policy='require' name='lahf_lm'/>
</cpu>

The lines above emulate the processor as kvm64 (a KVM 64-bit processor) and require the nx and lahf_lm features.

Where to add the CPU tweak? Refer to our partial config file for the KVM guest.
<features>
    <acpi/>
    <apic/>
    <pae/>
</features>
<cpu mode='custom' match='exact'>
    <model fallback='allow'>kvm64</model>
    <feature policy='require' name='nx'/>
    <feature policy='require' name='lahf_lm'/>
</cpu>
<clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
</clock>

After that, start your current Windows 7 KVM guest and get ready for the upgrade with the Windows 10 Media Creation Tools. Follow the step-by-step prompts from the Windows 10 Media Creation Tools; it should be able to perform the upgrade and show the screen below.

The KVM CPU tweak settings also worked with the Windows 8.1 upgrade. We didn't remove the CPU setting and kept it as-is for the time being; we will update again if we find it necessary to undo the CPU tweak on KVM in order to have the systems run in a reliable state.
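For what it's worth, the CompareExchange128 check corresponds to the CMPXCHG16B instruction, which shows up as the cx16 flag in /proc/cpuinfo on Linux. Before applying the CPU stanza above, one way to sanity-check what the host CPU actually exposes is the sketch below (an assumption: flag names in /proc/cpuinfo match what the guest needs passed through):

```shell
#!/bin/sh
# Check whether the host CPU advertises the flags relevant to the
# Windows 10 upgrade: cx16 backs CompareExchange128, while nx and
# lahf_lm are the two features the <cpu> stanza above requires.
for flag in cx16 nx lahf_lm; do
    if grep -qw "$flag" /proc/cpuinfo; then
        echo "$flag: present"
    else
        echo "$flag: MISSING"
    fi
done
```

If a flag is missing on the host itself, no guest CPU model can supply it; if it is present but the guest still fails, the guest CPU model is likely masking it, which is what the custom kvm64 stanza works around.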
OPCFW_CODE
CpSc 433, Data Security and Encryption Techniques Midterm Exam Review The only thing you won't need to know from Chapter 1 is the specifics of the X.800 Security Architecture (i.e., don't bother memorizing Tables 1.4-1.6), but read Section 1.2 and the tables at least once and make sure that you understand them. Make sure that you're familiar with each of the kinds of attack listed in Table 1.2. You need to know the symmetric cipher model of Section 2.1 and be familiar with the notation. Know the kinds of cryptanalytic attacks listed in Table 2.1, and why, for example, a chosen plaintext attack might yield more information than one using known plaintext. Understand the difference between being unconditionally and computationally secure. You do not need to know the details of the "classical" techniques such as the Caesar Cipher or Rotor Machines, but you do need to understand the cryptanalytic attacks based on relative frequency described on pp. 33-37. Know the One-Time Pad technique (pp. 43-44) and its advantages and disadvantages. You do not need to know the details of the "Simplified DES" algorithm in Section 3.1 unless it helps you to understand the regular DES algorithm. Know the entire contents of Section 3.2, including the difference between stream and block ciphers and the techniques of diffusion and confusion. Know the structure of the Feistel Cipher, and be able to list the parameters that might vary between encryption algorithms (p. 69). Understand the structure of DES (Figure 3.7) and be able to describe its external features (e.g., key size, plaintext block size, number of rounds). Make sure that you understand the Avalanche Effect in general, not just for DES. Read Section 3.4 completely, understand why DES is obsolete, and understand the idea of a timing attack. Read Section 3.5 and make sure that you understand the basic ideas of Differential and Linear Cryptanalysis. 
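The One-Time Pad mentioned above (pp. 43-44) is simple enough to sketch in a few lines: XOR the plaintext with a truly random pad of the same length, and XOR again with the same pad to recover it. This is an illustration for intuition only; reusing a pad, or generating it pseudo-randomly, destroys the unconditional security that is the OTP's whole advantage.

```python
import secrets

def otp(data: bytes, pad: bytes) -> bytes:
    """XOR data with a pad of equal length; the same call encrypts and decrypts."""
    assert len(pad) == len(data), "pad must be exactly as long as the message"
    return bytes(d ^ p for d, p in zip(data, pad))

message = b"ATTACK AT DAWN"
pad = secrets.token_bytes(len(message))  # must be truly random and never reused

ciphertext = otp(message, pad)
assert otp(ciphertext, pad) == message  # decryption is the same XOR
```

Note how this also illustrates the OTP's disadvantages from the text: the key is as long as the message, and both sides must already share it securely.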
Know that Differential Cryptanalysis is based on the idea that watching how changes to the plaintext affect the value of the ciphertext might allow you to make conclusions about the key, and that Linear Cryptanalysis attempts to find linear approximations to the transformations performed by the cipher. Know each of the design criteria described in Section 3.6, and be able to describe them. (For example, what makes for a good S-box?). Know each of the block cipher modes in Section 3.7 and their characteristics, specifically Table 3.6. Read Section 6.1, understand why the meet-in-the-middle attack on Double DES results in a cipher that is only slightly stronger than DES, and understand why Triple DES is not vulnerable to the same attack. Section 6.4 should be largely a review of parts of Section 3.2. Understand the possibilities for variation of the basic Feistel structure. For Section 7.2, be able to describe the ideas of traffic analysis and covert channels, and how traffic padding can make analysis more difficult. In Section 7.3, Understand the problems posed by key distribution, the options for delivering keys, the idea of session keys, and of a KDC and key hierarchy (pp. 211-214). Understand how the lifetime of a session key affects its security (p. 216). Understand the issues addressed in the key distribution scenarios on pp. 214-215 and 217-218, and what a nonce is used for. Make sure that you understand Section 9.1 completely, including both the encryption and authentication functions performed by public-private key pairs (e.g., know how Alice can be sure that only Bob reads her message, and how Bob knows the message came from Alice.) Understand how you can use public and private keys to exchange a session key, and why you might want to use a session key instead of relying solely on asymmetric encryption. Make sure that you are comfortable with the KR and KU notation used in this section. Know what a trap-door one-way function is. 
In Section 9.2, you really just need to understand that the RSA encryption algorithm consists of taking the plaintext to a power, that the decryption function is just taking the ciphertext to a different power, and that the exponents are chosen to be related in such a way that the two operations undo each other. Understand the requirements listed at the top of p. 269, and the possible avenues of attack described on pp. 274-278. Understand how the key distribution problem changes for public-key encryption, and how it stays the same (e.g., why a KDC still needs to be trusted). Understand how certificates work, and the role of the Certificate Authority (pp. 289-290). If you understood Section 7.3, the subsection "Public-Key Distribution of Secret Keys" (pp. 291-293) should be a straightforward application of public-key algorithms to the same problems. Diffie-Hellman Key Exchange is the one public-key algorithm straightforward enough that you should be able to memorize its derivation (bottom of p. 294, or Figure 10.7). Review the specific attacks on message integrity listed in Section 11.1. In Section 11.2, know the three types of authenticator function, and know the difference between an authenticator and an authentication protocol. Understand the requirements for cryptographic hash functions (p. 329). Understand the consequences of the Birthday Attack (pp. 332-333). Know the two most common cryptographic hash functions (Sections 12.1 and 12.2) and their external characteristics (i.e., the size of the hash values they produce). Be able to describe some practical uses for cryptographic hashes (e.g., comparing files, detecting file changes, authentication without storing passwords). You do not need to know the internals of the algorithms (i.e., the subsections "MD5 Logic" and "MD5 Compression Function" and the corresponding sections for SHA-1), nor do you need to know anything about MD4. You need to understand the idea of an HMAC (pp. 
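The "plaintext to a power, ciphertext to a different power" description of RSA can be made concrete with toy numbers. This is my own illustration using deliberately tiny primes; real moduli are at least 2048 bits.

```python
# Toy RSA with textbook-sized numbers (never use numbers this small).
p, q = 61, 53
n = p * q                      # public modulus: 3233
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, chosen coprime to phi
d = pow(e, -1, phi)            # private exponent: e*d = 1 (mod phi)
                               # (modular-inverse pow needs Python 3.8+)

def rsa_encrypt(m): return pow(m, e, n)   # c = m^e mod n
def rsa_decrypt(c): return pow(c, d, n)   # m = c^d mod n

m = 65                          # message must be smaller than n
c = rsa_encrypt(m)
assert rsa_decrypt(c) == m      # the related exponents undo each other
```

The security rests on the fact that recovering d from (e, n) requires factoring n into p and q, which is what makes modular exponentiation with a secret-related exponent a trap-door one-way function.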
372 and 373) and why you would need an HMAC instead of a hash function (e.g., to avoid man-in-the-middle attacks), but you do not need to know the implementation details. Read Section 13.1, understand the problems with digital signatures (e.g., repudiation), and how they can be addressed by using an arbiter (Table 13.1). Understand how public-key encryption can keep the arbiter from seeing the original message, but know the remaining functions for which the arbiter must still be trusted (p. 383). In Section 13.2, understand the replay attacks described there (p. 384) and the approaches for coping with them (p. 385). In Section 14.1, understand the problems that Kerberos is trying to solve (pp. 402-404). You may read the rest of Section 14.1 if you like, but your time would be better spent on "Designing an Authentication Scheme" and "The Moron's Guide to Kerberos."
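The HMAC idea (pp. 372-373) is available directly in most standard libraries. This short sketch (my own illustration, not from the text) shows why keying the hash matters:

```python
import hashlib
import hmac

key = b"shared-secret"
message = b"transfer $100 to Bob"

# A plain hash authenticates nothing: anyone who alters the message can
# simply recompute the hash, so it only detects accidental corruption.
digest = hashlib.sha256(message).hexdigest()

# An HMAC mixes a shared secret into the hash, so a man in the middle
# who tampers with the message cannot produce a matching tag.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

forged = hmac.new(b"wrong-key", message, hashlib.sha256).hexdigest()
assert tag != forged

# Compare tags in constant time to avoid leaking information via timing.
assert hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).hexdigest())
```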
Among the various tests it is capable of performing, there are a couple that are relevant to Digital Asset Manager (DAM) performance. Assuming that your current working directory contains toughday-5.5.jar, the following command will upload 1,000 112 KB 849px x 565px JPEG images to CQ DAM and report the elapsed time (about 4 minutes on a server with 8 CPU cores and solid state drives). java -Xmx2048m -Dhostname=yourcqserver.domain.com -Dport=4502 -DuploadImage.count=1000 -jar toughday-5.5.jar uploadImage Please note that the rendition workflows triggered by the ingestion of these images will last a lot longer than the reported elapsed time, especially if the server CPUs are under-powered. At least two CPU cores are needed for a valid test. Created images are NOT deleted after the test. They will be created under /content/dam/ with 50 images per folder with names such as /images1880. The following command will upload 1,000 556 KB PDF files to CQ DAM and report the elapsed time (about 3 minutes on a server with 8 CPU cores and solid state drives when Tough Day is run locally, about 6.5 minutes when Tough Day is run remotely on another machine, ping delay 1 ms, 1 hop): java -Xmx2048m -Dhostname=yourcqserver.domain.com -Dport=4502 -DuploadPdf.count=1000 -jar toughday-5.5.jar uploadPdf Created PDF documents are NOT deleted after the test. They will be created under /content/dam/ with 50 PDFs per folder with names such as /pdf7498. This operation is CPU-intensive (the original article shows a Windows Task Manager screenshot at this point). If you get the following WARNING message in error.log (reached maximum number of queued asynchronous events): *WARN* [18.104.22.168 POST /bin/wcmcommand HTTP/1.1] org.apache.jackrabbit.core.observation.ObservationDispatcher More than 200000 events in the queue java.lang.Exception: Stack Trace re-start CQ with the following additional JVM init argument in the start script: See this for more details. 
NOTE: The following command will run all of the “Tough Day” tests (for Web Content Management [WCM] as well as Digital Asset Management [DAM]): java -Xmx2048m -Dhostname=yourcqserver.domain.com -Dport=4502 -jar toughday-5.5.jar all On a server with 8 CPU cores and solid state drives (SSD), this complete test suite should finish in about 6 minutes (Tough Day running on same server - no network latency). With Tough Day running remotely (thus incurring network overhead), it took 23 minutes on a server with 4 CPU cores and mechanical hard disks (HDD). On a desktop with 2 CPU cores and mechanical hard disks (HDD), this took about 14 minutes (Tough Day running on same server - no network latency). Essentially, if this particular test takes longer than 30 minutes on your server hardware, you should identify and address potential performance issues before deployment to production.
Europium stability in +2 and +3 state I would like to ask a question about europium's stability in the $+2$ and $+3$ oxidation state. The electronic configuration of europium in its neutral state is $\ce{[Xe] (4f)^7 (6s)^2}$. Now, when in the $+2$ oxidation state, the electronic configuration is $\ce{[Xe] (4f)^7}$ and in the $+3$ oxidation state, it is $\ce{[Xe] (4f)^6}$. Now, I thought the $+2$ oxidation state is more stable because it's a half-filled $\ce{f}$ sub-shell so there is less mutual repulsion between electrons in the same sub-shell. However, an article from nature.com reads the following: Europium metal is now known to be highly reactive; the element's most stable oxidation state is +3, but the +2 state also occurs in solid-state compounds and water. Now, I would like to ask, how is it possible for europium to be most stable in its $+3$ oxidation state, knowing what I have mentioned above? We prefer to not use MathJax in the title field due to issues it gives rise to; see here for details. Somewhat related: Lanthanide property exceptions. It's worth noticing that the answers given there also use the same argumentation as you do, which I would also agree with. This is going to be a rehash of Oscar Lanzi's answer, but this point needs to be driven home, so I make no apologies. "Special" electron configurations - fully-filled or half-filled subshells - are only a relatively minor factor in determining the stability of oxidation states. I've written about this before in a slightly different context, but IMO it would be well worth reading this, as the exact same principles operate in this case: Cr(II) and Mn(III) - their oxidizing and reducing properties? What determines the "most stable oxidation state"? We can consider what happens when we increase the oxidation state of a metal: firstly, we need to pay ionisation energies to remove electrons from the metal. On the other hand, though, metal ions in higher oxidation states can form stronger bonds (e.g. 
lattice energy in ionic compounds, solvation energy in solution, or more covalent bonds in molecular compounds). The balance between these two factors therefore leads to a "best" oxidation state. For example, sodium could hypothetically go up to Na(II) and form $\ce{NaCl2}$. Theoretically, this compound would have a significantly larger lattice energy than $\ce{NaCl}$. However, sodium's second ionisation energy is prohibitively large, which means that it stays put in the +1 oxidation state. On the other hand, magnesium's second ionisation energy is not prohibitively large, and therefore Mg preferentially forms $\ce{MgCl2}$ over $\ce{MgCl}$. For all lanthanides, the first three ionisation energies are all fairly comparable. There is a graph here which shows the variation in the ionisation energies of the lanthanides, which I reproduce below. (source: Inorganic Chemistry 6ed, Weller et al., p 630) For all lanthanides, these three ionisation energies are easily compensated for by the extra lattice energy / solvation energy derived from a more highly charged ion. The fact that Eu(II) has a $\mathrm{f^7}$ configuration only serves to make its third IE marginally larger than that of Gd. This difference is sufficient to make Eu(II) an accessible oxidation state (cf. different electronic behaviour of EuO and GdO), but not sufficient to make it the most stable oxidation state. Note that "most stable oxidation state" depends on the conditions, too, but I assume we are talking about aqueous solution. Electron configuration is not the be-all and end-all of stability. It's just one term in the energy balance. Other factors like the electron affinity of the anion former (think of oxygen or fluorine versus sulfur or iodine), lattice or solvation energies, etc may override the electron configuration term; if so, then you find your europium in the +3 oxidation state after all. That first sentence needs to be bolded in all caps. 
If there ever was a time to use obnoxious formatting for emphasis, this is it... One suggested reason: in the +2 state europium's 4f subshell is exactly half-filled (f⁷, one electron per orbital), while the +3 state (f⁶) leaves one orbital empty. Completely filling a subshell (as in the noble gases) would be even more favourable, but per Hund's rules a half-filled subshell is the next best arrangement. This electron-configuration argument by itself favours +2; as the other answers explain, it is outweighed by the extra lattice and solvation energy available to the more highly charged +3 ion. See Hund's rules.
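The energy balance described in the answers above can be written schematically (my own summary of the argument, not an equation from the cited sources): the +3 state is preferred whenever

$$IE_3 \;<\; \Delta E_{\text{stab}} \;=\; E_{\text{latt/solv}}\!\left(M^{3+}\right) - E_{\text{latt/solv}}\!\left(M^{2+}\right),$$

i.e. whenever the third ionisation energy is smaller than the extra lattice or solvation stabilisation gained by the more highly charged ion. For europium, the half-filled $\mathrm{f^7}$ configuration raises $IE_3$ somewhat, but under ordinary conditions not above $\Delta E_{\text{stab}}$, so Eu(III) still wins.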
Is there a change in the handling of unhandled alert in ChromeDriver and Chrome with Selenium? I have a test that has been running fine for months. One thing it does is cause an alert and then verify the alert text. This is running with Selenium, Java and Chrome Driver 76.0.3809.68. Lately it has been giving me the error: "No such alert". What happens is it clicks a button and waits for an alert, if there is one: try { button.click(); } catch (UnhandledAlertException ex) { // nothing } // then here goes code to accept the alert and get the text when stepping through I see the alert. When I run it, I see the alert but it disappears. I did read something in the (Chrome Driver) release notes about unexpected alerts but it was a bit vague. We have a global page which sets the options for Chrome, but everyone uses it and I don't want to screw things up for other people. I did do it locally (did not git push) and it worked when I set the options before creating the driver. Then I tried to do this, which does not seem to work. Should it, or once the web page is retrieved, can you not change options? // Somewhere after web page retrieved this gets called: public void setIgnoreAlert() { ChromeDriver cd = (ChromeDriver) driver; ChromeOptions cap = new ChromeOptions(); cap.setCapability(CapabilityType.UNEXPECTED_ALERT_BEHAVIOUR, UnexpectedAlertBehaviour.IGNORE); Capabilities other = cap; cd.getCapabilities().merge(other); } Which I was really hoping would work, but did not. Do you have to set the behavior before the Chrome instance comes up? That is, can you not set it as I did above? Any other suggestions on how to set it after the Chrome instance is up? 
--- added later to answer question This is done immediately after the try-catch block with button.click(): The method configPage.getAndHandleAlertPopUp() does the following: public String getAndHandleAlertPopUp() { Alert alert = driver.switchTo().alert(); String alertPopup = alert.getText(); alert.accept(); return alertPopup; } set it in options before initializing the driver. options.setUnhandledPromptBehaviour(UnexpectedAlertBehaviour.IGNORE) This will throw an exception upon executing certain subsequent actions IF the alert still exists. If you handle the alert yourself, you should be good. btw, default seems to be DISMISS_AND_NOTIFY now... which closes the prompt, but still throws the exception (that's the "notify" part I guess). ACCEPT and DISMISS will close all prompts without throwing an exception down the line. btw, you can leave the options as they are, but just be aware that you have to deal with the prompt right when it appears. The code you are using to check for the text may be throwing the exception depending on what it does. This shows the W3C algorithms that the new Chromedriver is following: https://www.w3.org/TR/webdriver1/#navigation Whenever you see, "Handle any user prompts"... that's where the unexpected alert behavior comes into play. You saw it right. As per the User Prompts section within WebDriver - W3C Recommendation: The common denominator for user prompts is that they are modal windows requiring users to interact with them before the event loop is unpaused and control is returned to the current top-level browsing context. By default user prompts are not handled automatically unless a user prompt handler has been defined. When a user prompt appears, it is the task of the subsequent command to handle it. If the subsequent requested command is not one listed in this chapter, an unexpected alert open error will be returned. 
User prompts that are spawned from beforeunload event handlers, are dismissed implicitly upon navigation or close window, regardless of the defined user prompt handler. A user prompt has an associated user prompt message that is the string message shown to the user, or null if the message length is 0. As per the discussion in ChromeDriver should return user prompt (or alert) text in unhandled alert error response: When a user prompt handler is triggered, the W3C Specification states that the error response should return an "annotated unexpected alert open error" which includes an optional dictionary containing the text of the user prompt. ChromeDriver should supply the optional information. Clearly, ChromeDriver was not compliant with this standard as the @Test were annotated with @NotYetImplemented as follows: @Test @NotYetImplemented(CHROME) @NotYetImplemented(CHROMIUMEDGE) @Ignore(value = HTMLUNIT, reason = "https://github.com/SeleniumHQ/htmlunit-driver/issues/57") @NotYetImplemented(value = MARIONETTE, reason = "https://bugzilla.mozilla.org/show_bug.cgi?id=1279211") @NotYetImplemented(EDGE) public void testIncludesAlertTextInUnhandledAlertException() { driver.get(alertPage("cheese")); driver.findElement(By.id("alert")).click(); wait.until(alertIsPresent()); assertThatExceptionOfType(UnhandledAlertException.class) .isThrownBy(driver::getTitle) .withMessageContaining("cheese") .satisfies(ex -> assertThat(ex.getAlertText()).isEqualTo("cheese")); } Now this feature has been implemented with ChromeDriver v76.0: Resolved issue 2869: ChromeDriver should return user prompt (or alert) text in unhandled alert error response [Pri-2] So you have to handle the alert as a mandatory measure. A bit more of your code block for ...then here goes code to accept the alert and get the text... would have helped us to debug the issue in a better way. 
However here are the options: Induce WebDriverWait for alertIsPresent() as follows: new WebDriverWait(driver, 10).until(ExpectedConditions.alertIsPresent()); Your code trial was structured correctly, as you passed the CapabilityType.UNEXPECTED_ALERT_BEHAVIOUR, UnexpectedAlertBehaviour.IGNORE capability in the expected way: public void setIgnoreAlert() { ChromeOptions opt = new ChromeOptions(); opt.setCapability(CapabilityType.UNEXPECTED_ALERT_BEHAVIOUR, UnexpectedAlertBehaviour.IGNORE); } Another perspective would be to disable the beforeunload event handlers and you can find a couple of related discussions in: How to disable a “Reload site? Changes you made may not be saved” popup for (python) selenium tests in chrome? How to handle below Internet Explorer popup “Are you sure you want to leave this page?” through Selenium Note: Once the WebDriver and Web Browser instances are initialized you won't be able to change the configuration at runtime. Even if you are able to extract the Session ID, Cookies and other capabilities and session attributes from the Browsing Session, you still won't be able to alter those attributes of the WebDriver. You can find a detailed discussion in How can I reconnect to the browser opened by webdriver with selenium? yes, but I think my setIgnoreAlert() was ineffective as the driver had already been created. I did get our lead to give me a property I could set in my @DataProvider to bring the driver up with the ignore. I could not do it myself. I have to set the property. If it is not set, then it will remain the default which I guess is accept and dismiss (but throw the error) I added a few paragraphs into the question showing how the alert had been handled before the new Chromedriver version. Now with the new one it is dismissed, so it could not be switched to. @Tony Check out the answer update and let me know if any queries. Thanks. I had them add a way for me to specify "ignore" before the driver is created. That works. 
I suppose also, instead of switching to the alert and getting the text and accepting, I could just catch the alert error and get the text from the alert message, and simulate actually intercepting the error. @DebanjanB what maven dependency should I add and what library to import in order to use opt.setCapability(CapabilityType.UNEXPECTED_ALERT_BEHAVIOUR, UnexpectedAlertBehaviour.IGNORE) ? Currently my IntelliJ doesn't know what is CapabilityType and what is UnexpectedAlertBehaviour? Thanks in advance!
Best Practice · Last modified July 15, 2009 The degree of competitiveness of a community depends on the individual goals of community members, the actions they engage in, and to what degree inter-person comparisons or contests are desired. Articulating the community's competitiveness can help the designer of a reputation system determine which specific reputation patterns to employ. When a new or existing community requires a reputation system, the designer must give careful consideration to the degree of competitiveness the community ought to exhibit. Haphazardly introducing competitive incentives into non-competitive contexts can create problems and may cause a schism within the community. Use this pattern when choosing the type of reputation system to design for a community. This chart attempts to describe a community in terms of its 'competitiveness' - a broad term, but used here to describe a combination of things: the individual goals of community members, and to what degree those goals coexist peacefully, or conflict; the actions that community members engage in, and to what degree those actions may impinge on the experiences of other community members; and to what degree inter-person comparisons or contests are desired. This 'competitiveness spectrum' is admittedly subjective - it would not be surprising to find many examples where this model does not hold up exactly as illustrated. The attempt is to have some kind of framework to start from, not a definitive and comprehensive model. Finally, depending on the relative level of competitiveness present in your community, recommendations are made for appropriate reputation patterns. From least to most competitive, the five positions on the spectrum are:

1. Members are motivated by helping other members - giving advice, solace or comfort.
   - Use reputation to identify senior community members of good standing, so that others can find them for advice and guidance.
   - Represent reputation by accepting volunteers (of good standing) from the community to wear an Identifying Label: 'Helpful' or 'Forum Leader'. New members can trust these folks to help initiate them into the community.
2. Member goals are largely shared ones. Members work together to achieve those goals.
   - Use reputation to identify community members with a proven track record of being trustworthy partners.
   - Represent reputation with Named Levels to communicate members' history and standing: members with higher ranks should be trusted more easily than newbies.
3. Members have their own intrinsic motivations, but these goals need not conflict with other members' goals.
   - Use reputation to show a member's history of participation, so that others may get a general sense of their interests, identity and values.
   - Represent reputation with Statistical Evidence to highlight a member's contributions: just show the facts and let the community decide their worth. Optionally, Top X designations can highlight members with numerous valued contributions.
4. Members share the same goals, but must compete against each other to achieve them.
   - Use reputation to show a member's level of accomplishment, so that others may acknowledge (and admire) their level of performance.
   - Represent reputation with Numbered Levels that allow easy comparisons between members. Provide mini-motivations by awarding Collectible Achievements.
5. Members share opposing goals: in order for one member to achieve these goals, others must necessarily be denied their own.
   - Use reputation to show a member's history of accomplishments, including other members' victories and defeats against them. Reputation is used to establish bragging rights.
   - Represent reputation by letting a member track her own progress through Point Values assigned to different actions. Rank members against each other, displaying winners and losers.
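As a rough sketch of the Point Values and Numbered Levels patterns described at the competitive end of the spectrum (the action names, point awards, and level thresholds here are invented for illustration; a real system would tune them to the community):

```python
# Hypothetical point values per action type.
POINT_VALUES = {"post": 2, "answer_accepted": 15, "upvote_received": 10}

# Numbered levels keyed by the minimum points needed to reach them.
LEVELS = [(0, "Level 1"), (100, "Level 2"), (500, "Level 3")]

def score(actions):
    """Total a member's points from a list of action names."""
    return sum(POINT_VALUES.get(a, 0) for a in actions)

def level(points):
    """Map a point total onto a numbered level for easy comparison."""
    name = LEVELS[0][1]
    for threshold, label in LEVELS:
        if points >= threshold:
            name = label
    return name

alice = ["post", "answer_accepted", "upvote_received"] * 5
assert score(alice) == 135          # 5 * (2 + 15 + 10)
assert level(score(alice)) == "Level 2"
```

Exposing the point totals and levels publicly is what turns this bookkeeping into a competitive incentive, which is exactly why the pattern is recommended only for communities toward the competitive end of the spectrum.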
The Angular CLI process times out first time debug is run When I run my app in development I get the following exception. Refreshing the browser allows the app to run as expected. System.TimeoutException: The Angular CLI process did not start listening for requests within the timeout period of 50 seconds. Check the log output for error information. at Microsoft.AspNetCore.SpaServices.Extensions.Util.TaskTimeoutExtensions.d__1`1.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at Microsoft.AspNetCore.SpaServices.Extensions.Proxy.SpaProxy.d__4.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at Microsoft.AspNetCore.Builder.SpaProxyingExtensions.<>c__DisplayClass2_0.<b__0>d.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at Microsoft.AspNetCore.Builder.RouterMiddleware.d__4.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at Microsoft.AspNetCore.Authentication.AuthenticationMiddleware.d__6.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at 
Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddleware.d__7.MoveNext() Hi. We encounter the same behaviour, which is due to the compilation of source files taking longer than the permitted timeout limit to connect to Node/Angular CLI. If you refresh the browser it will eventually connect and you are good to go. Once your source files are compiled the timeout no longer occurs. Regards Age Gould @agegould is correct. And if you need to, you can change the timeout value by setting a value on the spa.Options.StartupTimeout property inside your UseSpa setup call. This worked for me. I set it to 80 seconds. app.UseSpa(spa => { spa.Options.SourcePath = "ClientApp"; if (env.IsDevelopment()) { spa.Options.StartupTimeout = new TimeSpan(0, 1, 20); spa.UseAngularCliServer(npmScript: "start"); } }); Not working for me, even after adding "spa.Options.StartupTimeout = new TimeSpan(0, 0, 80);" Also, the message says "20 seconds?!!". TimeoutException: The Angular CLI process did not start listening for requests within the timeout period of 20 seconds. Env: VS2017 15.7 preview 5.0; Windows 10 1709; with Hyper-V; Docker; @JipingWang It's fixed in the latest code which should be included in the next ASP.NET Core preview release. @SteveSandersonMS , good to know, thanks. @JipingWang : As of 2018-07-13, using Visual Studio 2017 (15.7.5), Angular-CLI 6.0.8 and Angular 6.0.3, you still need to override the default timeout value. My example uses an admittedly extremely long / paranoid value of 90 seconds, but feel free to adjust to your needs: app.UseSpa(spa => { spa.Options.SourcePath = "ClientApp"; if (env.IsDevelopment()) { spa.Options.StartupTimeout = new TimeSpan(days: 0, hours: 0, minutes: 1, seconds: 30); spa.UseAngularCliServer(npmScript: "start"); } }); out of the box this takes ages. just switched to ng serve on my own with visual studio code / terminal. now up and running in seconds. 
spa.Options.StartupTimeout = new TimeSpan(0, 0, 360); //spa.UseAngularCliServer(npmScript: "start"); spa.UseProxyToSpaDevelopmentServer("http://localhost:4200");
from collections import Counter

print("Welcome to the Frequency Analysis App")

# Characters to remove from all text before analysis
NON_LETTERS = ['1', '2', '3', '4', '5', '6', '7', '8', '9', '0', ' ', '.',
               ',', '!', '?', '"', "'", ':', ';', '(', ')', '%', '&', '#',
               '$', '\n', '\t']

def frequency_analysis(label):
    phrase = input("\nEnter a word or phrase to count the occurrence of each letter: ").lower().strip()
    # Remove all non-letters from the phrase
    for non_letter in NON_LETTERS:
        phrase = phrase.replace(non_letter, '')
    total_occurrences = len(phrase)
    if total_occurrences == 0:
        print("\nNo letters to analyze.")
        return
    # Create a counter object
    letter_count = Counter(phrase)
    # Determine the frequency analysis for the phrase
    print("\nHere is the frequency analysis from " + label + ":")
    print("\n\tLetter\t\tOccurrences\tPercentage")
    for letter, count in sorted(letter_count.items()):
        percentage = round(100 * count / total_occurrences, 2)
        print("\t" + letter + "\t\t\t" + str(count) + "\t\t\t" + str(percentage) + " %")
    # List the letters from highest occurrence count to lowest
    ordered_letters = [pair[0] for pair in letter_count.most_common()]
    print("\nLetters ordered from highest occurrence to lowest:")
    print(''.join(ordered_letters))

frequency_analysis("key phrase 1")
frequency_analysis("key phrase 2")
export function hasAncestor(node, ancestor) {
  let parent = node
  while (parent && ancestor !== parent) {
    parent = parent.parentNode
  }
  return !!parent
}

export function findMatchingAncestor(node, selector) {
  let parent = node
  while (parent && 'matches' in parent && !parent.matches(selector)) {
    parent = parent.parentNode
  }
  return parent && 'matches' in parent ? parent : undefined
}

export function getDebugName(el) {
  return el.tagName.toLowerCase() +
    (el.id ? '#' + el.id : '') +
    (el.classList.length > 0 ? '.' + Array.from(el.classList).join('.') : '')
}

// for large DOMs with few changes, checking the mutations is faster than querySelectorAll()
export function applyNodeMutations(elements, mutations, selector) {
  for (let mutation of mutations) {
    for (let addedEl of mutation.addedNodes) {
      // added/removed nodes may include text nodes, which have no matches()
      if ('matches' in addedEl && addedEl.matches(selector)) {
        elements.push(addedEl)
      }
    }
    for (let removedEl of mutation.removedNodes) {
      if ('matches' in removedEl && removedEl.matches(selector)) {
        elements.splice(elements.indexOf(removedEl), 1)
      }
    }
  }
}
Miracast is a wireless connection standard that displays content from laptops and smartphones on projectors or TVs. Any display can act as a receiver as long as the device powering it supports the Miracast standard. Miracast supported devices The technology uses the peer-to-peer Wi-Fi Direct standard, which requires specific hardware for communication. Adapters that plug into USB or HDMI ports are available to add Miracast support to devices or displays that do not support it natively. Most of the latest computers running Windows 8.1 and 10 should support it, which means users can mirror the screen to another display such as a TV. Verify if your computer supports Miracast You can easily find out if your PC running Windows 10 supports Miracast or not: - Tap on the Windows-key, type connect, and then press Enter. You’ll get the message “The device doesn’t support Miracast, so you can’t project to it wirelessly” or “’name’ is ready for you to connect wirelessly.” Things will be a bit different in case you’re using Windows 8.1. You can run the DirectX Diagnostic Tool in order to get your answer, but this might not be very reliable. Here are the recommended steps: - Press the Windows-key, type dxdiag.exe, and press Enter - Confirm any prompt that appears and wait for the scanning process to end - Select Save All Information and pick a local directory - Open the saved text file and locate the Miracast entry The wireless adapter also needs to support Virtual Wi-Fi and Wi-Fi Direct. You’re going to need a device that supports at least NDIS 6.3 because Wi-Fi Direct was implemented in that version. The display driver also needs to support WDDM 1.3 and Miracast. If your driver is up to date, it should be fine. 
Here’s what you need to do in order to find out: - Press the Windows-key, type powershell and press Enter - Use the command Get-NetAdapter | Select Name, NdisVersion to list the supported NdisVersion for every network adapter - Make sure it is at least 6.3 For WDDM support, you should check the previously saved DxDiag diagnostic log. Search for WDDM to display the support version.
Duplicate request sent at fixed time interval on tomcat 5.5 mod_jk connector Configuration on tomcat - mod_jk connector worker.properties - worker.list=tomcat worker.tomcat.type=ajp13 worker.tomcat.host=localhost worker.tomcat.port=7615 Problem Statement - For requests carrying heavy data loads, when the browser sends a request to my application, after exactly 3 minutes I can see another, duplicate request being sent to my application. I tried the following things - 1) For small data loads, I tried putting a Thread.sleep for 5 minutes, and the result was a duplicate request sent to my application. 2) I also tried configuring the worker.properties with reply_timeout and socket_timeout, but it keeps on sending duplicate requests. 3) I also deployed my application on Tomcat 6, but I got the same result. The mod_jk logs - [Mon Nov 12 11:09:04 2012] [23690:1908194448] [info] init_jk::mod_jk.c (3183): mod_jk/1.2.28 initialized [Mon Nov 12 11:15:32 2012] [23692:1908194448] [info] ajp_send_request::jk_ajp_common.c (1496): (tomcat) all endpoints are disconnected, detected by connect check (1), cping (0), send (0) [Mon Nov 12 11:15:54 2012] [23797:1908194448] [info] ajp_process_callback::jk_ajp_common.c (1788): Writing to client aborted or client network problems [Mon Nov 12 11:15:54 2012] [23797:1908194448] [info] ajp_service::jk_ajp_common.c (2447): (tomcat) sending request to tomcat failed (unrecoverable), because of client write error (attempt=1) [Mon Nov 12 11:15:54 2012] [23797:1908194448] [info] jk_handler::mod_jk.c (2608): Aborting connection for worker=tomcat [Mon Nov 12 11:16:31 2012] [23694:1908194448] [info] ajp_send_request::jk_ajp_common.c (1496): (tomcat) all endpoints are disconnected, detected by connect check (1), cping (0), send (0) I do not get any error in the access.log. Could there be a network connection issue? If yes, then what is the possible solution to this problem? Thanks. how are you processing the request ? are you using AJAX ? 
No. I am using the usual action class to process my request.
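One cause of duplicates at a fixed interval is mod_jk recovering (resending) a request it considers failed after a timeout. The worker can be told never to resend. A hedged sketch of worker.properties for this setup — the timeout value is illustrative, not a known-good number for this application:

```properties
worker.list=tomcat
worker.tomcat.type=ajp13
worker.tomcat.host=localhost
worker.tomcat.port=7615
# How long (in ms) mod_jk waits for a reply before declaring the worker
# in error; must exceed the longest expected request (here: 10 minutes).
worker.tomcat.reply_timeout=600000
# 3 = never resend a request that was already forwarded to Tomcat
# (bit 1: no recovery if Tomcat got the request, bit 2: no recovery
# once response headers were sent to the client).
worker.tomcat.recovery_options=3
```

If the duplicates still appear with recovery disabled, the resend is coming from the client or a proxy in front of Apache rather than from mod_jk.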
STACK_EXCHANGE
Permission denied for view beyond its own schema
There are two schemas, rohdaten_fiat and staging_fiat. I have created a view in the latter based on the first one:
CREATE VIEW staging_fiat.udz_odx_dtcs AS
SELECT * FROM rohdaten_fiat.udz_odx_relational_dtcs;
While keeping the database connection alive I can access the view, e.g.:
SELECT * FROM staging_fiat.udz_odx_dtcs;
It runs under the following parameters:
SELECT current_user, current_setting('search_path'::text) AS search_path
     , relowner::regrole, relnamespace::regnamespace, relname, relacl
     , pg_get_viewdef(c.oid) AS view_definition
FROM   pg_class c
WHERE  relname = 'udz_odx_dtcs'
OR     relname = 'udz_odx_relational_dtcs';
See: https://pastebin.com/Hn7jAUat
However, after closing the connection and reconnecting to the database, I cannot:
SELECT * FROM staging_fiat.udz_odx_dtcs;
ERROR: permission denied for relation view
The environmental parameters are the same (see https://pastebin.com/Hn7jAUat). Curiously, I can execute the query of the view manually:
SELECT * FROM rohdaten_fiat.udz_odx_relational_dtcs;
Only the view does not work. Where do the permissions get lost? And why? It's PostgreSQL 10.12. The connection, including the search_path, is absolutely identical.
To diagnose, inspect the output of this query before and after reconnecting in your question:
SELECT current_user, current_setting('search_path'::text) AS search_path
     , relowner::regrole, relnamespace::regnamespace, relname, relacl
     , pg_get_viewdef(c.oid) AS view_definition
FROM   pg_class c
WHERE  relname = 'view' -- actual view name
OR    (relname = 'table' AND relnamespace = 'schema1'); -- actual table & schema name
You should get two or more rows each time. I omitted the schema of the view on purpose to see whether there might be others with the same name. Oh, and make sure your transaction is committed. Read the manual here. This is not a direct answer, but it should enable you to get your answer.
Thanks to this query, I was finally able to identify a background script which was setting the relowner and relacl of the view to a role with permissions only within the schema staging_fiat. Thus, it's working now. Thank you!
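The underlying mechanism: for a view, PostgreSQL checks the privileges of the view's *owner* (not the calling user) on the underlying relations, which is why a changed relowner with rights only in staging_fiat breaks the view even though the caller can query the base table directly. A sketch of the two possible fixes — the role names staging_owner and original_owner are hypothetical:

```sql
-- Either grant the new owner the rights it needs on the source schema:
GRANT USAGE  ON SCHEMA rohdaten_fiat TO staging_owner;
GRANT SELECT ON rohdaten_fiat.udz_odx_relational_dtcs TO staging_owner;

-- ...or put the view back under an owner that already has those rights:
ALTER VIEW staging_fiat.udz_odx_dtcs OWNER TO original_owner;
```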
STACK_EXCHANGE
Find resources for learning how to use cross-site publishing to make published content available to users in SharePoint Server 2013. What is cross-site publishing: Cross-site publishing is a new publishing method that lets you create and maintain content in one or more authoring site collections and publish this content in one or more publishing site collections by using Search Web Parts. Cross-site publishing complements the already existing publishing method, author-in-place, where you use a single site collection to author content and make it available to readers of your site. Why would I use this scenario: You should use cross-site publishing when you want to store and maintain content in one or more authoring site collections and display this content in one or more publishing site collections. Using cross-site publishing provides the following benefits: Cross-site publishing uses search technology to retrieve content. On a site collection where the Cross-Site Collection Publishing feature is enabled, libraries and lists have to be enabled as catalogs before the content can be reused in other site collections. The content of the library or list catalogs must be crawled and added to the search index. The content can then be displayed in a publishing site collection by using one or more Search Web Parts. When you change the content in an authoring site collection, those changes are displayed on all site collections that reuse this content. In addition, you can use other search driven features like faceted navigation, query rules and usage analytics to adapt the content that is displayed on the publishing site. Learn how you can use cross-site publishing and Search Web Parts to create adaptive SharePoint Internet, intranet, and extranet sites. Learn about the key changes and improvements in SharePoint 2013 web content management.
How to set up a product-centric website Learn how you can use the new web content management features to set up a website that is based on product catalog data. Before you start to build out your site using cross-site publishing, you should familiarize yourself with typical cross-site publishing architectures, the search driven features that let you adapt the content that is displayed on the publishing site, and the different features that are involved when you add a custom design to your publishing site. Plan the architecture for cross-site publishing Learn about components and typical architectures for SharePoint cross-site publishing sites. Plan authoring sites for cross-site publishing Learn how to plan the authoring sites for your cross-site publishing solution. Plan publishing sites for cross-site publishing Learn how to plan the publishing sites for your cross-site publishing solution. Plan search for cross-site publishing sites Search-driven pages are pages that use search technology to dynamically show content. Learn about the features that you will use on search-driven pages, such as managed properties, refiners, result sources, and recommendations, and find out what you must consider when you set up and use these features. Plan the design of a publishing site Learn about the different features that are involved when you add a custom design to your publishing site, such as master pages and page layouts. Get an introduction to how display templates are used to customize the content that is shown in Search Web Parts, and how you can use device channels and device channel panels to render a publishing site in multiple ways. Also, learn how you can use the new Design Manager feature to manage all aspects of your custom design. Learn how to use the design features that are involved when you add a custom design to your publishing site. Develop the design of a publishing site Learn how to add a custom design to your publishing site.
Develop the design of Search Web Parts Learn how to add a custom design to your Search Web Parts. Learn how to create content for cross-site publishing, how to add content to the search index, and how to use search driven features to display content on a publishing site. Configure cross-site publishing Learn how to create site collections for cross-site publishing, activate the Cross-Site Collection Publishing feature, create and manage term sets for tagging content on authoring sites, create catalog content by using SharePoint lists, share a library or list as a catalog, and configure search settings for cross-site publishing. Connect a publishing site to a catalog To show content from a library or list that is shared as a catalog, you must connect the publishing site collection to the catalog. Configure Search Web Parts Search Web Parts use search technology to show content that was crawled and added to the index. In Search Web Parts, queries are configured so that a subset of content from the search index is shown in a particular ranking order. When users browse to a page that contains a Search Web Part, the Web Part automatically issues the query. The result is then shown in the Web Part. You can modify the associated display template to decide how the search results should be displayed. Configure refiners and faceted navigation You can add refiners to a page to help users quickly browse to specific content. Refiners are based on managed properties from the search index. To use managed properties as refiners, the managed properties must be enabled as refiners. Faceted navigation is the process of browsing for content by filtering on refiners that are tied to category pages. Faceted navigation lets you specify different refiners for category pages, even when the underlying page that displays the categories is the same. 
Learn how to map a crawled property to a refinable managed property, enable a managed property as a refiner and configure refiners for faceted navigation. Configure result sources Result sources limit searches to certain content or to a subset of search results. Learn how to create and manage result sources. Configure query rules Query rules can be used to improve search results that respond to the intent of users. Learn how to create and manage query rules. Configure recommendations and usage event types Usage events let you track how users interact with items on your site. Items can be documents, sites, or catalog items. You can use the data that is generated by usage events to show recommendations or popular items on your site. Learn how to create custom usage event types, and how to add code to record custom usage events so that they can be processed by the analytics processing component. Learn how to view reports that are based on usage event statistics of your site. View usage reports Learn how to view the Popularity Trends report and the Most Popular Items report to see usage event statistics for the content on your site. Read how Mavention, a Dutch system integrator, have used cross-site publishing for their new website, and then learn how to use your publishing site. Case study: Mavention This case study provides an overview of new SharePoint Server 2013 web content management features and describes how these features benefit Mavention after they upgraded their website from SharePoint Server 2010. Case study: United Airlines This case study gives an overview of new SharePoint Server 2013 web content management (WCM) and search features and shows how United Airlines used these features to improve their Service Catalog.
OPCFW_CODE
Snapping real-life objects with your smartphone and transferring them straight to your computer's Photoshop window might have sounded like an insane idea until now. But a new app shows it can indeed be done. The augmented-reality-based application suite called AR Cut & Paste lets you do so by pointing your smartphone camera at the desired object. Once it captures the subject and strips off the extra background, all you need to do is point the camera at your computer screen to paste the object in its desired place. Cyril Diagne, the inventor of the service, has posted a video on Reddit demonstrating it, and has also published the detailed source code of the service on GitHub. Diagne, an artist, designer and programmer, is connected to Google's Cultural Research Laboratory and also serves as the director of media and interaction design at the Lausanne University of Art and Design. How to use AR Cut and Paste To use the service, install the tool on your smartphone. Then all you need to do is focus on the desired object with your device camera and shoot the image. The application then shifts the captured image to the computer display for you. For now, the application takes around two and a half seconds to capture the object on the smartphone, and shifting the image takes around four more seconds. If that sounds rather slow, the application is still in its infancy; the developer has announced the rollout of another AI + UX prototype sometime next week. How to install The application utilises both augmented reality and machine learning to make this happen. The process involves three modules working on the smartphone, the local server and the target device. Once the smartphone captures the image, the smartphone application uploads it to storage in the cloud.
Once the user aims their smartphone at the display, the phone camera finds the position on-screen by using another service called screen point. "For now, the salience detection and background removal are delegated to an external service," the developer said. "It would be a lot simpler to use something like DeepLab directly within the mobile app. But that hasn't been implemented in this repo yet." You should configure the local server on your device with the following commands:
virtualenv venv
source venv/bin/activate
pip install -r requirements.txt
Later, you need to activate the remote connection facility of Photoshop on your device and start the server with a command along these lines:
python src/main.py --basnet_service_ip="http://A.A.A.A" --basnet_service_host="basnet-http.example.com"
The application utilises BASNet (Boundary-Aware Salient Object Detection) to remove the background of the subject image. Alongside, it uses OpenCV SIFT to track where the phone is aimed on the computer display. Find the tiny Python package here.
OPCFW_CODE
A while ago, I stumbled on somebody else’s blog entry Five things I hate about Python. The game (apparently) is to pick your favourite programming language and pick five things you don’t like about it. So, here are my five things: - No parallelism. The Python interpreter has a global lock that makes it impossible to parallelize execution. My processor has two cores, multiple pipelines, and a vector unit. Wouldn’t it be spiffy to use those? I have used the Parallel Python module to get around this (by spawning multiple interpreters), but it’s a hassle, and only applicable in certain cases. To be fair, this is a common problem in imperative languages, which force the programmer to precisely specify how things are calculated. It’s much easier for a compiler to parallelize things in a functional language, which has the programmer specify only what is calculated. Maybe Haskell is the answer to all of our problems? Hey… why are all the 383 students looking at me like that? - Late binding. This is the mechanism that allows the beauty of duck typing, so it’s probably a net win. It comes up in situations like this:
def add(a, b):
    return a + b
Until the function is called, there’s no way for Python to know whether the + there is addition, string concatenation, or something else. So, when each statement executes, Python has to decide what operators (or whatever) to use at that moment. The net result is slowing the language down a lot. - Type confusion. I don’t know if it’s the duck typing or weak typing, but beginning programmers (aka CMPT 120 students) often have problems getting the type that a particular value has. I very often see students converting a type to itself. For example:
name = str(raw_input("Name: "))
count = int(0)
That indicates some serious confusion about what’s going on. Or maybe I’m a bad teacher. I’d accept that as an explanation. - GUI libraries. The standard Python install comes with only a Tk binding for GUI development.
I really wish wxPython came with the default install. Then, we could all use it and assume it would be there. It would make Python a pretty serious contender for cross-platform GUI development. - Standard library. One of the principles of Python is that “the batteries are included”. In other words, the libraries you need are there by default. That’s usually true, but there are a few things I wish were always there. The Python modules that I seem to have to install the most often are: Biggles, Imaging, Numarray/Numeric, Parallel Python, PyGame. As an aside, maybe if the Imaging and PyGame modules were accepted into the standard library, there would be some pressure to get some good documentation going for them.
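The late-binding point can be seen directly: the same two-line function works on any type that supports +, which is exactly what the interpreter has to rediscover on every single call.

```python
def add(a, b):
    return a + b

# One function body, three different meanings of "+",
# each resolved only at call time.
print(add(2, 3))          # integer addition
print(add("foo", "bar"))  # string concatenation
print(add([1], [2, 3]))   # list concatenation
```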
OPCFW_CODE
02-25-2015 11:44 PM When I submit authorizeAndCapture (AuthorizeNetAIM) on sandbox it gives me this "Error connecting to AuthorizeNet" message. Is the sandbox down? I've been experiencing this since yesterday. [_response_array:AuthorizeNetAIM_Response:private] => Array [error] => 1 [error_message] => Error connecting to AuthorizeNet 03-31-2015 10:53 PM I'm having the same issue since today. I think the problem is in the PHP SDK, as I'm able to connect with cURL. I know they made some maintenance on the sandbox today. Anyone else with a similar issue? 04-01-2015 02:10 AM - edited 04-01-2015 02:12 AM They are working on the SSL cert, should be working now. If you are still having issues, read this 04-01-2015 12:57 PM All my sandbox projects just started throwing the same error: Response Code: 3 Response Subcode: 1 Response Reason Code: 261 Is this the same issue? The SSL cert and cURL fixes didn't help. I also can't create a new sandbox account right now.... 04-01-2015 01:37 PM We are currently investigating an issue with our sandbox server that is causing it to return response code 261. We're working to get it back up as soon as possible. I apologize for the inconvenience. 04-01-2015 02:44 PM We're still working on some of the systems and doing some additional testing, but as of now transaction processing should be back up. Please let me know if you continue to see any errors. 04-03-2015 09:51 AM For the last few days the auth.net environment has been sporadically failing to respond. No code or text has been provided. When will it be fixed? response_code = , response_reason_code = , response_reason_text = 04-06-2015 07:47 AM Ever since March 31, I am still having problems getting the sandbox to work UNLESS I turn off "Verify SSL Certificate". I am using osCommerce and the add-on for Authorize.net, AIM module, ver. 2.1.
I have seen postings elsewhere about replacing the PEM file, but in the osCommerce add-on module for AIM, if I am looking in the right place, there are two files that (to me) seem relevant: authorize.net.crt, and cacert.pem. The latter is not used if the former is present. Is a pem file the same thing as authorize.net.crt? I tried getting a new PEM from Github, and replacing the existing cacert.pem (while at the same time removing authorize.net.crt so that the pem file would be utilized) but that did not work. And, in case the new pem is the same thing as authorize.net.cert, I then tried replacing the old authorize.net.crt with the new pem file, but that didn't work either. I am pretty much in the dark here. Any suggestions? 04-06-2015 12:16 PM Connecting to sandbox was working OK last Friday (Apr. 3), but can't connect today (Monday, Apr. 6). AIM SDK just returns "Error connecting to AuthorizeNet". Is sandbox down for maint. or something else broken?
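For readers hitting the same "turn off Verify SSL Certificate" workaround: the safer route is to keep verification on and point cURL at an up-to-date CA bundle (a fresh cacert.pem like the one discussed above). This is a sketch, not the osCommerce module's actual code — the bundle path and request fields are placeholders:

```php
<?php
// Keep SSL verification enabled; supply a current CA bundle instead.
// Field values and the cacert.pem path are illustrative placeholders.
$postFields = ['x_login' => 'API_LOGIN_ID', 'x_tran_key' => 'TRANSACTION_KEY'];

$ch = curl_init('https://test.authorize.net/gateway/transact.dll');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($postFields));
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true);            // do NOT disable this
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);
curl_setopt($ch, CURLOPT_CAINFO, __DIR__ . '/cacert.pem'); // fresh CA bundle
$response = curl_exec($ch);
if ($response === false) {
    echo 'cURL error: ' . curl_error($ch) . "\n";
}
curl_close($ch);
```

To answer the .crt/.pem question in passing: both are container formats for certificates; a .crt is very often PEM-encoded already, but a single server certificate is not interchangeable with a full CA bundle, which is what CURLOPT_CAINFO expects.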
OPCFW_CODE
Lysaker, Akershus, Norway
Introduction to the Job
At TechnipFMC we deliver critical projects of a scale, scope and difficulty that you simply won't find anywhere else. We offer breakthrough projects, a global playground and inspiring experiences. We are now looking for an Analysis Engineer to join our multicultural team in Lysaker, Norway. As part of the installation analysis team, you will perform installation analysis engineering related to flexible, umbilical and rigid pipe and subsea lifting of structures. The work includes hydrodynamic calculations and modelling in OrcaFlex of subsea products and assets, and static and dynamic analysis to verify system constraints and operability. You will interface with project and lead engineers in projects, and with other engineering disciplines, to ensure optimized and safe installation methods.
As an analysis engineer your main tasks and responsibilities will be:
· Data gathering and evaluation, design basis writing and follow-up
· Calculation of hydrodynamic and inertial properties of structures
· Use of marine operations software packages to model products, structures, vessels and other systems required to perform subsea operations such as pipe laying and lifting
· Verification of modelling and calculations towards relevant industry codes, standards, design guides, rules and regulations
· Use the developed automation based on Python scripts and have the capacity to develop automation as required and requested
· Liaise with associated subsea disciplines and coordinate interfaces within projects
· Provide technical engineering support to other departments and to colleagues in terms of training and mentoring
· Preparation of study reports and participation in tender work related to installation analysis
· Support the knowledge development within the discipline by providing guidance, training and mentoring to engineers
· Ensure that engineering documents meet the general quality requirements, codes, and standards
· Ensure that the discipline engineering deliverables are compliant with contract requirements (including budget), the project QHSE plan and TechnipFMC and discipline engineering processes and standards
· Ensure “lessons learned” are shared with other Lead Engineers and project support functions
You are meant for this job if
You have a minimum of 3 years' experience working within the relevant subsea engineering sphere and have knowledge of engineering fundamentals associated with subsea construction activities. You should have a proven ability to apply engineering and project management skills to subsea construction activities. You have a degree (BSc or MSc) within Marine Structures, Marine Hydrodynamics, Marine Engineering, Mechanical Engineering or Civil Engineering, or equivalent. It is positive if you have experience from one of the following disciplines/fields: flexible and umbilical pipelay, rigid pipelay or subsea lifting of structures.
Requirements for this role include:
· Engineering degree (MSc or BSc)
· Knowledge of engineering fundamentals associated with subsea construction activities
· Skills in modelling of hydrodynamic physical systems
· Basic programming skills, preferably Python
· Good understanding of statistics and the probabilistic approach to engineering design
· Structured with a high sense of quality and on-time performance
· Capable of generating results working both in a team and alone
· Good communication skills
· Fluent written and verbal English
Your future at TechnipFMC
· Good opportunities for a technical career path
· Large international company with good global mobility opportunities
· Training and development through project activity
· Diverse and international team
Application deadline is 24.11.2019.
For further information, please contact: Manager Project Engineering – Danone Bauknight, phone +47 6758 8562/mobile +47 4647 3661 Discipline Supervisor Analysis - Young Jang Na, phone +47 6720 2551 Learn more about TechnipFMC Learn more about us and find other open positions at our Career Page . Follow us on LinkedIn for company updates
OPCFW_CODE
[Image gallery: stand-alone room divider ideas — freestanding screens, curtain dividers, plant-stand dividers, folding panels, glass partitions and swivel TV dividers used to partition living spaces.]
OPCFW_CODE
Thx, I have the same init code in the driver. Do you have any main.c code that shows how to use this driver? After I initialized all functions of the MCU, I called the bhy_initialize_support() function and it successfully gets the sensor id. But what's next? The example code in the driver just shows 2 functions, the sensor callback and the demo_sensor. But I don't know how to implement these into my main to get continuous readings ...

Hello, I started a new project which contains the sensor data from the BHI160b. I am using the BHI160 shuttle board. My microcontroller is an STM32L053R8 which I will use to read out the sensor data. I started to follow the driver porting guide to adapt the functions to my platform. First I created a new project with my STM32 init code and also included the BHy MCU driver. At the beginning I removed these lines in the file bhy_support.c:
#include "FreeRTOS.h"
#include "task.h"
extern int8_t sensor_i2c_write(uint8_t addr, uint8_t reg, uint8_t *p_buf, uint16_t size);
extern int8_t sensor_i2c_read(uint8_t addr, uint8_t reg, uint8_t *p_buf, uint16_t size);
extern void trace_log(const char *fmt, ...);
After that I tried to build the target files and I got an error:
..\Drivers\BHI160b_driver\inc\BHy_support.h(59): error: #5: cannot open source input file "twi.h": No such file or directory
Because twi.h is a platform-specific library for Atmel, I also removed this line from the file bhy_support.h. After this change I tried to rebuild again and now I have a lot of undefined identifiers:
..\Drivers\BHI160b_driver\inc\bhy_uc_driver_types.h(252): error: #20: identifier "uint8_t" is undefined uint8_t sensor_id;
The porting guide says that in this case, bhy.h should be modified to define the following fixed-width types: s8, s16, s32, u8, u16, u32. I looked in this file and I am not sure how exactly to implement these types. Does anyone have an example for that?
Also, I should implement platform-specific sensor_i2c_write() and sensor_i2c_read() functions. Do these 2 functions have to be in bhy_support.c? Because there are no existing I2C functions, just declarations like this:
extern int8_t sensor_i2c_write(uint8_t addr, uint8_t reg, uint8_t *p_buf, uint16_t size);
extern int8_t sensor_i2c_read(uint8_t addr, uint8_t reg, uint8_t *p_buf, uint16_t size);
Are there any example codes that use an HAL I2C connection to this sensor? Thx for helping!
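The two porting questions above can be sketched as follows for an STM32L0 with the STM32Cube HAL. This is a sketch, not Bosch's reference code: the USE_STM32_HAL guard macro, the hi2c1 handle name, and the 100 ms timeout are my assumptions, and the return-value convention (0 on success, -1 on error) should be checked against the driver's expectations.

```c
#include <stdint.h>

/* bhy.h expects these fixed-width aliases; mapping them to <stdint.h>
   types is the usual fix for the "identifier uint8_t is undefined"
   build errors from bhy_uc_driver_types.h. */
typedef int8_t   s8;
typedef int16_t  s16;
typedef int32_t  s32;
typedef uint8_t  u8;
typedef uint16_t u16;
typedef uint32_t u32;

#ifdef USE_STM32_HAL /* only compiles inside the STM32 firmware project */
#include "stm32l0xx_hal.h"

extern I2C_HandleTypeDef hi2c1; /* handle created by the CubeMX init code */

/* BHy driver hooks: the HAL "mem" transfers send the register address
   before the payload, matching the reg+data pattern the driver expects.
   Note that the HAL wants the 7-bit device address shifted left by one. */
int8_t sensor_i2c_write(uint8_t addr, uint8_t reg, uint8_t *p_buf, uint16_t size)
{
    return (HAL_I2C_Mem_Write(&hi2c1, (uint16_t)(addr << 1), reg,
                              I2C_MEMADD_SIZE_8BIT, p_buf, size, 100) == HAL_OK)
               ? 0 : -1;
}

int8_t sensor_i2c_read(uint8_t addr, uint8_t reg, uint8_t *p_buf, uint16_t size)
{
    return (HAL_I2C_Mem_Read(&hi2c1, (uint16_t)(addr << 1), reg,
                             I2C_MEMADD_SIZE_8BIT, p_buf, size, 100) == HAL_OK)
               ? 0 : -1;
}
#endif
```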
OPCFW_CODE
Did you realize there are over 1.3 million computer programmers in the United States? Finding a way to rise above the competition in this industry is no easy task. The best way to achieve this goal is by producing quality work in a timely manner. Learning how to navigate the complexities of scripting languages like PHP is essential when trying to become proficient at building web-based software and apps. In the beginning stages of your PHP learning journey, you will undoubtedly make a number of mistakes. The key thing you need to worry about when trying to become a better programmer is learning from these mistakes. The following are some of the PHP mistakes you need to avoid when trying to become a better programmer. Failing to Secure SQL Code Can Be Extremely Problematic Writing code that can withstand hacking attempts is something you should be passionate about. If you get a reputation for creating less than stellar code, chances are you will have a hard time getting work. The best way to ensure your PHP is safe and secure is by protecting against SQL injections. Popular website building platforms use PHP as the backbone of their interfaces. When developers create a plug-in for such a system, they will often create SQL statements. These statements will be sent into the SQL database, which means they can be used to infiltrate a site. Creating parameterized queries is a great way to limit the damage SQL injections can cause. Never Suppress Errors When working in PHP, you will be presented with a number of different types of errors. Some of these errors can be suppressed in the code. While this may seem like a great way to avoid getting bugged by constant error messages, it can actually lead to a variety of issues. However, if you want to silence errors that aren't really critical to how your new program will run, then using the @ symbol to suppress them is a good idea. If you are going to use this trick, be sure to use it sparingly.
Silencing every error message you are presented with may lead to functionality issues in the future. In reality, it is best to fix the problems causing the error messages rather than silencing them. Doing this can help you move forward without worrying that a silenced problem will come back to haunt you. Always Remove Development Configurations Having an adequate development environment is crucial when trying to bring a web-based app or site to life. These development environments mimic how the code you are writing will interact with other components. The last thing you need to do is get in a hurry to release the program in question and fail to remove the development configurations. There are usually tons of different errors in this environment that need to be fixed before release. This is why moving the program to an actual hosting environment before deployment is imperative. The time and effort invested in this preparation will be worth it considering the problems it can help a developer avoid. Not Running Backups Can Be Disastrous Building a website or web-based application is a lot of hard work. Protecting the progress you make throughout this process is important. The best way to do this is by creating and following a strict backup policy. Ignoring the need for a backup system can lead to you losing all of your work in the event of a network crash or outage. Developing New Technology Takes Time Some developers put too much value in being first to market with a new program. Instead of rushing to get a program developed, take your time to ensure no mistakes are made. Originally posted 2019-07-30 18:09:47.
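The parameterized-query advice above can be sketched with PDO prepared statements. The DSN, credentials and table name are placeholders, not a recommendation for any particular platform:

```php
<?php
// Sketch: a parameterized query with PDO. User input is bound as data,
// never concatenated into the SQL text, so it cannot change the
// statement's structure. DSN, credentials and table are placeholders.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret', [
    // Surface errors as exceptions instead of @-silencing them.
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

$stmt = $pdo->prepare('SELECT id, title FROM posts WHERE author = :author');
$stmt->execute([':author' => $_GET['author'] ?? '']);
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
```

Note how the two tips reinforce each other: ERRMODE_EXCEPTION means a failed query is loudly reported rather than quietly suppressed.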
That 220.127.116.11 IP address seems to belong to APNIC - http://www.apnic.net/ - the "Regional Internet Registry that allocates IP and AS numbers in the Asia Pacific region." The "whois" information in the APNIC web site for that IP address says that it belongs to the "APNIC Debogon project": I'm posting those "whois" results at the bottom of this post. What is the Debogon project? The Wikipedia article about "Bogon Filtering" - http://en.wikipedia.org/wiki/Bogon_filtering - says that "a bogon IP address is a bogus IP address, and an informal name for an IP packet on the public Internet that claims to be from an area of the IP address space reserved, but not yet allocated or delegated by the Internet Assigned Numbers Authority (IANA) or a delegated Regional Internet Registry (RIR). The areas of unallocated address space are called the bogon space." Having said that, I don't know why your TP-LINK router is sending all DNS requests to that IP address. I also have a TP-LINK router and I don't think it's displaying that behavior. When I get to that router, I'll check it. EDIT: The TP-Link that I mentioned is a wireless router "TL-WR340G" and it's connected to a cable modem. In the router web interface, in the "Network -> WAN" page, I have the "WAN Connection Type" set to "Static IP" and it is set to an IP address on the same network as the cable modem. The "Default Gateway" and "Primary DNS" are both set to the IP address of the cable modem. I hope this helps!
:-/ APNIC found the following authoritative answer from: whois.apnic.net % [whois.apnic.net node-5] % Whois data copyright terms http://www.apnic.net/db/dbcopyright.html inetnum: 18.104.22.168 - 22.214.171.124 descr: APNIC Debogon Project descr: APNIC Pty Ltd status: ASSIGNED PORTABLE changed: email@example.com 20110922 role: APNIC RESEARCH address: PO Box 3646 address: South Brisbane, QLD 4101 remarks: + Address blocks listed with this contact remarks: + are withheld from general use and are remarks: + only routed briefly for passive testing. remarks: + If you are receiving unwanted traffic remarks: + it is almost certainly spoofed source remarks: + or hijacked address usage. remarks: + http://en.wikipedia.org/wiki/IP_address_spoofing remarks: + http://en.wikipedia.org/wiki/Regional_internet_registry changed: firstname.lastname@example.org 20110822
When I start a new project, I have a simple checklist that helps me deliver faster. The first item is to avoid reinventing the wheel, and to check it off I use boilerplate code written by me or by other developers. The second item on my checklist is to scan the market for new technologies I can use to be more productive. Sometimes, with an app generator, those two items from my checklist can be combined and delivered at once by a single tool. In my opinion, a good app generator should provide at least three things: - A significant part of the source code for my app - The boilerplate code should be generated in modern technologies - Stable and tested source code This article presents a short list of app generators that I've used in my projects, plus tools that look very promising but aren't production-ready yet. Thank you for reading! TeleportHQ app generator TeleportHQ is a platform and a suite of open-source tools built for user interface professionals. It simplifies the process of creating, maintaining and publishing user interfaces for desktop and mobile devices. TeleportHQ uses AI to analyze the user's intentions and augments the final result with real-time optimizations. GatsbyJS is a free and open-source app generator based on React that helps developers build blazing-fast websites and apps. This generator uses GraphQL to read information from various sources (headless CMSs, Markdown, YAML files) and translate all that content into blazing-fast apps. It may not be relevant, but all my blogs are powered by GatsbyJS. - GatsbyJS - official website - GatsbyJS source code - published on Github - GatsbyJS starters - open-source starters for almost anything: landing pages, ecommerce apps, blogs. Nextjs is a React framework built by Zeit capable of generating SSR and JAMstack apps styled with CSS-in-JS. The Nextjs documentation is great but falls short on a single point: there are no official starters to play with and test the technology.
- Nextjs the official website - CSS-in-JS - the styling library used by the framework - A short unofficial list with Nextjs starters Gridsome, in one sentence, is GatsbyJS but for Vue. The whole product pattern is mirrored: information is read by GraphQL from various sources (YAML, headless CMS, Markdown) and injected into JAMstack apps ready to be deployed on Netlify, Zeit NOW and other modern platforms. Quasar is a high-performance, Material Design 2, full front-end stack for Vue.js that provides a single code-base for all platforms simultaneously through Quasar CLI, with all the latest and greatest best practices out of the box. AppSeed Web App Generator I must say from the beginning that this is my startup, which encapsulates my whole R&D work for the last two years. The code generation process is split into two steps. - Flat HTML themes are parsed and converted to various template engines: PUG, Jinja2, Blade, using an interactive HTML parser - The HTML components and layouts are injected into simple Nodejs, Python and PHP boilerplates already coded with authentication, an ORM and database connectors. This is an open list, feel free to suggest more or AMA in the comments. Thank you!
"""Build containers and images.""" from io import StringIO from typing import List, Optional, TextIO from pydantic.dataclasses import dataclass from blowhole.core.config import ConfigModel from blowhole.core.image import BuildRecipe, RunRecipe from blowhole.core.module import Module @dataclass class EnvironmentRecipe: """Can be built into a container or image.""" build: BuildRecipe run: RunRecipe name: Optional[str] = None @property def dockerfile_str(self) -> str: """The Dockerfile string to build this environment.""" r = "" for c in self.build.commands: r += f"{c}\n" return r @property def dockerfile(self) -> TextIO: """The file-like Dockerfile to build this environment.""" return StringIO(self.dockerfile_str) @dataclass class EnvironmentDefinition(ConfigModel): """A definition of how to construct an envrionment.""" modules: List[Module] name: Optional[str] = None @property def recipe(self) -> EnvironmentRecipe: """Create a buildable Recipe from this definition.""" current_image = None build = BuildRecipe() run = RunRecipe() for m in self.modules: for c in m.components: if c.should_run(current_image): if isinstance(c.recipe, BuildRecipe): build += c.recipe elif isinstance(c.recipe, RunRecipe): run += c.recipe if c.results is not None: current_image = c.results return EnvironmentRecipe( build=build, run=run, name=self.name, )
JPA query run into oracle fails, depending on the values, with ORA-00600 I am working on an application that uses criteria builder to perform searches with multiple criteria. There is a bug: when selecting two specific criteria the application crashes. I used the show_sql property to display the query that is being performed in the database. I am getting this error: SQL Error: 600, SQLState: 60000 ORA-00600: internal error code, arguments: [kdsgrp1], [], [], [], [], [], [], [], [], [], [], [] The query is the one below: select count(ves0_.CODE) as col_0_0_ from VES ves0_ where ves0_.STARTDATE<=TO_DATE('29/10/2018', 'DD/MM/YYYY') and ves0_.ENDDATE>TO_DATE('29/10/2018', 'DD/MM/YYYY') and ves0_.LICENCE_IND='Y' and (exists (select ves1_.CODE from LICENSES ves1_ where ves0_.CODE=ves1_.CODE and nvl(ves1_.LICENSE_DATE_RENEWED, ves1_.LICENSE_DATE_ISSUED)<=TO_DATE('02/10/2018', 'DD/MM/YYYY') and ves1_.LICENSE_DATE_VALID_TO>TO_DATE('02/10/2018', 'DD/MM/YYYY') ) ) and ves0_.STARTDATE<=TO_DATE('29/10/2018', 'DD/MM/YYYY') and ves0_.ENDDATE>TO_DATE('29/10/2018', 'DD/MM/YYYY'); It seems that the problem is with the dates inside the exists clause. Some dates bring correct results without crashing the application; others, like the one above ('02/10/2018'), throw this error [60000][600] ORA-00600: internal error code, arguments: [kdsgrp1], [], [], [], [], [], [], [], [], [], [], [] when running it on SQL Developer, just like the one in the application. Is there a problem with the query? Why does it work for some values and not for others? Are the data causing the error? I ran some other queries and there do not seem to be any differences in the data across different date ranges. Please help. Thanks in advance.
EDIT: I have the same problem with another query in the same application: select count(ves0_.CODE) as col_0_0_ from VES ves0_ where ves0_.STARTDATE<=TO_DATE('29/10/2018', 'DD/MM/YYYY') and ves0_.ENDDATE>TO_DATE('29/10/2018', 'DD/MM/YYYY') and ves0_.UPDATE_IND='RET' I posted that because it is a simpler case. This time, when I change the UPDATE_IND, the query crashes with the same error. UPDATE_IND can only take eight different values. I tested all of them: the query runs for six of them and causes ORA-00600 for the other two. You should create a service request to Oracle support. It seems like you are facing the bug described in Causes and Solutions for ora-600 [kdsgrp1] (Doc ID 1332252.1), but you should at least verify it with your DBA.
Intralattice is a plugin for Grasshopper used to generate solid lattice structures within a 3D design space. It was developed as an extensible, open-source alternative to current commercial solutions. As an ongoing project developed at McGill’s Additive Design & Manufacturing Laboratory (ADML), it has been a valuable research tool, serving as a platform for breakthroughs in multi-scale design and optimization. By giving you full access to the source, we hope to collectively explore lattice design at a deeper level, and consequently, engineer better products. The rise of additive manufacturing (i.e. 3D printing) has allowed engineers to integrate new orders of complexity into their designs. In that regard, this software generates lattice structures as a means to: - Reduce volume/weight while maintaining structural integrity. - Increase surface area as a means of maximizing heat transfer. - Generate porosity in bone scaffolds and implants. - Serve as a platform for structural optimization. In doing so, it should always output a watertight mesh suited for 3D printing. The primary goal of Intralattice is to serve as a research platform where flexibility and versatility are paramount. Users are given the freedom to define custom unit cells, choose from diverse lattice mapping methods, and set the thickness of struts individually. Moreover, the benefits of being integrated into the CAD software Rhinoceros are evident. Renderings of solid models output (in .stl format) by Intralattice are shown below. The core of Intralattice is concerned with the geometric modeling of solid lattice structures. The generative process is split into 3 consecutive modules: the cell module, which generates a unit cell, the frame module, which generates a lattice wireframe within a design space, and the mesh module, which converts the wireframe (list of curves) to a solid mesh that can be exported as a .STL and 3D printed. A fourth, optional utility module can be used for pre/post-processing.
In Grasshopper, the generative algorithm will look something like this: The components available for each module are summarized below. For more information, refer to the User Docs.

Cell
+ PresetCell - Library of preset unit cells.
+ CustomCell - Formats custom unit cells.

Frame
+ BasicBox - Generates a simple lattice box.
+ BasicCylinder - Generates a simple lattice cylinder.
+ ConformSS - Generates a conformal lattice between 2 surfaces.
+ ConformSA - Generates a conformal lattice between a surface and an axis.
+ ConformSP - Generates a conformal lattice between a surface and a point.
+ UniformDS - Generates a trimmed uniform lattice within a Mesh/Brep design space.

Mesh
+ Homogen - Generates a homogeneous mesh for a set of curves.
+ HeterogenGradient - Generates a gradient-based heterogeneous mesh.
+ HeterogenCustom - Generates a custom heterogeneous mesh.

Utility
+ AdjustUV - Adjusts the UV-map of a surface.
+ CleanNetwork - Cleans a curve network by removing duplicates, within tolerance.
+ MeshReport - Returns a comprehensive report regarding solidity of the mesh.
+ MeshPreview - Generates a preview of the mesh.

The opti modules are concerned with mechanical optimization. The first set of components provides interfaces with various FEA software, such as Nastran and Hyperworks. Optimization algorithms have been developed for properties ranging from structural strength to heat transfer. These components will be released shortly.
Get that Citrix Receiver account pop-up off my screen I’m finishing a VMware Horizon project soon; I've been working with a customer since April to merge four organizations into one. Bumps and hurdles on our road, but we’re almost there. The people in the organization have links to other organizations, and for some reason they need access to those environments. Also, we haven’t succeeded in merging all the applications yet; some are so old they need to be re-designed. For all these reasons users need access to remote environments. These remote environments are either Citrix (two of them) or VMware (two of them). So for the Citrix ones we needed a Receiver in the image, but you don’t want the damn pop-up every morning, as we deployed a VMware environment. So our first task, one gaining importance every day, was to get this pop-up off the screen. I read some blogs about this, but the information we gathered there was wrong. So I thought of writing a small article showing how we fixed this (together with my colleague @pvdnborn). The pop-up you will see is the following: The reason you get the pop-up is that you tried to suppress the account creation of the Citrix Receiver. Let me show you where that goes wrong and how to fix it. It’s pretty simple, but you need to change a bit. The one reason you get this message is that you installed the Citrix Receiver with the option "/ALLOWADDSTORE=N". This prevents the wizard that would complete the creation of an account from running. You get the pop-up because you don’t allow this; the assumption with this installation is that you already configured it some other way. Here, in my situation, we use the Receiver as a dumb client to help users connect to a remote environment, just reacting to a request when they log on to a web page. One other option to get rid of this pop-up is renaming some files, but that seems like a non-working solution.
So to fix this you need to add the following key to the image you deploy; we used the Computer Group Policy we have running to set this for all VDI desktops. This will allow the Citrix Receiver to start popping up the wizard again. Of course we had to reinstall the Citrix Receiver for this to happen, but it is the best way. Next up is the user setting to suppress the wizard from popping up for the users. This is the other way around: not disabling account creation through the install, but disabling the wizard from showing up at logon. Next up, as mentioned before, we manage the user environment to hold off the account creation. We use RES ONE Workspace for this and it inserts the registry settings at logon, just in time to hold off the Citrix Receiver account creation. ..and that’s how we roll… no more pop-up.
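The post doesn't reproduce the actual registry value it deployed. Purely as an illustrative sketch - the value name EnableX1FTU below is an assumption based on commonly cited Citrix Receiver settings for the first-time-use "Add Account" wizard, so verify the exact name and location against the Citrix documentation for your Receiver version before using it - a machine-wide .reg fragment might look like:

```reg
Windows Registry Editor Version 5.00

; ASSUMPTION: EnableX1FTU is the commonly cited value for controlling the
; Receiver first-time-use "Add Account" wizard. Verify against Citrix
; documentation for your Receiver version before deploying.
[HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\Receiver]
"EnableX1FTU"=dword:00000000
```

A Group Policy Preferences registry item, as used in the article, can push the same value to all VDI desktops without touching the image by hand.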
There is a large number of ordinary differential equations (ODEs) that characterize the electrical behavior generated by ionic movements in human myocardial cells. In this paper, several approaches were investigated in order to improve the efficiency of solving the ODE systems for the ten Tusscher et al. model. Ordinary Differential Equations: Applications, Models, and Computing (Textbooks in Mathematics) is one of the best-selling books in its field; the explanations and sentences are easy to understand, and readers pick up the essential material comfortably. The explanation is engaging and flows well enough that readers may be eager to continue after the first page. Over the last hundred years, many techniques have been developed for the solution of ordinary differential equations and partial differential equations. While quite a major portion of these techniques is only useful for academic purposes, there are some which are important in the solution of real problems arising from science and engineering. In this chapter, only very limited techniques for solving ordinary differential and partial differential equations are discussed, as it is impossible to cover all the available techniques even in book form. Readers are then encouraged to pursue further studies on this issue if necessary.
After that, the readers are introduced to two major numerical methods commonly used by engineers for the solution of real engineering problems. Dynamical Systems - Analytical and Computational Techniques. This course for junior and senior math majors uses mathematics, specifically ordinary differential equations as used in mathematical modeling, to analyze and understand a variety of real-world problems. Among the civic problems explored are specific instances of population growth and over-population, over-use of natural resources leading to extinction of animal populations and the depletion of natural resources, genocide, and the spread of diseases, all taken from current events. While mathematical models are not perfect predictors of what will happen in the real world, they can offer important insights and information about the nature and scope of a problem, and can inform solutions. The course format is a combination of lecture, seminar and lab. Beyond the capacity to solve mathematical problems, students are expected to be able to communicate their findings clearly, both verbally and in writing, and to explain the mathematical reasoning behind their conclusions. Learning is assessed through pre- and post-tests and a variety of assignments, including short response papers, quizzes, and a final group project involving an oral report and a paper. Solution of Differential Equations with Applications to Engineering Problems. The principal areas of interest of the journal include modeling using PDEs. An introductory chapter gives an overview of scientific computing, indicating its important role in solving differential equations and placing the subject in the larger environment, and contains an introduction to numerical methods.
Using a series of examples, including the Poisson equation, the equations of linear elasticity, and the incompressible Navier-Stokes equations, the use of numerical methods to solve partial differential equations is motivated by giving examples from the Earth sciences. I know of no current textbooks on computational physics using Python, but there are several good books that make use of other languages. Practical examples of partial differential equations; derivation of partial differential equations from physical laws; introduction to MATLAB and its PDE Toolbox, and COMSOL, using practical examples; an overview of finite difference and finite element solution methods; specialized modeling projects in topics such as groundwater modeling. Ordinary differential equations and banded matrices: this first post outlines some background by describing how banded matrices can be used for solving ordinary differential equations (ODEs). System of linear equations: linear algebra to decouple equations. The natural variables become useful in understanding not only how thermodynamic quantities are related to each other, but also in analyzing relationships between measurable quantities. Just as biologists have a classification system for life, mathematicians have a classification system for differential equations. In this post, we will talk about separable equations. Many real-world problems can be represented by first-order differential equations. Overview of applications of differential equations in real-life situations. For this problem a state-space representation was easy to find.
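As a minimal illustration of the kind of population model discussed above (my own sketch, not taken from any of the books mentioned), the logistic growth equation dP/dt = rP(1 - P/K) can be integrated numerically with the classical fourth-order Runge-Kutta method and checked against its closed-form solution; the rate r, capacity K, and step count below are arbitrary choices for the demo:

```python
import math

def logistic_rhs(p, r=0.5, k=100.0):
    """Right-hand side of the logistic equation dP/dt = r*P*(1 - P/K)."""
    return r * p * (1.0 - p / k)

def rk4(f, p0, t_end, n_steps=1000):
    """Integrate dP/dt = f(P) from t=0 to t_end with classical RK4."""
    dt = t_end / n_steps
    p = p0
    for _ in range(n_steps):
        k1 = f(p)
        k2 = f(p + 0.5 * dt * k1)
        k3 = f(p + 0.5 * dt * k2)
        k4 = f(p + dt * k3)
        p += (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return p

def logistic_exact(p0, t, r=0.5, k=100.0):
    """Closed-form solution P(t) = K / (1 + (K/P0 - 1) * exp(-r*t))."""
    return k / (1.0 + (k / p0 - 1.0) * math.exp(-r * t))

p_numeric = rk4(logistic_rhs, p0=10.0, t_end=10.0)
p_analytic = logistic_exact(10.0, 10.0)
# With 1000 steps, RK4 agrees with the analytic solution to many digits.
print(p_numeric, p_analytic)
```

Because the logistic equation is separable, the exact solution used for the check falls out of direct integration, which makes it a convenient benchmark for any numerical scheme.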
using Microsoft.VisualStudio.TestTools.UnitTesting;
using System;

namespace mmixal.test.integration
{
    /// <summary>
    /// Tests that assemble mmixal and run the output against the mmix simulator.
    /// </summary>
    [TestClass]
    public class IntegrationTests
    {
        [TestMethod]
        public void MmixalNoArgs()
        {
            using var mmixalProcess = new ExternalProgram().Mmixal();
            mmixalProcess.Start();
            string output = mmixalProcess.StandardOutput.ReadToEnd();
            mmixalProcess.WaitForExit();

            Assert.AreEqual(-1, mmixalProcess.ExitCode);
            Console.WriteLine("MMIXAL output:");
            Console.WriteLine(output);
        }

        [TestMethod]
        public void MmixNoArgs()
        {
            using var mmixProcess = new ExternalProgram().Mmix();
            mmixProcess.Start();
            string output = mmixProcess.StandardOutput.ReadToEnd();
            mmixProcess.WaitForExit();

            Assert.AreEqual(-1, mmixProcess.ExitCode);
            Console.WriteLine("MMIX output:");
            Console.WriteLine(output);
        }

        [TestMethod]
        public void HelloWorldTest()
        {
            using var mmixalProcess = new ExternalProgram().Mmixal();
            mmixalProcess.StartInfo.Arguments = "programs/hello.mms";
            mmixalProcess.Start();
            string mmixalOutput = mmixalProcess.StandardOutput.ReadToEnd();
            mmixalProcess.WaitForExit();

            Console.WriteLine("MMIXAL output:");
            Console.WriteLine(mmixalOutput);
            Console.WriteLine();
            Console.WriteLine("-----------");
            Console.WriteLine();

            Assert.AreEqual(0, mmixalProcess.ExitCode);

            string mmixOutput;
            using var mmixProcess = new ExternalProgram().Mmix();
            mmixProcess.StartInfo.Arguments = "programs/hello.mmo";
            mmixProcess.Start();
            mmixOutput = mmixProcess.StandardOutput.ReadToEnd();
            mmixProcess.WaitForExit();

            Console.WriteLine("MMIX output:");
            Console.WriteLine(mmixOutput);
            Console.WriteLine();
            Console.WriteLine("-----------");
            Console.WriteLine();

            Assert.AreEqual(0, mmixProcess.ExitCode);
        }

        [TestMethod]
        public void HelloWorldWhitespacesTest()
        {
            using var mmixalProcess = new ExternalProgram().Mmixal();
            mmixalProcess.StartInfo.Arguments = "programs/hello-whitespaces.mms";
            mmixalProcess.Start();
            string mmixalOutput = mmixalProcess.StandardOutput.ReadToEnd();
            mmixalProcess.WaitForExit();

            Console.WriteLine("MMIXAL output:");
            Console.WriteLine(mmixalOutput);
            Console.WriteLine();
            Console.WriteLine("-----------");
            Console.WriteLine();

            Assert.AreEqual(0, mmixalProcess.ExitCode);

            string mmixOutput;
            using var mmixProcess = new ExternalProgram().Mmix();
            mmixProcess.StartInfo.Arguments = "programs/hello-whitespaces.mmo";
            mmixProcess.Start();
            mmixOutput = mmixProcess.StandardOutput.ReadToEnd();
            mmixProcess.WaitForExit();

            Console.WriteLine("MMIX output:");
            Console.WriteLine(mmixOutput);
            Console.WriteLine();
            Console.WriteLine("-----------");
            Console.WriteLine();

            Assert.AreEqual(0, mmixProcess.ExitCode);
        }

        [TestMethod]
        public void FindPrimesTest()
        {
            using var mmixalProcess = new ExternalProgram().Mmixal();
            mmixalProcess.StartInfo.Arguments = "programs/find-primes.mms";
            mmixalProcess.Start();
            string mmixalOutput = mmixalProcess.StandardOutput.ReadToEnd();
            mmixalProcess.WaitForExit();

            Console.WriteLine("MMIXAL output:");
            Console.WriteLine(mmixalOutput);
            Console.WriteLine();
            Console.WriteLine("-----------");
            Console.WriteLine();

            Assert.AreEqual(0, mmixalProcess.ExitCode);

            string mmixOutput;
            using var mmixProcess = new ExternalProgram().Mmix();
            mmixProcess.StartInfo.Arguments = "programs/find-primes.mmo";
            mmixProcess.Start();
            mmixOutput = mmixProcess.StandardOutput.ReadToEnd();
            mmixProcess.WaitForExit();

            Console.WriteLine("MMIX output:");
            Console.WriteLine(mmixOutput);
            Console.WriteLine();
            Console.WriteLine("-----------");
            Console.WriteLine();

            Assert.AreEqual(0, mmixProcess.ExitCode);
        }
    }
}
I suppose there is something wrong here. With the same array of resolutions in TileCache and Leaflet, Proj4Leaflet works perfectly with a portrait bbox, but suffers an offset if the bbox is landscape (width > height). You can find a complete example that demonstrates the problem: In the above example, I compute resolutions the same way as TileCache (they are printed out as proof). Using a static resolutions/scale array does not solve the problem. Yes, the bounding box is different: on the left, it's portrait (width < height), and on the right it's landscape. However, only the landscape bbox introduces offsets... Ok, looking at the code again, I can't really see how the aspect ratio of the bounding box could affect Proj4Leaflet - in fact, it is only the upper left corner of the bbox that is ever sent to Proj4Leaflet (as the origin). Since Proj4Leaflet never sees the bbox, how could it be affected by its aspect ratio? I wonder, might it be possible that the bbox coordinates are in fact not really aligned with your tilegrid? When I look at it, it does not look like the height of the bbox (in either one of them!) is a multiple of the tile size 256, which leads me to believe that these bboxes describe underlying data rather than the actual tileset bounds (which become greater if you round them up to the next tile boundary). Do you have any other clue as to what might be happening? The use of origin is not very complicated in the code, so I have a bit of a hard time thinking of anything going wrong there, although I might of course be missing some detail. Thanks a lot for your involvement! Indeed the tilegrid is not aligned between TileCache and Leaflet if the bbox height is smaller than the width. I'll investigate more about that rounding, thanks. BTW in @turban's article, his bbox is 60000 x 60000. (60000 % 256: 96)
For those reading this thread with curiosity (or despair!), I leave here some inspiring resources: I changed both bboxes to be multiples of 256, and it's no better: http://mathieu-leplatre.info/media/2013-proj4leaflet/ (the previous demo is here: http://mathieu-leplatre.info/media/2013-proj4leaflet/index-1.html) It's not a blocker anyway. I also had a hard time reading through the Leaflet tiling and projection code, but I can't figure anything out...
CNET has a blog post about McAfee's Big Blunder. Recently, McAfee pushed out a buggy software update. It didn't affect me, as I've long since banished McAfee and Norton from my network. I'm all for keeping my computer safe, but both are far too needy for attention, and demand too much of a performance sacrifice. I swore off Norton after their Norton Internet Security 2005. I had a copy of NIS 2003. After about 3 weeks, I'd start getting notices that Norton quarantined and deleted my email. I couldn't retrieve it. Then people would start asking me "Did you get my email?" Well, no. I uninstalled and reinstalled NIS 2003 and the same thing happened within 3 weeks. After my 3rd try, I uninstalled it and used the disk as a coaster after it wouldn't sell on Ebay. I got a laptop for Christmas 2005. I was just getting started at the University of Phoenix and my old P-II 233MHz Toshiba laptop couldn't keep up. I got a Compaq, which came with a 90-day trial of NIS 2005. It didn't last 2 months. NIS 2005 seemed to have to download updates every 20 minutes, and had to flash a status balloon on my taskbar to notify me of each individual step it was taking. The problem is, I like to hide my taskbar for screen real estate, so when an attention-needy program like NIS 2005 wants to keep telling me what it's doing, it gets in the way. It also started letting me know way too frequently that my 90-day trial was running out in 89 days, 88 days, etc. Go away, Norton. Nobody likes you. I tolerated McAfee for a while. I used to get it through Comcast. I believe Comcast has switched over to Norton by now. McAfee was a resource hog, especially when I had to get something done. I'd be working on a paper for class and my system would grind to a halt. I did what any good Windows user does when a system grinds to a halt: reboot. But my system took forever to come back up. I couldn't do anything with it. Then, finally, a balloon popped up saying "McAfee finished downloading important updates!"
like it was a good thing. Thanks, McAfee, for downloading updates in the middle of the day when there's work to be done. I honor the place where your designers' butts and my foot become one. Somewhere in the middle of 2007, Comcast notified me that a new version of McAfee was available. I updated all of my computers. Suddenly, they couldn't see each other. In the days when USB sticks and portable hard drives were far too expensive for me, I shared files between my computers across my network. Sometimes I'd work on a paper on my desktop, but take it with me on my laptop to school. Or if we were going to my in-laws' house, I'd take my laptop. If I couldn't get a paper between my computers, I had problems. I spent days trying to figure out why McAfee broke my shared folders, but I couldn't. I finally uninstalled it. No more McAfee. For now, I use Windows Defender and built-in browser security. I stay away from sites that are likely to have malware. I don't use Limewire, which is like asking malware to come on your computer. It's worked for me for several years.
Computer vision engineers cashing in on AI, robotics craze Editor’s note: This is the sixth article in a weekly series looking at the top jobs in data science and analytics. The first pieces looked at data engineers, data scientists, data analysts, data architects, and machine learning engineers. This article looks at the demand for and opportunities available to computer vision engineers. Job title: Computer vision engineer Reports to: The computer vision engineer will report into any number of people, depending on the business that they are employed at. Typically, you would report into a lead or principal computer vision engineer, or in the case of smaller businesses and start-ups, a director/head of computer vision or chief research officer perhaps. In the computer vision space there is no real trend as to whom you would report, and it will often differ quite heavily from business to business. Demand for this role: Computer vision is becoming huge across multiple industries and work areas. These include robotics, defense, industrial engineering, security and autonomous vehicles, and as robotics and AI grow in popularity and become more commonplace, the demand for expert computer vision engineers is growing simultaneously. Top industries hiring for this job: The top industries hiring for these types of workers are defense and aerospace, robotics, industrial engineering, security and autonomous vehicles, although there is a growing number of leading hedge funds using computer vision techniques to identify investment targets as well. Start-ups in this space - particularly in the NYC, Boston and Silicon Valley markets - are constantly being founded, and the computer vision industry is rapidly growing in popularity. Responsibilities with this job: In its simplest terms, the computer vision engineer will be building algorithms to analyze images and video, which in turn can be utilized in multiple ways.
For example, in the security camera space the algorithms will be used to track and monitor movement and recognize faces, while within virtual reality computer vision engineers will build algorithms to track hand movement; a hedge fund, by contrast, might use computer vision to analyze satellite imagery and identify the activity at investment target buildings. Required background for this job: Typically, a PhD is required for these types of opportunities, and top areas of study would include but are not limited to computer science, mathematics, image processing, physics, and even a specific computer vision course. That said, many businesses are looking at candidates with a Master’s degree in a relevant field. Skills required for this job (technical, business and personal): Regarding technical skills, most computer vision engineers would need to be strong coders in C++, C or Python, and exposure to deep learning frameworks such as Caffe, TensorFlow and Torch would be a huge bonus. Experience with or knowledge of tools such as OpenCV is also necessary in this field. On a personal level, computer vision engineers will be highly intelligent and motivated individuals, with an innate willingness to learn new technologies, always striving to be at the cutting edge of their field. Heavy research into computer vision is also a plus, with many employers looking at citations and specific research projects when hiring. Compensation potential for this job: An entry-level PhD/Master’s graduate could realistically expect a base salary of around $120,000 to $130,000 in New York, rising to around $160,000 to $180,000 after 5-6 years. Highly experienced candidates can realistically expect $200,000 to $250,000 in the current New York City job market. Success in this role defined by: Success in this role will be defined depending on the industry or business.
For example, in a hedge fund, if the individual builds algorithms that successfully provide information resulting in a large financial gain from an investment, you would consider that a job well done. Similarly, if they were able to build a cutting-edge platform that helps a car drive itself autonomously, then that would be deemed successful in that industry. Advancement opportunities for this job: The progression of computer vision engineers can differ drastically from business to business. Naturally, an individual could move into a principal-level role before heading up a dedicated computer vision function, or they may end up leading a cross-functional robotics group that incorporates multiple different functions. The individual would also be well placed to begin their own start-up in this space, as computer vision can be applied in so many various ways.
What is the Reformed Protestant understanding of God's Permissive Will, His Sovereign or Decretive Will and Efficacious Will? I have an assignment to explain the Reformed Protestant view of the differences between the Permissive, Sovereign (Decretive) and Efficacious Wills of God. My initial exploration into this subject has left me confused, mainly because there are so many different terms applied to the Will of God in its various forms. From what I've read, there is a view that the sovereign or decretive will of God can be divided into His efficacious will and His permissive will. Also, that in both the decretive and efficacious wills, God is directly responsible for causing His will to come to pass. I have already seen this Christianity Stack article, but it does not help me: How are the Decretive, Preceptive, and Permissive wills of God defined? One short article I found exposed some difficulties with the Permissive Will of God: The distinction between the sovereign will of God and the permissive will of God is fraught with peril and tends to generate untold confusion. In ordinary language, the term permission suggests some sort of positive sanction. To say that God “allows” or “permits” evil does not mean that He sanctions it in the sense that He approves of it. It is easy to discern that God never permits sin in the sense that He sanctions it in His creatures. What is usually meant by divine permission is that God simply lets it happen. That is, He does not directly intervene to prevent its happening. Here is where grave dangers lurk. Some theologies view this drama as if God were impotent to do anything about human sin. This view makes man sovereign, not God. God is reduced to the role of spectator or cheerleader, by which God’s exercise in providence is that of a helpless Father who, having done all He can do, must now sit back and simply hope for the best. He permits what He cannot help but permit because He has no sovereign power over it. 
This ghastly view is not merely a defective view of theism; it is unvarnished atheism. - Exposing the Permissive Will of God The more I read the more confused I become. Frankly, I'm out of my depth. I would be grateful if someone could at least point me in the right direction to find a straightforward and readily understood Reformed Protestant perspective on the relationship between the various aspects of God's will. Sproul's What Is Reformed Theology? has a good chapter on it, but it could be quite similar to the article you linked to. My answer to How can God be Sovereign (in the Reformed sense) if a man can ignore His call to repentance? covers a lot of this. To which theologies does "Some theologies view this drama as if God were impotent to do anything about human sin" refer? Any Calvinists know? @Sola Gratia - I honestly don't know. I suspect that the expression "Efficacious Will" is sometimes taken to mean "Efficacious Grace" or "Irresistible Grace". I found that in a Wiki article: https://en.wikipedia.org/wiki/Irresistible_grace Your quote source is part of R.C. Sproul's explanation of Permissive Will, so you will find the context by reading the complete article titled "Discerning God’s Will: The Three Wills of God". After quick and relatively straightforward definitions of the Decretive and Prescriptive wills, he then warns us of "the faulty distinction between willing and permitting" by quoting Calvin along with the Biblical support Calvin used for his thesis. Sproul then ends with a quote from St. Augustine. A related resource I found is a free book by R.C. Sproul, "Can I Know God's Will?", whose PDF I could also find here; in it he also discusses the Decretive and Preceptive Will of God but introduces "God's Will of Disposition" as roughly equivalent to his "permissive" will, similar to how this article also divides God's will into the same 3 categories.
That's in chapter 1 ("The Meaning of God's Will"), to which he adds chapter 2 ("The Meaning of Man's Will"), followed by 2 application chapters on job and marriage. Other related resources I think you'll find valuable are: Nathaniel's answer to a related question, which gives you the relevant Biblical verses for the Decretive and Preceptive Will of God based on R.C. Sproul and Charles Hodge; Charles Hodge's treatment of The Will of God in his famous (although dated) 1871 Systematic Theology Volume 1 Chapter V Section 9; and the blog article "A Theology of the Will of God" by Adam Setser, a Southern Baptist theology student, surveying different ways (other than decretive vs. preceptive) to view the two sides of the will of God within the Reformed tradition, nicely paraphrasing some of Hodge's treatment in 2015 English. I found Adam Setser's treatment to be the most helpful to address your confusion because he goes into the Latin words of 5 different pairings of tensions that Reformed theologians have identified over the centuries. I put in bold the terms you are including in your question. From the analysis below, I think the confusion comes from mixing terms from different pairings. The backbone: voluntas beneplaciti (will of desire) / voluntas decreti (will of decree) / voluntas occulta (secret will) VS. voluntas signi (signifying will) / voluntas revelata (revealed will) / voluntas praecepti (will of precept) (see subsection C in Hodge) Choice: voluntas absoluta VS. voluntas conditionalis (see subsection E in Hodge) Causation: voluntas antecedens VS. voluntas consequens (see subsection D in Hodge) Theodicy: voluntas efficiens (efficient will) VS. voluntas permittens (permitted will) If God’s will must account for all creation, what about evil, Satan, and hell? Voluntas efficiens: the “efficient will” refers to those aspects of His will that receive His full affirmation. Efficiens also means to create, cause, or produce.
So this is the product of His creation which obeys Him perfectly and gets His affirmation. Voluntas permittens: the “permitted will” refers to the part of creation which doesn’t get His affirmation. He still has willed its existence, but He isn’t pleased with it. He permits it to exist because it serves His purposes (e.g., Satan). Efficacy: voluntas efficax (effective will, efficacious) VS. voluntas inefficax (ineffective, HERETICAL!) If the above terms are not enough, there are a few more distinctions in the famous Reformed Systematic Theology textbook (1938) by Louis Berkhof: Part One (Doctrine of God), THE BEING OF GOD, Chapter VII (The Communicable Attributes), Section D (Attributes of Sovereignty), pages 82-87; HTML version here, PDF here. These 6 pages I think will be the best resource to integrate all the above distinctions into a coherent whole within the Reformed tradition, where Dr. Berkhof said that the antecedent/consequent and absolute/conditional pairings found little favor in Reformed theology while the decretive/preceptive, eudokia/eurestia, beneplacitum/signum, and secret/revealed pairings were more generally accepted. He discusses the issue of "permissive will" in a subsection titled "God's will in relation to sin". Finally, by necessity of being a textbook, Dr. Berkhof's treatment also contrasts the Reformed understanding of the sovereign power of God with some modern non-Reformed positions (Strauss, Schleiermacher). While the above resources are "certifiably" Reformed in theology (despite different theologians, e.g. Dr. Berkhof and Dr.
Hodge, organizing the discussion of God's will differently), I think the following article titled "Providence of God" from the 1997 Baker's Evangelical Dictionary of Theology should provide you with a larger context in which to place the will of God, a broader as well as more ancient perspective, contrasting how, in the OT, 2nd Temple, and NT traditions, Jewish and Christian believers viewed God's will in PERSONAL terms rather than the IMPERSONAL, fatalistic/deterministic way that adherents of Greco-Roman philosophy (such as the Stoics) viewed it. One way to consider God's will in an even larger context is of course by studying Louis Berkhof's textbook treatment of THE WORKS OF GOD, where he begins by discussing The Divine Decrees in General (Chapter I), Predestination (Chapter II), Creation (Chapters III to V), and ends with Providence (Chapter VI). I really appreciate the research and the links you have made available, and your explanation enables me to better understand this complex doctrine. Thanks. I hope I also speak for other contributors like yourself when I say that through answering we learn a lot as well. Your question was of high interest to me, so I tackled it as though it were my own. I grew up Reformed, so several pastors recommended Berkhof's Systematic Theology to me, but I have since tried to approach the Bible on its own terms, i.e. "Biblical Theology"; in other words, I have moved on to be more "evangelical", adopting a big-umbrella approach that also includes Lutheran, Methodist, Anglican, and even Eastern Orthodox. In researching the answer I also became aware that those distinctions are more or less human attempts to define God's will, so I would be more cautious and give more priority to how the original authors of the books of the Bible would attempt to understand God's will rather than us modern people imposing our categories on their thoughts. Thus, again, the Biblical Theology approach. Good luck!
#!/usr/bin/env python
# coding=utf-8
from os.path import join as pjoin
from typing import Union, List, Any, Sequence
from types import ModuleType
from numbers import Number
import inspect
from collections import namedtuple
import re

import yaml

from .backer import get_etc_path, get_trial_dict

__all__ = [
    'Configuration', 'load_configuration_from_yaml',
    'save_configuration_to_yaml', 'init_and_config', 'configure',
    'introspect_constructor'
]

_initialize_t = Union[List[Any], Any]


class Configuration:
    """Base class for all configurations"""

    def __init__(self, *args, **options):
        self.args = []
        self.options = {}
        self.set(*args, **options)

    def get(self, key, default_value=None):
        """Get option"""
        return self.options.get(key, default_value)

    def set(self, *args, **options):
        """Set arguments and options"""
        self.set_args(*args)
        self.set_options(**options)
        return self

    def filter_args(self, *args):
        """Keep only the arguments provided"""
        to_rm_args = [arg for arg in self.args if arg not in args]
        self.remove_args(*to_rm_args)
        return self

    def filter_options(self, *keys):
        """Keep only the options provided"""
        to_rm_opts = [opt for opt in self.options if opt not in keys]
        self.remove_options(*to_rm_opts)
        return self

    def set_args(self, *args):
        """Set arguments"""
        for a in args:
            self.args.append(a)
        return self

    def set_options(self, **options):
        """Set options"""
        for k, v in options.items():
            self.options[k] = v
        return self

    def remove(self, *params):
        """Remove arguments and options"""
        self.remove_args(*params)
        self.remove_options(*params)
        return self

    def remove_args(self, *args):
        """Remove arguments"""
        for arg in args:
            if arg in self.args:
                self.args.remove(arg)
        return self

    def remove_options(self, *keys):
        """Remove options"""
        for key in keys:
            if key in self.options:
                del self.options[key]
        return self

    def update(self, *args, **options):
        """Update arguments and options"""
        self.update_args(*args)
        self.update_options(**options)
        return self

    def update_args(self, *args):
        """Update arguments (append only the ones not already present)"""
        for arg in args:
            if arg not in self.args:
                self.args += [arg]
        return self

    def update_options(self, **options):
        """Update options"""
        self.options.update(options)
        return self

    def __str__(self):
        args = []
        for arg in self.args:
            if not isinstance(arg, (Number, str, Sequence)):
                args += [retrieve_name(arg)]
            else:
                args += [arg]
        # str() guards against numeric arguments, which str.join rejects
        string = " ".join(str(a) for a in args)
        for k, v in self.options.items():
            if not isinstance(v, (Number, str, Sequence)):
                v = retrieve_name(v)
            string += f" --{k} {v}"
        return string

    def clone(self):
        """
        Clone configuration

        Returns
        -------
        cfg : Configuration
        """
        cfg = Configuration()
        cfg.args = self.args.copy()
        cfg.options = self.options.copy()
        return cfg


def load_configuration_from_yaml(config_file: str):
    """
    Load configuration from YAML file and update trial information

    Parameters
    ----------
    config_file : str
        Path to yaml file with configurations.

    Returns
    -------
    cfg : Configuration
        Configuration object with parameters and options set according to
        yaml configuration file.
    """
    cfg = Configuration()
    with open(config_file) as handle:
        config_from_file = yaml.safe_load(handle)
    cfg.options = config_from_file
    if 'args' in cfg.options:
        cfg.set_args(*cfg.options['args'])
        cfg.remove_options('args')
    # Get trial information from file name
    trial = get_trial_dict(config_file)
    cfg.set_options(trial=trial)
    return cfg


def save_configuration_to_yaml(cfg: Configuration, save_dir: str = None):
    """
    Save configuration options to YAML file

    Parameters
    ----------
    cfg : Configuration
    save_dir : str
        Path to directory where YAML configuration file will be saved
    """
    # if save_dir is None, save configuration file into etc directory:
    if save_dir is None:
        save_dir = get_etc_path(cfg.config)
    cfg_filename = cfg.options["trial"]["ID"] + ".yml"
    cfg_path = pjoin(save_dir, cfg_filename)
    with open(cfg_path, 'w') as handle:
        yaml_object = {'args': cfg.args, **cfg.options}
        yaml.dump(yaml_object, handle, default_flow_style=False)


def init_and_config(module: ModuleType, constructor_type: str,
                    cfg: Configuration, *args, **kwargs) -> Any:
    available_types = ['transforms', 'dataset', 'dataloader', 'network',
                       'loss', 'lr_scheduler', 'optimizer', 'metrics']
    if get_generic_type(constructor_type) not in available_types:
        raise KeyError(f'constructor type must be one of {available_types}')
    if constructor_type not in cfg.options:
        msg = 'Constructor type missing from provided configuration file'
        raise AttributeError(msg)
    if not isinstance(cfg.options[constructor_type], list):
        recipes = [cfg.options[constructor_type]]
    else:
        recipes = cfg.options[constructor_type]
    cfg_list = [Configuration(**recipe) for recipe in recipes]
    instances = [initialize(module, constructor_type, cfg_, *args, **kwargs)
                 for cfg_ in cfg_list]
    configured_instances = [configure(instance, cfg=cfg_, *args, **kwargs)
                            for (instance, cfg_) in zip(instances, cfg_list)]
    if len(configured_instances) == 1:
        return configured_instances[0]
    # Return the whole list when several recipes were configured
    return configured_instances


# TODO find module and args and kwargs of init method by introspection
def initialize(module: ModuleType, constructor_type: str, cfg: Configuration,
               *args, **kwargs) -> _initialize_t:
    """
    Helper to construct an instance of a class from Configuration object

    Parameters
    ----------
    module : ModuleType
        Module containing the class to construct.
    constructor_type : str
        A key of cfg.options. One of 'transforms', 'dataset', 'dataloader',
        'network', 'loss', 'lr_scheduler', 'optimizer', 'metrics'
    cfg : Configuration
        Object with the positional arguments (cfg.args) and keyword
        arguments (cfg.options) used to construct the class instance.
    args : list
        Runtime positional arguments used to construct the class instance.
    kwargs : dict
        Runtime keyword arguments used to construct the class instance.

    Returns
    -------
    _ : Any
        Instance of module.
    """
    constructor_name = cfg.get('type', None)
    cfg_args = cfg.get('args', [])
    cfg_opts = cfg.get('options', {})
    cfg_ = Configuration(*args, **kwargs)
    cfg_.update(*cfg_args, **cfg_opts)
    argspec = introspect_constructor(constructor_name, module)
    # TODO figure out how to filter arguments properly
    # if argspec.varargs is None:
    #     # Then the class does not support variable arguments
    #     cfg_.filter_args(*argspec.args)
    if argspec.keywords is None:
        # Then the class does not support variable keywords
        cfg_.filter_options(*argspec.defaults.keys())
    return get_instance(module, constructor_name, *cfg_.args, **cfg_.options)


def introspect_constructor(class_name: Any, module_name: str = None):
    # Resolve the class either by name from the module, or take it directly
    if module_name:
        cls = getattr(module_name, class_name)
    else:
        cls = class_name
    sig = inspect.signature(cls.__init__)
    # Keep defaults as a (possibly empty) dict so the membership tests and
    # .keys() calls below never hit None
    defaults = {
        p.name: p.default
        for p in sig.parameters.values()
        if p.kind == inspect.Parameter.POSITIONAL_OR_KEYWORD
        and p.default is not p.empty
    }
    args = [
        p.name for p in sig.parameters.values()
        if p.kind == inspect.Parameter.POSITIONAL_OR_KEYWORD
        and p.name != 'self'
    ]
    # Only keep the non default parameters
    args = list(filter(lambda arg: arg not in defaults, args))
    varargs = [
        p.name for p in sig.parameters.values()
        if p.kind == inspect.Parameter.VAR_POSITIONAL
    ]
    varargs = varargs[0] if varargs else None
    keywords = [
        p.name for p in sig.parameters.values()
        if p.kind == inspect.Parameter.VAR_KEYWORD
    ]
    keywords = keywords[0] if keywords else None
    argspec = namedtuple('Signature',
                         ['args', 'defaults', 'varargs', 'keywords'])
    return argspec(args, defaults, varargs, keywords)


def get_instance(module: ModuleType, constructor_name: str,
                 *args, **kwargs) -> Any:
    """
    Helper to construct an instance of a class.

    Parameters
    ----------
    module : ModuleType
        Module containing the class to construct.
    constructor_name : str
        Name of class, as would be returned by ``.__class__.__name__``.
    args : list
        Positional arguments used to construct the class instance.
    kwargs : dict
        Keyword arguments used to construct the class instance.
    """
    return getattr(module, constructor_name)(*args, **kwargs)


def configure(obj: Any, *args, **options) -> Any:
    # Create object Configuration attribute with object default params, if any
    if not hasattr(obj, 'cfg'):
        if not hasattr(obj, 'default_params'):
            obj.cfg = Configuration()
        else:
            default_args, default_options = obj.default_params()
            obj.cfg = Configuration(*default_args, **default_options)
    cfg = options.pop('cfg', None)
    # Update params with supplied params, if any
    obj.cfg.update(*args, **options)
    # Update params with supplied Configuration, if any
    if cfg and isinstance(cfg, Configuration):
        obj.cfg.update(*cfg.get('args', []), **cfg.get('options', {}))
    if obj.__class__.__name__ == 'AugmentationFactory':
        obj.cfg.set(*obj.cfg.options['augmentations'])
        obj.cfg.remove_options('augmentations', 'train')
    return obj


def get_generic_type(given_name: str) -> str:
    obj_types = ['transforms', 'dataset', 'dataloader', 'network', 'loss',
                 'lr_scheduler', 'optimizer', 'metrics']
    pattern = '(' + "|".join(obj_types) + ')'
    regex = re.compile(pattern)
    obj_type = regex.search(given_name)
    if obj_type:
        # If there is a match, it's in a singleton tuple
        obj_type = obj_type.groups()[0]
    return obj_type


def retrieve_name(var: Any) -> str:
    """
    Gets the name of var. Does it from the outermost frame inner-wards.

    Parameters
    ----------
    var: Any
        Variable to get name from.

    Returns
    -------
    _ : str
        Variable given name
    """
    for fi in reversed(inspect.stack()):
        names = [var_name for var_name, var_val in fi.frame.f_locals.items()
                 if var_val is var]
        if len(names) > 0:
            return names[0]
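The parameter-classification trick used by introspect_constructor can be sketched in isolation. The following is a hypothetical, self-contained re-sketch (Demo and sketch_introspect are illustrative names, not part of this module) that only assumes the standard library's inspect:

```python
import inspect
from collections import namedtuple

Signature = namedtuple('Signature', ['args', 'defaults', 'varargs', 'keywords'])

def sketch_introspect(cls):
    """Classify a class's __init__ parameters by kind, roughly as the
    introspection helper above does (illustrative re-sketch only)."""
    sig = inspect.signature(cls.__init__)
    params = [p for p in sig.parameters.values() if p.name != 'self']
    # Named parameters that carry a default value
    defaults = {p.name: p.default for p in params
                if p.kind == inspect.Parameter.POSITIONAL_OR_KEYWORD
                and p.default is not p.empty}
    # Required positional parameters (no default)
    args = [p.name for p in params
            if p.kind == inspect.Parameter.POSITIONAL_OR_KEYWORD
            and p.name not in defaults]
    # *args / **kwargs catch-alls, if declared
    varargs = next((p.name for p in params
                    if p.kind == inspect.Parameter.VAR_POSITIONAL), None)
    keywords = next((p.name for p in params
                     if p.kind == inspect.Parameter.VAR_KEYWORD), None)
    return Signature(args, defaults, varargs, keywords)

class Demo:
    def __init__(self, x, y=2, *rest, **extra):
        pass

spec = sketch_introspect(Demo)
```

Feeding the constructor only the names that survive this classification is what lets the module above pass arbitrary YAML recipes to classes that do not accept **kwargs.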
I am trying to measure the performance of an OpenGL application (also using Cg fragment/vertex shader elements). At the moment I am using the Pentium assembly ‘rdtsc’ instruction, which simply returns the cycle count, calling it around the rendering loop as follows:
start = rdtsc();
glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
glTexCoord2f(float(xsize), 0.0f); glVertex2f(1.0f, -1.0f);
glTexCoord2f(float(xsize), float(ysize)); glVertex2f(1.0f, 1.0f);
glTexCoord2f(0.0f, float(ysize)); glVertex2f(-1.0f, 1.0f);
end = rdtsc();
time = time + (end - start);
Doing this I am getting some (perhaps unbelievably) fast rendering times - are there any other considerations that I need to take into account? I am not rendering to texture - and I ensure the window is fully visible on the screen during execution (to prevent clipping). Thank you for your help in advance. How fast are the times? On what GPU? How big is that quad (approximate number of pixels on screen)? Basically your code looks ok - you did use glFinish after the test. I would also put a glFinish command before the start = rdtsc(); line to ensure no pending OpenGL operation will cause longer times. I can see that you add (end - start) to time. I assume this is part of some loop that repeats this test a few times. Do you have your time variable initialized properly, and are start, end and time variables of a proper type to use with rdtsc? Also try glGetError to check if your code executed properly. It makes no difference if your window is visible if you render to the backbuffer. Perhaps you have your modelview/projection matrices set up incorrectly, or you have culling / clipping enabled - do you actually see that quad when you swap buffers? If you’re on Mac OS X then you can use the OpenGL Profiling tool. Sorry if you’re not. Thank you both for your replies. The performance I am getting is 2.3ms for a quad of size 1024*1024 (about 400-500 MP/s). (For a 3x3 mask filter.) This is on a GeForce 6800 GT GPU.
You are correct - I loop the rendering 50 times to get an average value for throughput time. Before the loop, time is initialised to 0. I will try double buffering today as you suggest and will check the other parameters you mention. Oh, to answer the other post - I am implementing this on a Windows platform. Thank you again for your assistance.
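To sketch the advice from this thread: bracketing the draw with glFinish() makes rdtsc measure completed GPU work rather than just command submission. This is only an illustrative sketch (time_gpu_pass is a hypothetical helper; the GL calls are commented out so it builds without a GL context, and __rdtsc is the GCC/Clang x86 intrinsic - MSVC has it in <intrin.h>):

```cpp
#include <cstdint>
#include <x86intrin.h>  // __rdtsc() intrinsic on x86 GCC/Clang

// glFinish() blocks until every queued GL command has completed, so a
// glFinish before each rdtsc read ensures the cycle delta brackets the
// actual rendering work, not just the time to enqueue the commands.
static uint64_t time_gpu_pass() {
    // glFinish();          // drain any pending GL work BEFORE starting
    uint64_t start = __rdtsc();
    // ... glTexCoord2f/glVertex2f calls for the quad would go here ...
    // glFinish();          // wait for the GPU to finish this pass
    uint64_t end = __rdtsc();
    return end - start;
}
```

Averaging this delta over many iterations, as done in the thread, then divides out per-call jitter; just remember rdtsc counts CPU cycles, so the delta must be divided by the clock rate to get wall time.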
There were nine pins on this LCD unit salvaged from an AT&T CL84209 cordless phone system. I think I’ve figured out five of them: 3.3V supply, ground, enable, I2C clock, and I2C data. The remaining four are still unknown, but I noticed a correlation between their start-up behavior and the first control message sent to its I2C address of 0x3E. Perhaps those pins are not all input pins? To get more data, I’m going to repeat the experiment of connecting two LCDs in parallel. One was still mounted on a remnant of a CL84209 handset circuit board, and the other was a bare module from the base station. The first time I tried this harebrained scheme, I connected all nine pins. Both displays ran, but at much lower contrast than usual. This time, I won’t connect all the pins. Power supply, ground, and enable pins should be safe to connect in parallel. I2C data and clock lines are designed to connect to multiple devices in parallel, so electrically speaking it should be fine as well. But I2C is not designed to have two devices responding to the same address connected together. The protocol has no provision for such conflict resolution, and usually an I2C address conflict leads to chaos. The only reason we can get away with it this time is because these two screens respond identically (or close enough) so there aren’t conflicts to resolve. With five out of nine pins connected in parallel, I will experiment to see what happens when I selectively connect the rest. For reference, here is the behavior recorded from the handset running standalone: Channels 4 and 7 should rise to 5V, channel 5 should be an 8kHz square wave between 3.3V and 5V, and channel 6 should be another 8kHz square wave from 0V to 3.3V in phase with channel 5. Looking at this plot, I think I need to connect at least a pin to serve as a 5V power supply. Pin 6 (channel 4) and pin 9 (channel 7) are both candidates for 5V supply, since they both rise to 5V once things started up.
Before the startup procedure, pin 6 (channel 4) received 3.3V as soon as the system received power, while pin 9 (channel 7) stayed at 0V until the startup procedure. Based on this difference, I think pin 6 (channel 4) is the better candidate to try, so I added a jumper wire to connect that to the handset circuit board. It’s alive! And better yet, display contrast on both screens appears normal. Not faded-out as I saw in the previous parallel test with all nine pins connected. Normal display contrast supports the hypothesis that the 8kHz voltage square waves were used for LCD segments, and furthermore this test implies those square waves were generated by the onboard controller. When these two LCDs were connected in parallel, their 8kHz voltage waves interfered with each other and lowered contrast on both. But if this is true, why do these signals need to be brought out as externally accessible pins? Why not just leave them internal? I don’t have a good guess for that. As an additional data point, I disconnected pin 6 and connected pin 9. If my hypothesis is correct that pin 6 is the 5V power supply, disconnecting it and putting 5V on pin 9 should mean the display stays blank. Nope! My hypothesis was wrong or at least incomplete. When I disconnected pin 6, the display went blank as expected. But when I connected pin 9, the display came back to life. Both pin 6 and pin 9 could apparently serve as 5V power supply. Which one should I use? Perhaps taking another set of analog traces could provide some insight.
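The reason the shared-bus gamble above works at all is that I2C is an open-drain, wired-AND bus: the line idles high through pull-ups, any device driving low wins, and two identical responders at one address drive the same bits so their outputs never fight. A toy model of that electrical behavior (illustrative only, no real hardware involved):

```python
def bus_level(driven_levels):
    """Open-drain bus: the wire is pulled up to 1, and any device
    driving 0 pulls the whole line low, so the observed bus level
    is the AND of every device's output."""
    level = 1  # pull-up resistor keeps the idle line high
    for d in driven_levels:
        level &= d
    return level

# Two identical devices at the same address drive the same bit pattern,
# so the wired-AND result equals what a single device would produce:
single = [bus_level([b]) for b in (0, 1, 0)]
doubled = [bus_level([b, b]) for b in (0, 1, 0)]
```

If the two devices ever disagreed on a bit (say, different register contents during a read), the 0 would silently win and the master would see corrupted data, which is exactly the "chaos" an address conflict normally causes.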
Young stars are typically surrounded by rotating disks of gas and dust, called protoplanetary, or protostellar, disks. These structures are, crucially, the reservoirs of material that go on to form planets, but when the planet-forming process begins is a major open question. In a paper in Nature, Segura-Cox et al.1 cast light on this mystery by reporting a series of rings and gaps in a protostellar disk that is so young that its birth cloud is still collapsing to form the star and disk. Such features are frequently attributed to planets carving lanes through the disk. Given that this is perhaps the youngest disk observed to have such features, the findings help to set the timescale for the emergence of planets and place key constraints on theories of how planets assemble. Planet formation is a complex process that involves tiny dust particles (less than one micrometre in size) accumulating until they become Earth-sized bodies, or even larger. The most popular theory that has been proposed to explain this process is core accretion2, in which the steady accrual of small particles produces pebbles, rocks, boulders and eventually planets. One potential problem with this scenario is that planet formation can be slow, which seems to be at odds with the observation that protostellar disks older than about one million years do not seem to have enough material in them to form planets3. Updates to the theory have been proposed to remedy this4,5, but, ultimately, the only way to refine models of core accretion is to determine how long it takes for planets to actually form. Naturally, the best way of doing this is to find baby planets in young disks. One approach for detecting baby planets is to find evidence of their influence on the structure of the disk in which they are embedded. 
In the past five years, the Atacama Large Millimeter/submillimeter Array (ALMA) observatory in Chile has provided a wealth of high-resolution imagery of protostellar disks older than one million years. An abundance of interesting ‘substructures’ has indeed been found6. Most common are the narrow bright and dark rings that might be signs of a planet carving out gaps in the disk as it circles the star — although other features, such as spirals or large asymmetries in the distribution of material in the disk, have also been observed. One striking result is that these substructures, which might be related to planet formation, seem to exist in almost every protostellar disk that has been imaged with sufficiently high resolution to detect them7. The ‘planets carving gaps’ scenario would imply that planets can be formed in about one million years. The frequency with which planets have been detected in disks that are more than one million years old raises questions about the earliest time at which planets, or at least disk substructures, can form. A few examples of younger disks (500,000–1,000,000 years old) with substructures have been found8, but these are not much younger than the substructured systems that were observed before them. Segura-Cox and colleagues’ work pushes past this age limit, finding the first clear evidence of narrow bright and dark ring-like substructures in a disk that is less than 500,000 years old. This puts new constraints on when such features can form, and on when planets can be formed (if, indeed, planets explain the presence of these features). In the ALMA image of this young disk, the authors found two dark rings accompanied by two corresponding bright rings (see Fig. 1 of the paper1), similar to what has been observed in older disks.
Although these subtle features can be picked out from the image with a careful eye, the authors also enhanced them by subtracting a model of a smooth disk from their image, thereby sharpening the non-smooth features. In doing so, they also found evidence that the disk might be subtly asymmetric. What is interesting is how different these features look from those found in older disks: the rings are quite shallow, and very difficult to pick out, in contrast to the prominent features found in older disks. A much larger set of observations of young disks is needed to see whether this is typical, but if it is, it could provide important clues about the planet-formation process and when it begins. Young planets might be expected to carve out large gaps, so perhaps Segura-Cox et al. have instead observed the build-up of dusty material into highly dense regions that would be conducive to planet formation. Some limitations of the new study should be kept in mind. Ages of young stars are notoriously difficult to measure, and, for systems this young, we are essentially limited to using indirect methods. Stars less than about one million years old are still embedded in the cloud of material that is collapsing to form the disk and star (Fig. 1). Because this envelope of material is expected to deplete over time, the amount of material in the envelope relative to that in the disk should be a gauge of the age of the system. Measurements of these amounts, however, are difficult to make, and provide an inherently imprecise measurement of age. For that reason, it is not clear exactly how much younger the features are, compared with those of previously reported disks, and therefore how much earlier in the lifetime of a disk system this evidence of planet formation is, compared with other reported systems. 
Moreover, although planets carving out gaps is the most exciting explanation for disk features such as those reported here, other explanations have been proposed, for example the sublimation of gases from dust grains as material travels inwards towards hotter regions in the disk9. This makes it difficult to ascribe these features uniquely to planets in young disks. Still, most of the potential causes of such features could contribute to the planet-formation process — and so, one way or another, Segura-Cox et al. are likely to have seen the beginnings of planet formation in action, in one of the youngest disks yet observed. Nature 586, 205-206 (2020)
package app

import (
	"context"
	"net/http"
	"net/url"
	"strconv"

	"github.com/acoshift/methodmux"
	"github.com/acoshift/prefixhandler"
	"github.com/moonrhythm/hime"
	"github.com/satori/go.uuid"

	"github.com/acoshift/acourse/internal/app/view"
	"github.com/acoshift/acourse/internal/pkg/context/appctx"
	"github.com/acoshift/acourse/internal/pkg/course"
	"github.com/acoshift/acourse/internal/pkg/me"
	"github.com/acoshift/acourse/internal/pkg/payment"
)

type (
	courseIDKey struct{}
	courseKey   struct{}
)

func newCourseHandler() http.Handler {
	c := courseCtrl{}

	mux := http.NewServeMux()
	mux.Handle("/", methodmux.Get(
		hime.Handler(c.view),
	))
	mux.Handle("/content", mustSignedIn(methodmux.Get(
		hime.Handler(c.content),
	)))
	mux.Handle("/enroll", mustSignedIn(methodmux.GetPost(
		hime.Handler(c.enroll),
		hime.Handler(c.postEnroll),
	)))
	mux.Handle("/assignment", mustSignedIn(methodmux.Get(
		hime.Handler(c.assignment),
	)))

	return hime.Handler(func(ctx *hime.Context) error {
		link := prefixhandler.Get(ctx, courseIDKey{})
		courseID := link

		_, err := uuid.FromString(link)
		if err != nil {
			// link cannot be parsed as a uuid; get the course id from the url
			courseID, err = course.GetIDByURL(ctx, link)
			if err == course.ErrNotFound {
				return view.NotFound(ctx)
			}
			if err != nil {
				return err
			}
		}

		x, err := course.Get(ctx, courseID)
		if err == course.ErrNotFound {
			return view.NotFound(ctx)
		}
		if err != nil {
			return err
		}

		// if the course has a url, redirect to the course url
		if l := x.Link(); l != link {
			return ctx.RedirectTo("app.course", l, ctx.URL.Path)
		}

		ctx = ctx.WithValue(courseKey{}, x)
		return ctx.Handle(mux)
	})
}

type courseCtrl struct{}

func (ctrl *courseCtrl) getCourse(ctx context.Context) *course.Course {
	return ctx.Value(courseKey{}).(*course.Course)
}

func (ctrl *courseCtrl) view(ctx *hime.Context) error {
	if ctx.URL.Path != "/" {
		return view.NotFound(ctx)
	}

	u := appctx.GetUser(ctx)
	c := ctrl.getCourse(ctx)

	enrolled := false
	pendingEnroll := false
	var err error
	if u != nil {
		enrolled, err = course.IsEnroll(ctx, u.ID, c.ID)
		if err != nil {
			return err
		}

		if !enrolled {
			pendingEnroll, err = payment.HasPending(ctx, u.ID, c.ID)
			if err != nil {
				return err
			}
		}
	}

	var owned bool
	if u != nil {
		owned = u.ID == c.Owner.ID
	}

	p := view.Page(ctx)
	p.Meta.Title = c.Title
	p.Meta.Desc = c.ShortDesc
	p.Meta.Image = c.Image
	p.Meta.URL = ctx.Global("baseURL").(string) + ctx.Route("app.course", url.PathEscape(c.Link()))
	p.Data["Course"] = c
	p.Data["Enrolled"] = enrolled
	p.Data["Owned"] = owned
	p.Data["PendingEnroll"] = pendingEnroll
	return ctx.View("app.course", p)
}

func (ctrl *courseCtrl) content(ctx *hime.Context) error {
	u := appctx.GetUser(ctx)
	x := ctrl.getCourse(ctx)

	enrolled, err := course.IsEnroll(ctx, u.ID, x.ID)
	if err != nil {
		return err
	}

	if !enrolled && u.ID != x.Owner.ID {
		return ctx.Status(http.StatusForbidden).StatusText()
	}

	contents, err := course.GetContents(ctx, x.ID)
	if err != nil {
		return err
	}

	// clamp the requested page index into [0, len(contents)-1]
	var content *course.Content
	pg, _ := strconv.Atoi(ctx.FormValue("p"))
	if pg < 0 {
		pg = 0
	}
	if pg > len(contents)-1 {
		pg = len(contents) - 1
	}
	if pg >= 0 {
		content = contents[pg]
	}

	p := view.Page(ctx)
	p.Meta.Title = x.Title
	p.Meta.Desc = x.ShortDesc
	p.Meta.Image = x.Image
	p.Data["Course"] = x
	p.Data["Contents"] = contents
	p.Data["Content"] = content
	return ctx.View("app.course-content", p)
}

func (ctrl *courseCtrl) enroll(ctx *hime.Context) error {
	u := appctx.GetUser(ctx)
	c := ctrl.getCourse(ctx)

	// redirect the owner to the course content
	if u != nil && u.ID == c.Owner.ID {
		return ctx.RedirectTo("app.course", c.Link(), "content")
	}

	// redirect an enrolled user to the course content page
	enrolled, err := course.IsEnroll(ctx, u.ID, c.ID)
	if err != nil {
		return err
	}
	if enrolled {
		return ctx.RedirectTo("app.course", c.Link(), "content")
	}

	// check whether the user has a pending enrollment
	pendingPayment, err := payment.HasPending(ctx, u.ID, c.ID)
	if err != nil {
		return err
	}
	if pendingPayment {
		return ctx.RedirectTo("app.course", c.Link())
	}

	p := view.Page(ctx)
	p.Meta.Title = c.Title
	p.Meta.Desc = c.ShortDesc
	p.Meta.Image = c.Image
	p.Meta.URL = ctx.Global("baseURL").(string) + ctx.Route("app.course", url.PathEscape(c.Link()))
	p.Data["Course"] = c
	return ctx.View("app.course-enroll", p)
}

func (ctrl *courseCtrl) postEnroll(ctx *hime.Context) error {
	u := appctx.GetUser(ctx)
	x := ctrl.getCourse(ctx)

	// redirect the owner to the course content
	if u != nil && u.ID == x.Owner.ID {
		return ctx.RedirectTo("app.course", x.Link(), "content")
	}

	// redirect an enrolled user to the course content page
	enrolled, err := course.IsEnroll(ctx, u.ID, x.ID)
	if err != nil {
		return err
	}
	if enrolled {
		return ctx.RedirectTo("app.course", x.Link(), "content")
	}

	// check whether the user has a pending enrollment
	pendingPayment, err := payment.HasPending(ctx, u.ID, x.ID)
	if err != nil {
		return err
	}
	if pendingPayment {
		return ctx.RedirectTo("app.course", x.Link())
	}

	f := appctx.GetFlash(ctx)

	price, _ := strconv.ParseFloat(ctx.FormValue("price"), 64)
	image, _ := ctx.FormFileHeaderNotEmpty("image")

	if price < 0 {
		f.Add("Errors", "จำนวนเงินติดลบไม่ได้") // "the amount cannot be negative"
		return ctx.RedirectToGet()
	}

	err = me.Enroll(ctx, x.ID, price, image)
	if err == me.ErrImageRequired {
		f.Add("Errors", "กรุณาอัพโหลดรูปภาพ") // "please upload an image"
		return ctx.RedirectToGet()
	}
	if err != nil {
		f.Add("Errors", "image required")
		return ctx.RedirectToGet()
	}

	return ctx.RedirectTo("app.course", x.Link())
}

func (ctrl *courseCtrl) assignment(ctx *hime.Context) error {
	u := appctx.GetUser(ctx)
	c := ctrl.getCourse(ctx)

	enrolled, err := course.IsEnroll(ctx, u.ID, c.ID)
	if err != nil {
		return err
	}

	if !enrolled && u.ID != c.Owner.ID {
		return ctx.Status(http.StatusForbidden).StatusText()
	}

	assignments, err := course.GetAssignments(ctx, c.ID)
	if err != nil {
		return err
	}

	p := view.Page(ctx)
	p.Meta.Title = c.Title
	p.Meta.Desc = c.ShortDesc
	p.Meta.Image = c.Image
	p.Meta.URL = ctx.Global("baseURL").(string) + ctx.Route("app.course", url.PathEscape(c.Link()))
	p.Data["Course"] = c
	p.Data["Assignments"] = assignments
	return ctx.View("app.course-assignment", p)
}
using Newtonsoft.Json;
using Plugin.Connectivity;
using SmartInfo.Info;
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

namespace SmartInfo.ClassesDeAcessoAPI
{
    public class Confirmacao
    {
        // Method to fetch the list of classes
        public async Task<List<tb_classe_Info>> ListaDeClasses()
        {
            List<tb_classe_Info> tb_Classe_Infos = null;
            try
            {
                var client = new HttpClient();
                //client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", ConfigSystem.Token);
                string URL = string.Concat(ConfigSystem.URLAPI, "/Classe");
                var uri = new Uri(URL);
                HttpResponseMessage response = await client.GetAsync(uri);
                // await instead of .Result to avoid blocking the async context
                var responseString = await response.Content.ReadAsStringAsync();
                var json = JsonConvert.DeserializeObject<List<tb_classe_Info>>(responseString);
                return json;
            }
            catch (JsonException)
            {
                return tb_Classe_Infos;
            }
            catch (HttpRequestException)
            {
                return tb_Classe_Infos;
            }
            catch (Exception)
            {
                return tb_Classe_Infos;
            }
        }

        // Method to fetch the list of courses
        public async Task<List<tb_curso_Info>> ListaDeCursos()
        {
            List<tb_curso_Info> tb_Curso_Infos = null;
            try
            {
                var client = new HttpClient();
                //client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", ConfigSystem.Token);
                string URL = string.Concat(ConfigSystem.URLAPI, "/Curso");
                var uri = new Uri(URL);
                HttpResponseMessage response = await client.GetAsync(uri);
                var responseString = await response.Content.ReadAsStringAsync();
                var json = JsonConvert.DeserializeObject<List<tb_curso_Info>>(responseString);
                return json;
            }
            catch (JsonException)
            {
                return tb_Curso_Infos;
            }
            catch (HttpRequestException)
            {
                return tb_Curso_Infos;
            }
            catch (Exception)
            {
                return tb_Curso_Infos;
            }
        }

        /// <summary>
        /// Method responsible for confirming a student
        /// </summary>
        public async Task<string> FazerConfirmacao(tb_confirmacao_Info tb_Confirmacao)
        {
            try
            {
                var connection = CrossConnectivity.Current.IsConnected;
                if (connection == false)
                {
                    return "Verifica a sua conexao de internet"; // "Check your internet connection"
                }
                //else if (string.IsNullOrEmpty(tb_Matricula.Altura) == true)
                //{
                //    return "Campo altura vazio prencha";
                //}
                //else if (string.IsNullOrEmpty(Convert.ToString(tb_Matricula.Data_De_Nascimento)) == true)
                //{
                //    return "Seleciona data de nascimento";
                //}
                //else if (string.IsNullOrEmpty(enderecorigem) == true)
                //{
                //    return "Campo endereco de origem vazio prencha o campo";
                //}
                //else if (string.IsNullOrEmpty(enderecodestino) == true)
                //{
                //    return "Campo endereco de destino vazio prencha o campo";
                //}
                //else if (string.IsNullOrEmpty(Convert.ToString(peso)) == true)
                //{
                //    return "Campo peso vazio prencha o campo";
                //}
                //else if (string.IsNullOrEmpty(idcategoriacarga) == true)
                //{
                //    return "Seleciona a categotria da carga";
                //}
                //else if (string.IsNullOrEmpty(idmunicipiorigem) == true)
                //{
                //    return "Seleciona o municpio de origem";
                //}
                //else if (string.IsNullOrEmpty(idmunicipiodestino) == true)
                //{
                //    return "Seleciona o municpio de destino";
                //}
                else
                {
                    var json = JsonConvert.SerializeObject(tb_Confirmacao);
                    var content = new StringContent(json, Encoding.UTF8, "application/json");
                    using (var client = new HttpClient())
                    {
                        string URL = string.Concat(ConfigSystem.URLAPI, "/Confirmacao");
                        var result = await client.PostAsync(URL, content);
                        // OK, BadRequest and every other status all return the
                        // response body, so read it once
                        var error = await result.Content.ReadAsStringAsync();
                        return error;
                    }
                }
            }
            catch (JsonException ex)
            {
                return ex.Message;
            }
            catch (HttpRequestException ex)
            {
                return ex.Message;
            }
            catch (Exception ex)
            {
                return ex.Message;
            }
        }
    }
}
The recent increase in the volume of online meetings necessitates automated tools for managing and organizing the material, especially when an attendee has missed the discussion and needs assistance in quickly exploring it.

no code implementations • 5 May 2022 • Julia Kiseleva, Ziming Li, Mohammad Aliannejadi, Shrestha Mohanty, Maartje ter Hoeve, Mikhail Burtsev, Alexey Skrynnik, Artem Zholus, Aleksandr Panov, Kavya Srinet, Arthur Szlam, Yuxuan Sun, Marc-Alexandre Côté, Katja Hofmann, Ahmed Awadallah, Linar Abdrazakov, Igor Churin, Putra Manggala, Kata Naszadi, Michiel van der Meer, Taewoon Kim

The primary goal of the competition is to approach the problem of how to build interactive agents that learn to solve a task while provided with grounded natural language instructions in a collaborative environment. Current interactive systems with a natural language interface lack the ability to understand a complex information-seeking request which expresses several implicit constraints at once, and there is no prior information about user preferences, e.g., "find hiking trails around San Francisco which are accessible with toddlers and have beautiful scenery in summer", where the output is a list of possible suggestions for users to start their exploration. Motivated by these two angles, we propose a new task: summarization with graphical elements, and we verify that these summaries are helpful for a critical mass of people.

no code implementations • 13 Oct 2021 • Julia Kiseleva, Ziming Li, Mohammad Aliannejadi, Shrestha Mohanty, Maartje ter Hoeve, Mikhail Burtsev, Alexey Skrynnik, Artem Zholus, Aleksandr Panov, Kavya Srinet, Arthur Szlam, Yuxuan Sun, Katja Hofmann, Michel Galley, Ahmed Awadallah

Starting from a very young age, humans acquire new skills and learn how to solve new tasks either by imitating the behavior of others or by following provided natural language instructions.
Enabling open-domain dialogue systems to ask clarifying questions when appropriate is an important direction for improving the quality of the system response. Both components of our graph induction solution are evaluated in experiments, demonstrating that our models outperform a state-of-the-art text generator significantly. The proposed backward reasoning step pushes the model to produce more informative and coherent content because the forward generation step's output is used to infer the dialogue context in the backward direction. Digital assistants are experiencing rapid growth due to their ability to assist users with day-to-day tasks where most dialogues are happening multi-turn. Motivated by our findings, we present ways to mitigate this mismatch in future research on automatic summarization: we propose research directions that impact the design, the development and the evaluation of automatically generated summaries. Reinforcement learning methods have emerged as a popular choice for training an efficient and effective dialogue policy. The main aim of the conversational systems is to return an appropriate answer in response to the user requests. Then, the traditional multi-label classification solution for dialogue policy learning is extended by adding dense layers to improve the dialogue agent performance. Effective optimization is essential for real-world interactive systems to provide a satisfactory user experience in response to changing user behavior. Reinforcement Learning (RL) methods have emerged as a popular choice for training an efficient and effective dialogue policy. We conclude that existing metrics of disentanglement were created to reflect different characteristics of disentanglement and do not satisfy two basic desirable properties: (1) assign a high score to representations that are disentangled according to the definition; and (2) assign a low score to representations that are entangled according to the definition. 
Obtaining key information from a complex, long dialogue context is challenging, especially when different sources of information are available, e.g., the user’s utterances, the system’s responses, and results retrieved from a knowledge base (KB). The performance of adversarial dialogue generation models relies on the quality of the reward signal produced by the discriminator.
Topics
- What is coding? (Morse code, ASCII)
- Explain digital and analogue definitions
- Advantages of digital vs. analogue
- Explain frequency, band, bandwidth, baud rate, bit rate and codec theory
- Explain why we need a modem

Propagation delay is the time taken by the signal to travel from the source to the destination: Time = Distance / speed of the signal. One way via a GEO satellite is about 22,280 miles (the GEO satellite distance is about 36,000 km). Satellites receive the incoming signal and transmit it back. Attenuation is a decrease in the magnitude of the power of the signal (a weakening of the signal).

What is coding?
A character is a symbol that has a common, constant meaning; a character could be "A", "B", "1" or "8".
- US-ASCII: a 7-bit code gives 128 valid characters and an 8-bit code gives 256 valid characters, e.g. A = 1000001 (7-bit ASCII code).
- EBCDIC: IBM's standard mainframe code, 8 bits, giving 256 valid character combinations.

Digital vs. analogue
A digital signal is discontinuous (it has gaps) and varies instantaneously, e.g. between zero and 1. Analogue data is in a continuous form; it varies smoothly. Electronically, an analogue signal is a voltage that is continuously going up and down. Analogue signals preserve all the information with no loss. It is, however, difficult and expensive to get a computer to work with analogue data, and to recreate the original data once it has been affected by random noise.

Advantages of digital transmission vs. analogue
- Less error, because there are only two distinct values
- More efficient transmission (switching, higher maximum transmission rate)
- More secure (easier to encrypt)
- Integration (of voice, data and video) is easier with digital transmission

Frequency, band and bandwidth
- Frequency is the number of cycles (waves) in an analogue signal that occur per second. It is measured in hertz (Hz); 1 Hz is one cycle or wave per second.
- A band is a range of frequencies. For example, the FM radio band covers the range of frequencies from 88 MHz to 108 MHz.
- Bandwidth is the width of the band: the difference between the highest and lowest frequencies in a band. In digital networks, bandwidth is used to refer to the amount of data that can be sent per second (bit rate). For example, each FM station has a bandwidth of 25 kHz, so an FM station broadcasting on 99 MHz (99,000 kHz) can broadcast a signal that varies between 98,987.5 kHz (LSB) and 99,012.5 kHz (USB).
- Humans can hear roughly 20 Hz to 14 kHz (varying up to 20 kHz). Voice-grade telephone circuits have a bandwidth of 300 Hz to 4 kHz for voice.

Baud rate
The baud rate is the number of times that a signal changes in one second, measured in bauds. For example, 1,200 baud means that the voltage in an electrical cable changes 1,200 times every second; 2,400 baud means that the voltage changes 2,400 times every second. The baud rate bounds how fast a modem can transmit data. Technically, the baud is the number of voltage or frequency changes that can be made in one second: when a modem is working at 300 baud, the basic carrier frequency has 300 cycles per second.

Bit rate
The bit rate is the number of bits of data sent per second, measured in bits per second (bps). A bit rate of 33,600 bps means that 33,600 bits are sent every second; a bit rate of 9.6 kbps (9,600 bps) means that 9,600 bits are sent every second.
Bit rate = baud rate × number of bits per signal element. For example, QAM carries 4 bits per signal element, so bit rate = baud rate × 4.

Codec
A codec converts analogue voice into digital form for transmission over a digital link (codecs are needed at both ends, and for switching over).

Modem
A modem changes the computer's digital signal to analogue for transmission over the telephone line (local loop), which is analogue. Modulation is the technique that modifies the form of an electric signal so that the signal can carry information on a communication link, e.g. changing a digital signal to analogue. A modem can adjust the capacity (by changing the bits per signal element) depending on the quality of the line.

Reading: Chapter 3 (96-106), relevant parts.

Review questions
- How does analogue data differ from a digital signal?
- Why do we need a modem?
- What is coding, and how many characters does 7-bit ASCII provide?
- Describe the advantages of digital over analogue.
- Explain the following terms:
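The delay and bit-rate formulas above can be made concrete with a small worked example (a sketch: the 36,000 km distance and the 4-bits-per-element QAM case come from the notes; the 300,000 km/s signal speed is the usual rounded speed of light, and the 2,400-baud figure is just an illustrative choice):

```python
# Worked numbers for: Time = Distance / speed, and
# bit rate = baud rate * bits per signal element.

GEO_DISTANCE_KM = 36_000     # approximate geostationary satellite distance
SIGNAL_SPEED_KM_S = 300_000  # speed of light, rounded

one_way_delay_s = GEO_DISTANCE_KM / SIGNAL_SPEED_KM_S  # 0.12 s one way
round_trip_s = 2 * one_way_delay_s                     # 0.24 s up and back

baud_rate = 2_400        # signal changes per second
bits_per_element = 4     # QAM as described: 4 bits per signal element
bit_rate_bps = baud_rate * bits_per_element  # 9,600 bps

print(one_way_delay_s, round_trip_s, bit_rate_bps)
```

This is why satellite links feel laggy even at high bit rates: the 0.24 s round trip is fixed by geometry, no matter how many bits each signal change carries.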
#include <sim_binary.h>

#error "FILE NOT USED"

static link_t* connectivity[TOSSIM_MAX_NODES];

static link_t* allocate_link(int mote);
static void deallocate_link(link_t* link);

link_t* sim_binary_first(int src) __attribute__ ((C, spontaneous)) {
  return connectivity[src];
}

link_t* sim_binary_next(link_t* link) __attribute__ ((C, spontaneous)) {
  return link->next;
}

void sim_binary_add(int src, int dest, double packetLoss) __attribute__ ((C, spontaneous)) {
  link_t* current;
  int temp = sim_node();
  sim_set_node(src);

  current = connectivity[src];
  while (current != NULL) {
    if (current->mote == dest) {
      break;
    }
    current = current->next;
  }

  if (current == NULL) {
    // Only link a newly allocated node into the list; relinking an
    // existing node at the head would corrupt the list.
    current = allocate_link(dest);
    current->next = connectivity[src];
    connectivity[src] = current;
  }
  current->mote = dest;
  current->loss = packetLoss;

  dbg("Binary", "Adding link from %i to %i with loss %f\n", src, dest, packetLoss);
  sim_set_node(temp);
}

double sim_binary_loss(int src, int dest) __attribute__ ((C, spontaneous)) {
  link_t* current;
  int temp = sim_node();
  sim_set_node(src);
  current = connectivity[src];
  while (current != NULL) {
    if (current->mote == dest) {
      sim_set_node(temp);
      return current->loss;
    }
    current = current->next;
  }
  sim_set_node(temp);
  return 1.0;
}

bool sim_binary_connected(int src, int dest) __attribute__ ((C, spontaneous)) {
  link_t* current;
  int temp = sim_node();
  sim_set_node(src);
  current = connectivity[src];
  while (current != NULL) {
    if (current->mote == dest) {
      sim_set_node(temp);
      return TRUE;
    }
    current = current->next;
  }
  sim_set_node(temp);
  return FALSE;
}

void sim_binary_remove(int src, int dest) __attribute__ ((C, spontaneous)) {
  link_t* current;
  link_t* prevLink;
  int temp = sim_node();
  sim_set_node(src);
  current = connectivity[src];
  prevLink = NULL;
  while (current != NULL) {
    if (current->mote == dest) {
      // Save the successor before freeing so the walk can continue
      // safely even when removing the head (prevLink == NULL).
      link_t* next = current->next;
      if (prevLink == NULL) {
        connectivity[src] = next;
      } else {
        prevLink->next = next;
      }
      deallocate_link(current);
      current = next;
    } else {
      prevLink = current;
      current = current->next;
    }
  }
  sim_set_node(temp);
}

static link_t* allocate_link(int mote) {
  link_t* link = (link_t*)malloc(sizeof(link_t));
  link->next = NULL;
  link->mote = mote;
  link->loss = 1.0;
  return link;
}

static void deallocate_link(link_t* link) {
  free(link);
}
I often face the issue of having to choose a number k of clusters. The partition I end up choosing is more often based on visual and theoretical concerns than on quality criteria. I have two main questions. The first concerns the general idea of cluster quality. From what I understand, criteria such as the "elbow" suggest an optimal value in reference to a cost function. The issue I have with this framework is that the optimal criterion is blind to theoretical considerations, in that there is some degree of complexity (related to your field of study) that you would always want in your final groups/clusters. Moreover, as explained here, the optimal value is also related to "downstream purpose" constraints (such as economic constraints), so consideration of what you are going to do with the clustering matters. One obvious constraint is the need to find meaningful/interpretable clusters, and the more clusters you have, the more difficult they are to interpret. But this is not always the case: very often I find that 8, 10 or 12 clusters are the minimum "interesting" number of clusters I would like to have in my analysis. However, criteria such as the elbow very often suggest far fewer clusters, generally 2, 3 or 4. Q1. What I would like to know is: what is the best line of argument when you decide to choose more clusters than the solution proposed by a certain criterion (such as the elbow)? Intuitively, more should always be better when there are no constraints (such as the intelligibility of the groups you get, or, in the coursera example, when you have a very large sum of money). How would you argue this in a scientific journal article? Another way to put this: once you have identified the minimum number of clusters (with these criteria), should you even have to justify why you picked more clusters than that? Shouldn't justification come only when choosing the minimal meaningful number of clusters? Q2.
Relatedly, I do not understand how certain quality measures, such as the silhouette, can actually decrease as the number of clusters increases. I don't see in the silhouette any penalisation for the number of clusters, so how can this be? Shouldn't cluster quality, in theory, keep rising as you add clusters?

# R code
library(factoextra)
library(gridExtra)  # needed for grid.arrange
data("iris")
ir = iris[, -5]

# Hierarchical clustering, Ward.D
# 5 clusters
ec5 = eclust(ir, FUNcluster = 'hclust', hc_metric = 'euclidean',
             hc_method = 'ward.D', graph = T, k = 5)
# 20 clusters
ec20 = eclust(ir, FUNcluster = 'hclust', hc_metric = 'euclidean',
              hc_method = 'ward.D', graph = T, k = 20)

a = fviz_silhouette(ec5)   # silhouette plot
b = fviz_silhouette(ec20)  # silhouette plot
c = fviz_cluster(ec5)      # scatter plot
d = fviz_cluster(ec20)     # scatter plot
grid.arrange(a, b, c, d)
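To see concretely why the silhouette can fall as k grows, here is a small self-contained sketch (plain Python rather than the R used above, with made-up one-dimensional data; singleton clusters score 0, following the usual convention):

```python
def mean_silhouette(points, labels):
    """Mean silhouette width; singleton clusters score 0 by convention."""
    groups = {}
    for p, l in zip(points, labels):
        groups.setdefault(l, []).append(p)

    n = len(points)
    scores = []
    for i in range(n):
        same = [points[j] for j in range(n) if j != i and labels[j] == labels[i]]
        if not same:
            scores.append(0.0)  # singleton cluster
            continue
        a = sum(abs(points[i] - q) for q in same) / len(same)  # cohesion
        b = min(  # separation: mean distance to the *nearest* other cluster
            sum(abs(points[i] - q) for q in grp) / len(grp)
            for lab, grp in groups.items() if lab != labels[i]
        )
        scores.append((b - a) / max(a, b))
    return sum(scores) / n

pts = [0.0, 0.1, 0.2, 10.0, 10.1, 10.2]
s2 = mean_silhouette(pts, [0, 0, 0, 1, 1, 1])   # the natural 2-cluster split
s3 = mean_silhouette(pts, [0, 1, 1, 2, 2, 2])   # needlessly splits the left cluster
# s2 (about 0.99) is higher than s3 (about 0.58): adding a cluster
# *lowered* the score, because splitting a tight cluster shrinks b(i)
# much faster than it shrinks a(i).
```

So no explicit penalty term is needed: because b(i) is measured against the nearest other cluster, oversplitting creates a very close neighbouring cluster, which drags b(i) down towards a(i) and pushes the silhouette towards zero.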
- Various fixes for seg faults
- Some fixes for very large panoramas
- Fixed a couple of buffer overflows

v2.0 is an almost complete rewrite.
- Fix for memalign on MacOS
- Now properly ignores Enblend's -f parameter
- Rewritten, better quality blending engine
- No limit to number of images
- Disk caching for large mosaics
- Improved boundary wrapping
- Image position adjustment via command line parameters

Multiblend is a multi-level image blender for the seamless blending of image mosaics, such as those created with Hugin, PTAssembler, or PTGui. It is a significantly faster drop-in alternative to Enblend, although it lacks some of Enblend's advanced features.

Usage: Multiblend [options] [-o OUTPUT] INPUT [X,Y] [INPUT] [X,Y] [INPUT]...

  --levels X / -l X      X: set number of blending levels to X
                         -X: decrease number of blending levels by X
                         +X: increase number of blending levels by X
  --depth D / -d D       Override automatic output image depth (8 or 16)
  --bgr                  Swap RGB order
  --wideblend            Calculate number of levels based on output image
                         size, rather than input image size
  -w, --wrap=[mode]      Blend around image boundaries (NONE (default),
                         HORIZONTAL, VERTICAL); when specified without a
                         mode, defaults to HORIZONTAL
  --compression=X        Output file compression.
                         For TIFF output, X may be: NONE (default),
                         PACKBITS, or LZW.
                         For JPEG output, X is JPEG quality (0-100,
                         default 75).
                         For PNG output, X is PNG filter (0-9, default 3)
  --cache-threshold=X[K/M/G]
                         Allocate memory beyond X bytes/[K]ilobytes/
                         [M]egabytes/[G]igabytes to disk
  --no-dither            Disable dithering
  --tempdir <dir>        Specify temporary directory (default: system temp)
  --save-seams <file>    Save seams to PNG file for external editing
  --load-seams <file>    Load seams from PNG file
  --no-output            Do not blend (for use with --save-seams);
                         must be specified as last option before input
                         images
  --bigtiff              BigTIFF output
  --reverse              Reverse image priority (last=highest) for
                         resolving
  --quiet                Suppress output (except warnings)
  --all-threads          Use all available CPU threads
  [X,Y]                  Optional position adjustment for previous input
                         image

- Consistent and integrated seaming

Enblend blends input images one at a time, calculating and optimising a new seam line for each new input image against the intermediate output image so far. Not only is this slow (due to the repeated seaming and Enblend's use of an exact but complex algorithm for seam generation), it also makes the routes of Enblend's seams (and the degree to which images are blended) dependent on the order in which input files are provided to it. In contrast, Multiblend calculates a unique composite seam for all images simultaneously, using a faster algorithm. Multiblend, however, doesn't optimise seams as Enblend does.

(…and 716 other possibilities)

Enblend similarly expends a lot of CPU cycles generating a full intermediate image for each new input image, with each intermediate image taking longer to generate than the last as the output image grows. This is where Multiblend really wins out, as all images are (effectively) simultaneously blended. In testing, Multiblend is about 10x faster for small mosaics.
Due to Enblend's O(n²) time complexity, compared to Multiblend's O(n) linear time complexity, this speed advantage increases to 300x for a gigapixel mosaic. Based on this difference in scaling, Multiblend would take less than a day to blend a terapixel (1,000-gigapixel) mosaic; Enblend would take 360 years.

- Smoother, more consistent blending

The following images show how Multiblend blends images more smoothly than Enblend. Other differences in implementation mean that Multiblend is able to blend images which have very little overlap, or even none at all (including images which have gaps between them). Enblend blends images sequentially, with the amount of blending seeming to depend on both the amount of overlap and the current size of the intermediate mosaic. This means that later images get blended more than earlier images do. Multiblend blends all images by the same amount.

Although Enblend employs dithering to reduce banding, it appears to be broken, as residual banding is still detectable in output images. Additionally, Enblend's dithering is random and non-deterministic, which means that running Enblend twice on the same inputs will not produce identical output, which is potentially problematic. Multiblend uses ordered dithering and does not suffer from either of these issues.

- Image formats / position adjustment

Multiblend supports TIFF, JPEG, and PNG images for both input and output. By specifying a coordinate pair after an input image, that image's position in the mosaic can be adjusted. This allows mosaics to be made from JPEGs or cropped PNGs, neither of which can natively (to the best of my knowledge) provide position information. Multiblend will transparently cache data to disk if necessary, making it capable of blending much larger mosaics than Enblend.

- Multiblend doesn't optimise seams, as Enblend does by default.

Multiblend was inspired by Enblend.
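The O(n²)-versus-O(n) claim can be illustrated with a toy cost model (an illustrative sketch, not Multiblend's actual code): count how many image-sized blend passes each strategy performs.

```python
def sequential_blend_cost(n):
    # Enblend-style: step i blends the new image against an intermediate
    # mosaic built from the previous i images, so total work grows like
    # 1 + 2 + ... + (n - 1), which is O(n^2).
    return sum(range(1, n))

def simultaneous_blend_cost(n):
    # Multiblend-style: every input image is (effectively) visited once,
    # which is O(n).
    return n

# The gap widens with mosaic size:
# 10 images -> 45 vs 10 passes; 1,000 images -> 499,500 vs 1,000 passes.
```

Under this model the speedup ratio itself grows linearly with the number of images, which is consistent with a modest advantage on small mosaics becoming a very large one on gigapixel work.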
Multiblend uses libtiff: Copyright © 1988-1997 Sam Leffler Copyright © 1991-1997 Silicon Graphics, Inc. Multiblend uses libjpeg-turbo. Multiblend uses libpng. If you find any bugs or have any questions about Multiblend you can email me at firstname.lastname@example.org
Protected USB storage is a very good idea for keeping mobile data safe. Several USB storage devices are now available with fingerprint scanners to secure data. Trek's ThumbDrive Swipe is another, but with a couple of interesting twists. The most obvious difference is that the ThumbDrive Swipe does not use a standard fingerprint sensor, but requires the user to slide his finger over a narrow strip which scans the print in passing. This makes it impossible for an attacker to "lift" a print off the device. The sensor is concealed in a sliding housing for protection, making the whole package a neat little unit not much bigger than standard USB memory devices. Another difference is that the drive does not require a driver. Instead, a login executable resides in the unencrypted ("public") area of the drive, which, when run, opens a screen with login and management options. An identical executable (just renamed "logout") resides in the encrypted part of the drive and is then used to disconnect. Software is provided on the device for Windows 2000 and XP, and versions for 95/98 and (we were delighted to see) for Mac OS X are available for download from the Trek website. When run for the first time, the login software asks for master registration, at which point the admin username, fingerprint and optional password are set. This went well enough, though it took a lot of practice scans to get the knack of using the scanner. In fact we never did get it to read a finger, finding it instead much more successful in scanning thumbprints, which worked just fine. We do wonder if some users may find it frustrating to master. And once the master registration is complete, you cannot change your mind about wanting a password or not, nor can the master fingerprint be changed. Up to three additional users can be registered, and the system automatically detects which user is present when the finger is scanned.
By now we'd got the knack of scanning thumbs, and enrolling other users was quick and easy. The split between encrypted and public space can be set to any size, though at least 2MB must be public. This is because when the drive reformats to create the partitions, it rewrites the login application too (and the logout one, if you have assigned any encrypted space), which is a nice touch. The master user must log in to resize the partitions. The software started feeling fiddly at this point – because the login and logout executables are the same, if it detects an existing login, the software offers only the option to log out. So you have to log out and then rerun the application in order to get to the user registration and repartitioning options. Logging in is quick and easy, but the sensor dialogue, once opened, provides no way to exit without logging in. Despite using a standalone application and not a driver to access the drive, I/O performance did not seem affected at all. No documentation is provided in the box, but detailed PDFs are to be found on the company's website. We'd have liked the URL for that to be clearly marked on the packaging: not all users would have the patience or web savvy to trawl around a site looking for drivers and docs. We like the ThumbDrive Swipe a lot. The software niggles are minor and the sensor, though it took a bit of getting used to, works very well. You are paying a premium for the biometric component, but we think it is well worth it.
Sessions, Workspaces, and Projects Here, we have listed seven tools and applications you can use to compare two files. These tools follow a specific algorithm that compares the two files simultaneously. Once the comparison process is complete, it provides you with a detailed report on any found differences. For example, you may have two people working on a similar project and you want to compare the text line by line. Select Compare, and wait for the program to run the data through its tool. Right from writing to editing text online, sharing, encryption, and downloading, every feature is absolutely free. This online writing notepad allows you to write blogs and articles while staying organized. This online WordPad can create a simple and editable to-do list for daily tasks. You can get a new window by clicking on the clone button. SourcePawn syntax highlighting and autocompletion are also supported. Yes, this tool is available to use on mobile devices too, making it suitable for just about everyone. Access all Notud features for 14 days, then decide which plan best suits you. Notud lets you master your notes and increase productivity, and allows you to mark them up with text, arrows or images. - The instructions below will show you how to use a free recovery software called Recuva www.parkingya.es/blog/does-notepad-have-a-dark-mode-in-windows-10 to recover permanently deleted notepad text files in Windows 11. - XBrackets Lite is one of the plugins worth mentioning among the most important plugins of Notepad++. - PyCharm is an Integrated Development Environment utilized in programming, specifically for the Python language. - Even with all these criteria in place, I still tested close to 40 different note apps for taking notes online.
CACHE files aren't meant to be opened directly, because the program that uses them will read them when it needs to and discard them when necessary. Some of these files can get pretty large, depending on the program and data you're working with. If any of the above fixes seems not to be working for you, try uninstalling Notepad first, then go to the Apps menu and add Notepad back as a feature. Notepad often faces complications if another text-editing application interferes; if that is the problem, try uninstalling those applications first. Handwrite Notes Online You can share a checklist and timeline so your team can give progress updates in real time. If you start working on a document and realize it needs input from the team, you should be able to get it on your notepad instantly. "Until you crossed my path today, looking for a note pad, it did not occur to me to use an online note program. It was like an epiphany: I was blind but now I see." Right-click the target file and select Open with Notepad++, then save it to a secure location. Choose the files you want to recover by placing a checkmark on their left. Google's Android OS doesn't come with a default notepad solution.
2 weeks ago I participated in the first MITPro LiveMeeting-enabled monthly meeting. I know I'm late reporting back to you, but I've been touring the countryside with TechDays 2008 (four cities down, two to go…). At that meeting we had a panel discussion on "virtualization". Well, the discussion was lively, the attendees were participating, and all in all it was a very good experience. I had a good discussion with Daniel Nerenberg, the president of MITPro, after the meeting over some pints, and I asked him to write down his thoughts so that we could tell you about it in his own words. Here's what he had to say about the experience: "What an exciting night! First I want to thank my speakers Pierre, Bill, and Mitch. You guys did a great job, and everyone was excited to see Mitch up on the big screen. This was also a challenging night; we were trying out several different concepts and ideas. The first concept was a round table/panel style meeting. The second concept was linking the meeting up with Live Meeting. The panel meeting format worked great from my perspective. We had 3 very knowledgeable presenters who understood how virtualization was impacting the areas of the business world they worked in. The Live Meeting aspect presented several unforeseen challenges that delayed the start of our meeting a bit, and I had to tweak the Live Meeting settings throughout the presentation in order to give the online participants the proper access. What we learned: Preparation time: Even though I tested my setup at home, setting up the camera and microphone gear onsite required more time than I originally anticipated. I would recommend setting up and testing 1 hour before attendees arrive. Equipment checks: Make sure to test out equipment over a long period of time. I tested a camera for a minute on LiveMeeting only to find out that it went into a sleep mode every 5 minutes, which interrupted the LiveMeeting video feed.
Have backups and extension cords: We couldn't use the presentation room's speaker system; fortunately I brought a set of speakers as backup. And finally, bring lots of long connector wires for anything you need to connect. Extension power cords are also recommended. Overall I am very happy with our first roundtable discussion. The MITPro board is reviewing all of the feedback we received and we are looking forward to making the next presentation even better." So here you go. I know I said that we would try to get the LiveMeeting recording online, but the difficulties of the evening prevented us from accomplishing our goal. Next time. Let's think of this meeting as "MITPro Virtual Meeting, Beta 1": it works, with just a few issues to iron out. So, go out there, don't be afraid to try new things, stretch your thinking and let us know how it goes. Like Bruce Wayne's father said, "Why do we fall? So we can learn how to pick ourselves up." (I'm such a geek…)
The execution of program blocks is controlled by events. CAPL programs are developed and compiled in a dedicated browser. This makes it possible to access all of the objects contained in the database (messages, signals, environment variables) as well as system variables. In three consecutive articles, CAPL fundamentals will be discussed as well as tips for all levels of user knowledge. This article focuses on the basics of CAPL. It is primarily intended for those who are new to this language; however, it also offers a few insights for well-informed users into the motivation behind individual CAPL constructs.

If you want to send an extended frame, the IDE bit in the arbitration field should be recessive (1). Hi, for CAN extended messages put an "x" after the identifier. Hi, can anyone help with how to extract data from a text file and send it in a message in CAPL? The text file contains the message bytes which are supposed to be sent on the bus, like 00 00 00 00. Hi, you have mentioned here that we can create cyclic events with timers, but didn't give an example of that. If I reset the timer in my "on timer" procedure using setTimer, how will the program go back to the beginning of my "on timer" to run the code there again? This is not working for me; it only ever runs once. Also, how can I put a timer in a for loop? Can someone write me a script for this in CAPL? Kindly help; I really need it, it's a priority now. It will be a big help if anyone can do it. RQ6: The default value of the output frequency is 10 Hz. RQ7: If the input signal frequency is in error (0), the output is disabled. RQ8: If the input signal frequency is in range, the output frequency is the default. Make the test specification for RQ1-RQ8. I have found this post helpful. I have a question, though: how can you send a cyclic message between two signals?
How can I program this so that I can send the message with both signals alternating for a specific number of times? Is it possible to convert a string into bytes in CAPL? If yes, please let me know how; for example, my string is "10 01". How do I compare my received message to something to verify whether what I've received is correct or not? Hello, I want to change a message ID at run time using CAPL scripting. I know this is not a proper scenario, but right now I am working with this kind of requirement. Please let me know if you have some idea on it. Thank you, Akash Shah.

CAPL Basics. These program blocks are known as event procedures. The program code that you define in event procedures is executed when the event occurs. For example, you can send a message on the bus in response to a key press (on key), track the occurrence of messages on the bus (on message), or execute certain actions cyclically (on timer). A CAPL program consists of two parts: declaring and defining global variables, and declaring and defining user-defined functions and event procedures. CAPL programs have three distinct parts: global variable declarations, event procedures, and user-defined functions. Data types available for variables include integers (dword, long, word, int, byte, char), floating point numbers (float and double), CAN messages (message) and timers (timer or msTimer). Except for the timers, all other variables can be initialized in their declarations. With the exception of timers, the compiler initializes all variables with default values unless otherwise defined: 0. CAPL permits the declaration of arrays (vectors, matrices), analogous to their declaration in the C programming language. The complete declaration includes the message identifier or, when working with symbolic databases, the message name. For example, you might write the following to output messages on the bus that have identifier 0xA0 (160 decimal) or the message EngineData defined in the database.
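A sketch of such a declaration block and output (the raw identifier 0x1A0 and the key 'e' are arbitrary choices for illustration; EngineData is the database message named above, so this assumes a database containing it):

```capl
variables
{
  message EngineData msgEngine;  // symbolic name taken from the database
  message 0x1A0 msgRaw;          // raw 11-bit identifier
  message 0x1A0x msgRawExt;      // trailing "x" marks an extended (29-bit) identifier
}

on key 'e'
{
  msgRaw.dlc = 2;        // data length code: 2 data bytes
  msgRaw.byte(0) = 0x01; // fill the payload
  msgRaw.byte(1) = 0xFF;
  output(msgRaw);        // transmit on the bus
  output(msgEngine);     // the symbolic message is sent the same way
}
```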
It is possible to access control information for the CAN message objects using the following component selectors. You can react to the following events in CAPL using event procedures: initialization of measurement before measurement start (the receiving chip is not considered). With on key procedures you can execute certain actions by key press. The code for a key press can either be input as a character, a number or a predefined name for a function key. Note: remember that environment variables are only enabled in CANoe. This facility can be used to create a cyclic event if you reset the timer at the end of the timer event procedure. Timers can also be used to respond to an event after a delay. The setTimer function takes two parameters: the name of the timer and the length of time to set the timer. The length-of-time parameter has different units depending on what kind of timer you are using. For a timer, the units are seconds; for an msTimer, the units are milliseconds. The maximum values are 65,535 seconds and 65,535 milliseconds, respectively. The cancelTimer function can be called on a timer before it has expired to prevent the timer event from triggering. Calling the cancelTimer function has no effect if the timer is not set or has already expired. Example: set a timer to 20 ms. If they are defined, each is called once per measurement. You use this procedure to read data from files, initialize variables, or write to the Write window. Other actions, such as outputting a message onto the bus, are not available in the preStart event. Generally, actions that are invalid in the preStart event procedure can be moved to the start event procedure. After the preStart event procedure has completed executing, the start event procedure is executed if one exists. The start event procedure can be used to initialize environment variables, set timers, and output messages onto the bus. The measurement is also started at this time.
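One of the commenters above asks for a concrete cyclic-timer example. A minimal sketch (the message msgStatus and the 100 ms period are hypothetical): re-arming the timer at the end of its own event procedure is what makes the event cyclic.

```capl
variables
{
  msTimer tCycle;           // millisecond-resolution timer
  message 0x100 msgStatus;  // hypothetical status message
}

on start
{
  setTimer(tCycle, 100);    // first expiry 100 ms after measurement start
}

on timer tCycle
{
  output(msgStatus);        // the cyclic action
  setTimer(tCycle, 100);    // reset => "on timer" fires again in 100 ms
}
```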
You can use this procedure to print statistics in the Write window, output messages onto the bus, or write to a log file. After this event has finished executing, the measurement is stopped. The keyword this is used to refer to the data structure of an object within an event procedure for receiving a CAN object or environment variable. When information only needs to be transferred on an event basis, the event message is used. When information requires transferring on a repetitive basis, the periodic message is used. When information requires transferring on a repetitive basis only while a certain set of conditions is true, the conditionally periodic message is used. A number of run-time errors are monitored in CAPL. If a run-time error is detected, the intrinsic function runError is called. This outputs the location of the particular CAPL source text which caused the error, and the measurement is terminated. The function runError can also be called directly by the user to generate assertions. Typical uses of CAPL: create a black box to simulate the rest of the network. Create a module simulator. Simulate event messages, periodic messages, or conditionally repetitive messages. Simulate human events like button presses using the PC keyboard. Simulate timed node or network events. Create a functional gateway between two different CAN networks. CAPL Programming. Tips and Tricks for the Use of CAPL
import copy

import numpy as np

from dnpy.layers import Layer
from dnpy import initializers, utils


class Dropout(Layer):

    def __init__(self, l_in, rate=0.5, name="Dropout"):
        super().__init__(name=name)
        self.parents.append(l_in)
        self.oshape = self.parents[0].oshape
        self.rate = rate
        self.gate = None

    def forward(self):
        if self.training:
            self.gate = (np.random.random(self.parents[0].output.shape) > self.rate).astype(float)
            self.output = self.parents[0].output * self.gate
        else:
            # Non-inverted dropout: scale at inference by the keep probability
            self.output = self.parents[0].output * (1 - self.rate)

    def backward(self):
        self.parents[0].delta = self.delta * self.gate


class GaussianNoise(Layer):

    def __init__(self, l_in, mean=0.0, stddev=1.0, name="GaussianNoise"):
        super().__init__(name=name)
        self.parents.append(l_in)
        self.mean = mean
        self.stddev = stddev
        self.oshape = self.parents[0].oshape

    def forward(self):
        if self.training:
            noise = np.random.normal(loc=self.mean, scale=self.stddev,
                                     size=self.parents[0].output.shape)
            self.output = self.parents[0].output + noise
        else:
            self.output = self.parents[0].output

    def backward(self):
        self.parents[0].delta = np.array(self.delta)


class BatchNorm(Layer):

    def __init__(self, l_in, momentum=0.99, bias_correction=False,
                 gamma_initializer=None, beta_initializer=None, name="BatchNorm"):
        super().__init__(name=name)
        self.parents.append(l_in)
        self.oshape = self.parents[0].oshape

        # Params and grads
        self.params = {'gamma': np.ones(self.parents[0].oshape),
                       'beta': np.zeros(self.parents[0].oshape),
                       'moving_mu': np.zeros(self.parents[0].oshape),
                       'moving_var': np.ones(self.parents[0].oshape),
                       }
        self.grads = {'gamma': np.zeros_like(self.params["gamma"]),
                      'beta': np.zeros_like(self.params["beta"])}

        self.cache = {}
        self.fw_steps = 0

        # Constants
        self.momentum = momentum
        self.bias_correction = bias_correction

        # Initialization: gamma
        if gamma_initializer is None:
            self.gamma_initializer = initializers.Ones()
        else:
            self.gamma_initializer = gamma_initializer

        # Initialization: beta
        if beta_initializer is None:
            self.beta_initializer = initializers.Zeros()
        else:
            self.beta_initializer = beta_initializer

    def initialize(self, optimizer=None):
        super().initialize(optimizer=optimizer)

        # Initialize params
        self.gamma_initializer.apply(self.params, ['gamma'])
        self.beta_initializer.apply(self.params, ['beta'])

    def forward(self):
        x = self.parents[0].output

        if self.training:
            mu = np.mean(x, axis=0, keepdims=True)
            var = np.var(x, axis=0, keepdims=True)

            # Get moving average/variance
            self.fw_steps += 1

            if self.bias_correction and self.fw_steps == 1:
                # Use the first batch statistics directly
                moving_mu = mu
                moving_var = var
            else:
                # Exponentially weighted average (aka moving average).
                # No bias correction => implicit "correction" of starting with mu=zero, var=one
                moving_mu = self.momentum * self.params['moving_mu'] + (1.0 - self.momentum) * mu
                moving_var = self.momentum * self.params['moving_var'] + (1.0 - self.momentum) * var

                # Explicit bias correction (Not working! It's too aggressive)
                if self.bias_correction and self.fw_steps <= 1000:  # Limit set to prevent overflow
                    bias_correction = 1.0 / (1 - self.momentum ** self.fw_steps)
                    moving_mu *= bias_correction
                    moving_var *= bias_correction

            # Save moving averages
            self.params['moving_mu'] = moving_mu
            self.params['moving_var'] = moving_var
        else:
            mu = self.params['moving_mu']
            var = self.params['moving_var']

        # self.epsilon is expected to be provided by the Layer base class
        std = np.sqrt(var + self.epsilon)  # renamed from the misleading "inv_var"
        x_norm = (x - mu) / std
        self.output = self.params["gamma"] * x_norm + self.params["beta"]

        # Cache vars
        self.cache['mu'] = mu
        self.cache['var'] = var
        self.cache['std'] = std
        self.cache['x_norm'] = x_norm

    def backward(self):
        m = self.output.shape[0]
        mu, var = self.cache['mu'], self.cache['var']
        std, x_norm = self.cache['std'], self.cache['x_norm']

        dgamma = self.delta * x_norm  # fixed: the gamma gradient uses x_norm, not mu
        dbeta = self.delta  # * 1.0
        dxnorm = self.delta * self.params["gamma"]

        # fixed: divide by the standard deviation rather than multiply by it
        df_xi = (1.0 / m) * (1.0 / std) * (
            (m * dxnorm)
            - np.sum(dxnorm, axis=0, keepdims=True)
            - (x_norm * np.sum(dxnorm * x_norm, axis=0, keepdims=True))
        )

        self.parents[0].delta = df_xi
        self.grads["gamma"] += np.sum(dgamma, axis=0)
        self.grads["beta"] += np.sum(dbeta, axis=0)
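As a standalone numpy sketch (independent of the dnpy Layer machinery above, which is assumed to supply self.training), the test-time scaling used in Dropout.forward can be checked numerically: multiplying by the keep probability (1 - rate) at inference reproduces the average magnitude of the randomly gated training output.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.ones(100_000)
rate = 0.5

# Training path: random binary gate keeps each unit with probability (1 - rate)
gate = (rng.random(x.shape) > rate).astype(float)
train_out = x * gate

# Inference path: no sampling, just scale by the keep probability
test_out = x * (1 - rate)

# The two paths agree in expectation
print(abs(train_out.mean() - test_out.mean()))
```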
Asterisk/chan_dongle: Receiving Concatenated SMS I'm using Asterisk with chan_dongle (and a Huawei UMTS stick) on Debian Wheezy. I can send and receive SMS successfully. The 7-bit de/encoding work is done by a simple PHP script. My problem is that I can't receive concatenated SMS. This is the Base64 text from two different messages: Ym9LZWl1a1NfYUNnSU1PUVVXWXNxR21FXVtBYm9LZWl1a1NfYUNnSU1PUVVXWXNxR21FXVtBYm9LZWl1a1NfYUNnSU1PUVVXWXNxR21FXVtBYm9LZWl1a1NfYUNnSU1PUVVXWXNxR21FXVtBYm9LZWl1a1NfYUNnSU1PUVVXWXNxR21FXVtBYm9LZWl1a1NfYUNnSU1PUVVX WHNxR21FXVtBYm9LZWl1a1NfYUNnSU1PUVVXWXNxR21FXVtBYm9LZWl1a1NfYUNnSU1PUVVXWXNxR21FXVtBYm9LZWl1a1NfYUNnSU1PUVVXWXNxR21FXVtBYm9LZWl1a1NfYUNnSU1PUVVXWXNxR21FXVs= CENdV0tBTH1lQUhLU11LQShLU1ldQ1FbS0FCXUFIS2VBYGBzZ0NNS0dDZUlbAldpU19dXUAIS1NdS0FgS2VneV1ZU0dRS0FgQ3NnQ01LR0NlSUEEX11rZ0EgEx1BUGpACislU0BYQ2tpS2l1QEBgcGBsQGxyaHJAaHJiYkBubGBkdkAmS2VTS11da1tbS2V1cmJiZmxsbmxo XEAOfVlpU09BRFNnQWZiXGBwXGRgYmhcAA== The first one is a simple repeating "qwertz..." dummy text, sent from my Android phone. The second one is a (German) reply from a service provider with a coupon code. The first one looks easy to decode: 7/8 bit magic and we're done. But the second one is really strange. Any ideas how to decode it? The only way to deal with multipart SMS is to work with the raw PDUs and do the concatenation in your application. Looks like a bug in chan_dongle, or a wrong format from the sender, because I can receive other multipart SMS without any problems.
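For the simple single-part case, the "7/8 bit magic" can be sketched in Python (a hedged sketch, not the OP's actual PHP script): GSM 03.38 packs 7-bit septets into octets, and unpacking reverses the bit-shifting. One plausible explanation for the "strange" second message: concatenated SMS carry a User Data Header, and with 7-bit coding the text after the UDH is padded with fill bits to a septet boundary, so each part's payload is bit-shifted relative to a single-part message and must be realigned before unpacking.

```python
def gsm7_unpack(data: bytes) -> bytes:
    """Unpack GSM 03.38 7-bit septets from 8-bit packed octets."""
    septets = []
    carry = 0
    carry_bits = 0
    for b in data:
        septets.append(((b << carry_bits) | carry) & 0x7F)
        carry = b >> (7 - carry_bits)
        carry_bits += 1
        if carry_bits == 7:
            # Seven leftover bits accumulate into one extra full septet
            septets.append(carry)
            carry = 0
            carry_bits = 0
    return bytes(septets)

# Classic reference vector: "hellohello" packs to E8 32 9B FD 46 97 D9 EC 37
print(gsm7_unpack(bytes.fromhex("E8329BFD4697D9EC37")).decode("ascii"))  # hellohello
```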
A module containing a Composed monad that can be thought of as an equivalent to functions elsewhere in chp-plus (especially the Control.Concurrent.CHP.Connect module) that support partial application of processes when wiring them up. Binding in this monad can be thought of as "and then wire that like this". You compose your processes together with a series of monadic actions, feeding processes into each function that wires up the next parameter, then taking the results of that action and further wiring it up another way. At the end of the monadic block you should return the full list of wired-up processes, to be run in parallel using run. Here is a simple example. You have a list of processes that take an incoming and outgoing channel end and a barrier, and you want to wire them into a cycle and enroll them all on the barrier:

processes :: [Chanin a -> Chanout a -> EnrolledBarrier -> CHP ()]

runProcesses = do
  b <- newBarrier
  run $ cycleR processes >>= enrollAllR b

The order of the actions in this monad tends not to matter (it is a commutative monad for the most part) so you could equally have written:

processes :: [EnrolledBarrier -> Chanin a -> Chanout a -> CHP ()]

runProcesses = do
  b <- newBarrier
  run $ enrollAllR b processes >>= cycleR

Remember with this monad to return all the processes to be run in parallel; if they are not returned, they will not be run and you will likely get deadlock. A little more background on the monad is available in this blog post: http://chplib.wordpress.com/2010/01/19/the-process-composition-monad/

- data Composed a
- runWith :: Composed a -> forall b. (a -> CHP b) -> CHP b
- run :: Composed [CHP a] -> CHP [a]
- run_ :: Composed [CHP a] -> CHP ()
- enrollR :: Enrollable b p => b p -> (Enrolled b p -> a) -> Composed a
- enrollAllR :: Enrollable b p => b p -> [Enrolled b p -> a] -> Composed [a]
- connectR :: Connectable l r => ((l, r) -> a) -> Composed a
- pipelineR :: Connectable l r => [r -> l -> a] -> Composed (r -> l -> [a])
- pipelineCompleteR :: Connectable l r => (l -> a) -> [r -> l -> a] -> (r -> a) -> Composed [a]
- cycleR :: Connectable l r => [r -> l -> a] -> Composed [a]
- wrappedGridFourR :: (Connectable below above, Connectable right left) => [[FourWay above below left right -> a]] -> Composed [[a]]

A monad for composing together CHP processes in cross-cutting ways; e.g. wiring together a list of processes into a pipeline, but also enrolling them all on a barrier. Given a list of CHP processes composed using the Composed monad, runs them as a parallel bunch of CHP processes (with runParallel) and returns the results. Wires a list of processes into a pipeline that takes the two channels for the ends of the pipeline and returns the list of wired-up processes. Connects together a list of processes into a cycle.