Two solar regulators to one battery bank?

I wish to connect four 250W panels to a battery bank. Each panel is rated at 15A, yet my MPPT* regulator warns against exceeding 30A. Could I add a second regulator, running two panels to each, but all connected to the same bank? Or is there a better alternative?

At present, I have four Neuton Power NPN200 12V 200Ah batteries, three 200W 12V panels, this 12V/24V 30A MPPT charge controller and this 600W Giandel pure sine wave inverter. These are the new panels, which I have four of and would like to use in place of my old three.

* Maximum Power Point Tracking

This may be an appropriate question for posting on the Electrical Engineering Stack Exchange site: http://electronics.stackexchange.com/

Ah cheers, thanks for the heads up. This is my first post on here, scoping the community knowledge/support. I think this is appropriate for sustainable living.

Also, there's a renewable energy SE in Area 51; please follow that. :) Also, you probably don't need another controller, check out my answer below.

I have an 1100Ah 24V battery bank with 2 solar controllers connected, each fed from a separate bank of solar panels. Works fine.

Interesting. Are the two panel arrays identical? And are the charge controllers?

No, different panels and regulators. When I started with solar 16 years ago, I purchased a Plasmatronics PL60, which works with 12/24/48V and has many features, but will only control 60 amps. That worked well with my initial 12V setup of 4 125W 12V panels. As I could afford them, I added more of the same panels until I had 16 125W 12V panels wired as pairs to give 24V. I have since added 6 250W 24V panels and a different-brand controller as a separate system, but still charging the common 1100Ah 24V battery bank. In the near future I want to add another 4 250W 24V panels, for a maximum of 100A on this newest controller.

So in that case I'll need to get a new 24V inverter. Can you recommend one?
It almost seems like it's worth skipping a 24V system and going straight to 48V, to save money in the long run, because in the end that's what we're all aiming for, right? What are your thoughts?

I have stayed with 24V as it's relatively easy to get LED lights, fridges/freezers/washers and other truck-based devices to operate at that voltage. I also have a 24V DC alternator for emergency battery charging. I would try buying one manufactured in your country, as you're more likely to be able to get parts/repairs/warranty support, whereas imported ones can be difficult to get repaired (as I found out). I started with a 12/24V inverter for the house, which I have rebuilt with all new capacitors: http://www.selectronic.com.au/inverter/se22.html. I also have a Selectronic WM1700 for the workshop.

It would be useful to edit this answer to incorporate the additional details in the discussion underneath.

You probably don't need another controller. If you connect your panels in series, i.e. one string, you won't exceed 15A. You could connect them as 2 strings of 2, and that would be 30A in total. Only if you connect them all in parallel will you exceed 30A.

You should check the maximum input voltage of your MPPT. If the combined Voc of your panels does not exceed that maximum, you should be able to run them all in series at 15A. (So take the Voc of one panel and multiply by 4.)
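The series/parallel arithmetic in this answer is easy to sanity-check in a few lines of Python. This is a rough sketch: the 15A figure comes from the question, but the 22V open-circuit voltage is an assumed placeholder; substitute your panel's datasheet Voc.

```python
# Rough series/parallel arithmetic for four identical panels.
PANEL_IMP_A = 15.0   # rated operating current per panel (from the question)
PANEL_VOC_V = 22.0   # ASSUMED open-circuit voltage; use your panel's datasheet value

def array_limits(series: int, parallel: int) -> tuple[float, float]:
    """Return (total open-circuit voltage, total current) for a series x parallel array."""
    # Series strings add voltage; parallel strings add current.
    return PANEL_VOC_V * series, PANEL_IMP_A * parallel

# One string of 4 in series: voltage adds, current stays at one panel's worth.
print(array_limits(series=4, parallel=1))   # (88.0, 15.0)
# Two strings of 2: half the voltage, twice the current.
print(array_limits(series=2, parallel=2))   # (44.0, 30.0)
# All four in parallel: panel voltage, four times the current.
print(array_limits(series=1, parallel=4))   # (22.0, 60.0)
```

The single-string case is the one to check against the controller's maximum input voltage, as the answer says.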
STACK_EXCHANGE
Helm release with strategicMergePatches no longer respecting release namespace

I have been starting to use helmfile's new strategicMergePatches functionality with helmfile v0.118.5, and I noticed that once I had a strategicMergePatches stanza, the namespace entry on the release gets ignored. Or at least, the resources in the helm chart no longer get installed into the namespace I specify in the release, but rather into the default namespace.

Using `helmfile template` we can see the output explicitly has namespace set to `default`, even though the release says it should go into the `vault` namespace:

```yaml
# output from helmfile template
# Source: vault/templates/helmx.all.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault
  name: vault
  namespace: default
```

```yaml
# helmfile release
- name: vault
  chart: hashicorp/vault
  namespace: vault
  version: 0.6.0
  values:
    - values.yaml.gotmpl
  strategicMergePatches:
    - apiVersion: apps/v1
      kind: StatefulSet
      metadata:
        name: vault
      spec:
        template:
          spec:
            containers:
              - name: vault
                readinessProbe:
                  periodSeconds: 5
                  timeoutSeconds: 3
```

@ggillies Thanks for reporting! This should be fixed via #1300

@ggillies The fix is available in v0.118.7. Would you mind testing it out?

@mumoshu I notice there are no released binaries for 0.118.7. Is that an error, or are we supposed to compile this version ourselves?

@mtb-xt Thx! The release pipeline seems to have failed in the middle of the process. It should be fixed and available now!

@mumoshu Thanks so much for fixing this so quickly! I have confirmed that release 0.118.7 fixes the issue. The only thing I discovered, however, is that I get an error saying it can't find the resource to patch, unless I also specify the namespace in the patch itself, e.g.
```yaml
# helmfile release
- name: vault
  chart: hashicorp/vault
  namespace: vault
  version: 0.6.0
  values:
    - values.yaml.gotmpl
  strategicMergePatches:
    - apiVersion: apps/v1
      kind: StatefulSet
      metadata:
        name: vault
        namespace: vault
      spec:
        template:
          spec:
            containers:
              - name: vault
                readinessProbe:
                  periodSeconds: 5
                  timeoutSeconds: 3
```

Compare this to the initial example. I guess it makes sense, as the original object has no namespace set so it can't match; it is however confusing (especially the error).

@ggillies Thanks for confirming! Re: the other error you're seeing, as it depends entirely on kustomize, I wonder what we can do to improve that. But I can suggest submitting a dedicated issue to track it 😃

Closing this assuming the original issue has been resolved. Please feel free to reopen if necessary
GITHUB_ARCHIVE
/*
 * Authors: Nagavarun Kanaparthy
 * Resources:
 *   Arduino Mega Reference
 *   https://www.arduino.cc/en/Main/arduinoBoardMega
 *   Sabertooth Motor Controller Library
 *   https://www.dimensionengineering.com/software/SabertoothArduinoLibrary/html/class_sabertooth.html
 *   Mecanum Drive Algorithm Reference
 *   http://thinktank.wpi.edu/resources/346/ControllingMecanumDrive.pdf
 */

// Constants
const float pi = 3.14;

// Variables
// USB communication
char incomingByte;
bool goState = false;
byte usbByteCount = 0;
String usbCommand = "";

void handleUsbCommand(String value) {
  switch (value.charAt(0)) {
    // Stage commands
    case 'P':
      // Do stage one
      Serial.println("Stage One Done");
      Serial.println("Ready");
      break;
    case 'W':
      // Do stage two
      Serial.println("Stage Two Done");
      Serial.println("Ready");
      break;
    case 'F':
      // Do stage four
      Serial.println("Stage Four Done");
      Serial.println("Ready");
      break;
    // Support commands
    case 'R':
      Serial.println("Ready");
      break;
    /* Motor command
     * M:angle,mag,speed
     * M:XX.XX,X.XXX,X.XXX
     */
    case 'M':
      Serial.println(value);
      break;
    /* Turn command
     * T:angle
     * T:XXXXXX
     */
    case 'T':
      Serial.println("Turned");
      break;
    default:
      Serial.println("I:" + value);
      break;
  }
}

void checkReady(String command) {
  if (command.equals("Ready")) {
    goState = true;
    // Change LED to not blinking
    Serial.println("Ready");
  }
}

// Microcontroller/Arduino main
void setup() {
  Serial.begin(9600);
}

void loop() {
  while (Serial.available() && usbByteCount < 20) {
    // Read incoming byte
    incomingByte = Serial.read();
    // Messages are delimited by the newline character
    if (incomingByte == '\n') {
      if (goState) {
        handleUsbCommand(usbCommand);
      } else {
        checkReady(usbCommand);
      }
      // Reset variables
      usbByteCount = 0;
      usbCommand = "";
    } else {
      // Add char to string
      usbCommand += incomingByte;
      usbByteCount++;
    }
  }
}
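The newline-delimited command protocol in the Arduino sketch above can be mirrored on the host side. This is a simplified Python sketch of the same framing and dispatch logic, with no serial port involved (the goState/Ready handshake is omitted, and the 20-byte cap is approximated as a buffer limit); it is purely illustrative, not code from the original project.

```python
def handle_usb_command(value: str) -> str:
    """Mirror of handleUsbCommand(): dispatch on the first character of the command."""
    replies = {
        'P': "Stage One Done",
        'W': "Stage Two Done",
        'F': "Stage Four Done",
        'R': "Ready",
        'T': "Turned",
    }
    if value and value[0] == 'M':
        return value                      # motor commands are echoed back verbatim
    if value and value[0] in replies:
        return replies[value[0]]
    return "I:" + value                   # unknown commands are echoed with an I: prefix

def feed(stream: str, max_len: int = 20) -> list[str]:
    """Mirror of loop(): accumulate characters until '\n', then dispatch the buffer."""
    out, buf = [], ""
    for ch in stream:
        if ch == '\n':
            out.append(handle_usb_command(buf))
            buf = ""
        elif len(buf) < max_len:
            buf += ch
    return out

print(feed("R\nM:45.00,0.500,0.750\nX\n"))
# ['Ready', 'M:45.00,0.500,0.750', 'I:X']
```

Having a host-side model like this makes it easy to unit-test the protocol before flashing the board.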
STACK_EDU
IIRC, my router actually offered to change the default admin password when I first logged in. I hear a lot about Buffalo routers being junk, but this one came pre-installed with DD-WRT and has worked like a champ since I first got it.

Mine allowed login with the default on first login, then demanded I set a username and password for access, not allowing me to move on to what I logged in for until setting that up. And once set up, the default no longer works.

Yep. Most of 'em do. But they don't let ya change the username. There's a back door built in to most hardware - and a lot of software! - so that the vendor can tell you how to recover if you have a memory lapse - read, screw things up - and maintain their pristine reputation.

Mine is easy to bypass in that case, but only if you have physical access to the router. A paper clip in the back to reset it to factory defaults will do the trick, but there will be no normal internet access beyond the ISP's new-user start page until you log in with your account, download and install their custom stuff, and set everything up again. So a name/pass is still required.

If you cannot change the admin username, any hacker is halfway to cracking the system involved. Brute force and a decent dictionary can still resolve ninety percent of passwords when "admin" is still a viable username.

In the case of WordPress, it's not enough to create the admin account under a name other than "admin" in the first place (WordPress won't allow you to change a username later). You need to create at least a second admin account and delete the first one, regardless of the username chosen, or you risk getting locked out of your blog if it is attacked, and having to reset your password. User ID 1 is the first created, first admin, and most targeted account, for things like SQL injections with the intent to change the password. If successful, and the account name is "admin", then it's an easy in, without a brute-force dictionary attack.
They know the name (admin) and the password (they changed it themselves). If the account name is other than admin, they don't have as easy a time, but you still end up locked out. If the account ID is something other than 1, it makes it a little harder, and you'll be less likely to end up locked out. Now they have to start guessing the ID, and maybe the username too, since a default "admin" account no longer exists. Yes, there are ways to easily figure that stuff out too (in most cases), but it takes more time and is a bit more trouble, and unless the hacker is targeting your blog specifically, it's not as likely to happen when there are so many other, easier targets to hit with an automated attack.

There is a lot one can do to protect a WordPress blog, but people need to take the time to read and do the stuff required. A rough estimate of the time required to truly beef up the security on a WP blog is about 5 hours, if you have never done it before and do everything in this checklist. Use the online version if you don't want to go through the registration to download the PDF; it's always the most up to date. Registration gets you an email notice of any changes to the checklist, though, so once you're done, it's a good idea anyway. The first step in that checklist/tutorial is how to set up automated backups, and how to have them automatically stored offsite is also covered at some point.
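The "halfway to cracking" point above is just search-space arithmetic. A rough Python sketch, with entirely made-up dictionary sizes (the real numbers depend on the attacker's wordlists), of how a known username shrinks the attacker's worst-case search:

```python
# Hypothetical sizes, purely illustrative.
USERNAME_GUESSES = 10_000      # candidate usernames an attacker might try
PASSWORD_GUESSES = 1_000_000   # candidate passwords in a cracking dictionary

def attack_space(username_known: bool) -> int:
    """Worst-case number of (username, password) combinations to try."""
    if username_known:             # e.g. the account is literally "admin"
        return PASSWORD_GUESSES
    return USERNAME_GUESSES * PASSWORD_GUESSES

print(attack_space(username_known=True))    # 1000000
print(attack_space(username_known=False))   # 10000000000
```

With these toy numbers, an unknown username multiplies the search by four orders of magnitude, which is why automated attacks prefer blogs where "admin" still works.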
OPCFW_CODE
from unittest import TestCase

from .graph import Graph
from .graph import Node as N
from .graph import NodeLabel as L
from .graph import Relationship as Rel
from .graph import RelationshipTo as RelTo
from .graph import RelationshipType as T
# The original file uses I (an identifier) and Sp (a shortest-path matcher) below
# without importing them; the names guessed here are assumptions about the .graph API.
from .graph import Identifier as I
from .graph import ShortestPath as Sp

graph = Graph()


class Person(N):
    def __init__(self, *labels, **properties):
        super().__init__(*labels, **properties)
        self.labels.add(L("Person"))


def get_result(query):
    records = query.run()
    records_list = list(records)
    first_record = records_list[0]
    first_record_values = first_record.values()
    return first_record_values
    # return list(query.run())[0].values()


def example1():
    # CREATE (you:Person {name:"You"})
    # RETURN you
    query = graph.create(N("you", L("Person"), name="You")).return_("you")
    # The following could also work, implicitly:
    # query = graph.create(N('you', L('Person'), name='You'))
    return get_result(query)


def example2():
    # MATCH (you:Person {name:"You"})
    # CREATE (you)-[like:LIKE]->(neo:Database {name:"Neo4j"})
    # RETURN you,like,neo
    person_label = L("Person")
    you = N("you", person_label, name="You")
    like = RelTo("like", T("like"))
    neo = N("neo", L("Database"), name="Neo4j")
    query = graph.match(you).create(you, like, neo).return_(you, like, neo)
    return get_result(query)


def example3_naive_python():
    # MATCH (you:Person {name:"You"})
    # FOREACH (name in ["Johan","Rajesh","Anna","Julia","Andrew"] |
    #     CREATE (you)-[:FRIEND]->(:Person {name:name}))
    you = Person("you", name="You")
    names = ["Johan", "Rajesh", "Anna", "Julia", "Andrew"]
    friend = RelTo(T("friend"))
    queries = []
    # naive solution with a Python for-loop
    for name in names:
        queries.append(graph.match(you).create(you, friend, Person(name=name)).return_(you))
    return [get_result(q) for q in queries]


def example3_naive_concat():
    # MATCH (you:Person {name:"You"})
    # FOREACH (name in ["Johan","Rajesh","Anna","Julia","Andrew"] |
    #     CREATE (you)-[:FRIEND]->(:Person {name:name}))
    you = Person("you", name="You")
    names = ["Johan", "Rajesh", "Anna", "Julia", "Andrew"]
    friend = RelTo(T("friend"))
    # naive solution with query concatenation
    query = graph
    for i, name in enumerate(names):
        cid = "id%s" % i
        query = query.create(you, friend, Person(cid, name=name)).return_(cid)
    return get_result(query)


def example3_foreach():
    # MATCH (you:Person {name:"You"})
    # FOREACH (name in ["Johan","Rajesh","Anna","Julia","Andrew"] |
    #     CREATE (you)-[:FRIEND]->(:Person {name:name}))
    you = Person("you", name="You")
    names = ["Johan", "Rajesh", "Anna", "Julia", "Andrew"]
    friend = RelTo(T("friend"))
    # solution with the built-in Neo4j foreach method
    name_variable = I("name")
    query = graph.match(you).foreach(name_variable, names,
                                     graph.create(you, friend, Person(name=name_variable)))
    return get_result(query)


def example4():
    # MATCH (you {name:"You"})-[:FRIEND]->(yourFriends)
    # RETURN you, yourFriends
    query = graph.match(N("you", name="You"), RelTo(T("friend")), N("yourFriends")).return_("you", "yourFriends")
    return get_result(query)


def example5():
    # MATCH (neo:Database {name:"Neo4j"})
    # MATCH (anna:Person {name:"Anna"})
    # CREATE (anna)-[:FRIEND]->(:Person:Expert {name:"Amanda"})-[:WORKED_WITH]->(neo)
    person = L("Person")
    neo = N("neo", L("Database"), name="Neo4j")
    anna = N("anna", person, name="Anna")
    query = (
        graph.match(neo)
        .match(anna)
        .create(anna, RelTo(T("friend")), N(person, name="Amanda"), RelTo(T("worked_with")), neo)
    )
    return get_result(query)


def example6_flat():
    # MATCH (you {name:"You"})
    # MATCH (expert)-[:WORKED_WITH]->(db:Database {name:"Neo4j"})
    # MATCH path = shortestPath( (you)-[:FRIEND*..5]-(expert) )
    # RETURN db,expert,path
    query = (
        graph.match(N("you", name="You"))
        .match(N("expert"), RelTo(T("worked_with")), N("db", L("Database"), name="Neo4j"))
        .match(Sp("path", N("you"), Rel(T("friend")).range(None, 5), N("expert")))
        .return_("db", "expert", "path")
    )
    return get_result(query)


def example6_composed():
    # MATCH (you {name:"You"})
    # MATCH (expert)-[:WORKED_WITH]->(db:Database {name:"Neo4j"})
    # MATCH path = shortestPath( (you)-[:FRIEND*..5]-(expert) )
    # RETURN db,expert,path
    you = N("you")
    expert = N("expert")
    db = N("db", L("Database"), name="Neo4j")
    worked_with = RelTo(T("worked_with"))
    friend = Rel(T("friend"))
    path = Sp("path", you, friend.range(None, 5), expert)
    query = graph.match(you.set(name="You")).match(expert, worked_with, db).match(path).return_(db, expert, path)
    return get_result(query)


def example7():
    # OPTIONAL MATCH (user:User)-[FRIENDS_WITH]-(friend:User)
    # WHERE user.Id = 1234
    # RETURN user, count(friend) AS NumberOfFriends
    def Count(v):
        return v

    query = (
        graph.optional_match(N("user", L("User")), Rel(T("friends_with")), N("friend", L("User")))
        .where(user__id=1234)
        .return_("user", number_of_friends=Count("friend"))
    )
    user_label = L("User")
    user = N("user", user_label)
    friend = N("friend", user_label)
    rel = Rel(T("friends_with"))
    n_of_f = Count(friend)
    query = graph.optional_match(user, rel, friend).where(user__id=1234).return_(user, n_of_f)
    return get_result(query)


class MainTestCase(TestCase):
    def setUp(self):
        pass

    def tearDown(self):
        pass

    def test_example1(self):
        result = example1()
        print(result)

    def test_example2(self):
        result = example2()
        print(result)

    def test_example3(self):
        result1 = example3_naive_python()
        print(result1)
        result2 = example3_naive_concat()
        print(result2)
        # result3 = example3_foreach()
        # print(result3)

    def test_example4(self):
        result = example4()
        print(result)

    def test_example5(self):
        result = example5()
        print(result)

    def test_example6(self):
        result1 = example6_composed()
        print(result1)
        result2 = example6_flat()
        print(result2)

    def test_example7(self):
        result = example7()
        print(result)
STACK_EDU
The libvirt library is used to interface with different virtualization technologies: it can be used to manage KVM, Xen, VMware ESXi, QEMU and others. In an OpenStack deployment, a compute node shall have Nova Compute, libvirt, the L2 agent, and Open vSwitch. (You should only install OpenStack directly on Ubuntu if you have a dedicated testing machine.) Nova's libvirt driver supports kvm and qemu, with kvm being the default. For optimal performance, kvm is preferable, since many aspects of virtualisation can be offloaded to hardware; this requires hardware virtualisation support, e.g. the Virtualisation Technology (VT) BIOS configuration on Intel systems. If it is not possible to enable hardware virtualisation, qemu may be used to provide less performant, software-emulated virtualisation; by using dynamic translation, it still achieves good performance, and it can run OSes and programs made for one machine (e.g. an ARM board) on a different machine. OpenStack itself is a cloud operating system that controls large pools of compute, storage, and networking resources throughout a datacenter, all managed through a dashboard that gives administrators control while empowering their users to provision resources through a web interface; it is mostly deployed as infrastructure-as-a-service (IaaS) in both public and private clouds, where virtual servers and other resources are made available to users. The shipped versions are determined through a careful process where the team weighs new upstream release features, schedules, and bug fixes.

In libvirt, the CPU is specified by providing a base CPU model name (which is a shorthand for a set of feature flags), a set of additional feature flags, and the topology (sockets/cores/threads). The libvirt driver queries the guest capabilities of the host and stores the guest arches in the permitted_instances_types list in the cpu_info dict of the host. The driver has also been extended to support user-configurable performance monitoring unit (vPMU) virtualization: a pair of boolean flavor extra spec and image metadata properties, hw:pmu and hw_pmu, have been added. Nova's libvirt settings live in an OptGroup("libvirt", title="Libvirt Options") whose help text notes that it "allows the cloud administrator to configure the libvirt hypervisor driver to be used within an OpenStack deployment". Libvirt is allowed to auto-assign a TAP device name, and the Linux bridge name will differ between deployments.

Libvirt TLS can be enabled in Kolla Ansible by setting the appropriate option in the global configuration. Without TLS there is no authentication on the connections, and you cannot be sure VM data passed between hypervisors is protected. To enable it, you will need to either use an existing internal CA or create one (ideally an offline CA), and you will have to supply Kolla Ansible the following pieces of information: the CA's public certificate that all of the client and server certificates are signed by; the server certificate and private key for each hypervisor (the key is no different from the private key of a standard TLS certificate/key bundle, so protect it in a similar manner); and the client certificate that nova-compute/libvirt will present when it is connecting to the TLS port. It is possible to make use of a wildcard server certificate shared across every hypervisor, and a single client certificate that is shared by all servers. In this case you would store everything under /etc/kolla/config/nova/nova-libvirt/ along with the CA certificate. If TLS is later disabled, you will also be responsible for restarting the nova-compute and nova-libvirt services.

Recent libvirt fixes include CVE-2020-25637, a double free in qemuAgentGetInterfaces() (bsc#1177155), and a libxl lock manager lock ordering fix (bsc#1171701).

"An Introduction to OpenStack and its use of KVM", Daniel P. Berrangé, KVM Forum 2013, Edinburgh ("contributor to multiple virt projects; libvirt developer/architect for 8 years"): the talk steps through what happens when you create a new instance, including the provisioning of the network, and discusses OpenStack networking in detail, including topics such as port binding, vif plugging, and the ml2 plugin.

VirtualBox is a powerful x86 and AMD64/Intel64 virtualization product for enterprise as well as home use. The Docker driver is a hypervisor driver for OpenStack Nova Compute. "Libvirt - The Unsung Hero of Cloud Computing": initially my intention was to write an article rounding up open source Cloud Management Platforms (CMPs), but while doing research I found one piece of software so fundamental that it holds the key to the very existence of Cloud Computing services and platforms as we know them today (that includes Amazon AWS, OpenStack and CloudStack).

Could someone help me understand how the node name is passed to libvirt from OpenStack, or how I can resolve this issue? One last question: what is the data in the "nfs_shares_config" file, /var/lib/cinder/nfsshare, please?
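The Kolla Ansible TLS switch mentioned above is a one-line toggle. A sketch of the relevant globals.yml entry, assuming a reasonably recent Kolla Ansible release (the exact option name and certificate filenames may vary by version, so check the Kolla documentation for your release):

```yaml
# /etc/kolla/globals.yml (assumed path; adjust for your deployment)
libvirt_tls: "yes"
# Certificates are then expected under /etc/kolla/config/nova/nova-libvirt/
# (filenames below are typical examples, not guaranteed for every release):
#   cacert.pem, servercert.pem, serverkey.pem, clientcert.pem, clientkey.pem
```

After changing this you would redeploy or reconfigure the nova-libvirt and nova-compute containers so they pick up the certificates.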
libvirt and OpenStack are primarily classified as "Virtual Machine Management" and "Open Source Cloud" tools respectively. OpenStack is one of the top three most active open source projects and provides an operating platform for orchestrating clouds on massively scalable hardware. Nova uses libvirt, backed by QEMU and, when available, KVM. KVM requires the necessary hardware virtualization extensions, but on its own it doesn't do anything about networking and I/O peripheral control; that is where QEMU and libvirt come in. On Xen-based platforms such as Xen Cloud Platform (XCP), plug-ins have to be copied to dom0's filesystem, to the appropriate directory where XAPI can find them, and it is important to ensure that the version of the plug-ins is in line with the OpenStack release in use.

libvirt has the ability to configure a watchdog device for KVM guests, allowing some action to be triggered automatically when the guest OS stops responding. The 'shutdown' action is not recommended, since if the watchdog has triggered, it is exceedingly unlikely that the guest will actually be able to do a graceful shutdown.
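The watchdog described above is declared in the guest's domain XML. A minimal sketch, using the i6300esb emulated watchdog that is the common choice for KVM guests:

```xml
<!-- inside the <devices> section of the libvirt domain XML -->
<watchdog model='i6300esb' action='reset'/>
```

libvirt's supported watchdog actions include reset, shutdown, poweroff, pause, none, dump and inject-nmi; as noted above, 'shutdown' relies on an already-wedged guest cooperating, so 'reset' is the safer default.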
OPCFW_CODE
Office 2010 MOS study tips & tricks
September 1, 2010 at 10:21 pm | Posted in Microsoft, Study hints | 68 Comments
Tags: MCAS, MOS

Before attempting a Microsoft Office exam, you should be able to complete each task of the exam objectives quickly. The fact that you know how to do a task in your daily job might not cut it. You need to complete the task in the tight time constraints allotted for the test. Once you answer a question, you cannot go back.

First, a quick review of the objectives: as we saw in the Office 2007 exams, the exam clock is 50 minutes. There are around 30 sub-objectives listed for each exam. Assuming one question per sub-objective, that's 30 questions. When you remember that most questions ask you to perform multiple tasks, that's more like 55 questions in 50 minutes - less than a minute per task. If you spend too much time on one question, you are losing time on the other questions. Time flies when you are under the gun. Most people I have talked to said that they could do all the tasks and pass the test…given enough time. However, they probably could not do it with the time constraints of the test. Practice each task until you know it like the "back of your hand."

Here are some MOS 2010 study tips from Ann, Josh, and George. They include the stuff we did to prepare for the exam that worked for us, and the things we wish we'd tried.

First, free is free. Download the e-book First Look: Microsoft Office 2010 from http://www.microsoft.com/learning/en/us/training/office.aspx (registration required). It's not a very in-depth feature guide, but this book is useful on multiple levels; just looking at the features that Microsoft chooses to highlight gives you a feel for what might be asked on the exam. This is particularly useful if there are features you just don't use daily and wouldn't otherwise think to practice. For more free instruction, don't forget blogs and communities.
You can find targeted, in-depth articles on certain features in many places, including: - The Official Microsoft Excel 2010 blog - The Official Microsoft Word 2010 blog - Network World’s Author Expert blog for Excel 2010 - Learning Snacks What about official study guides? Well, at last check, the Microsoft Press MOS 2010 Study Guides are due out late fall 2010. Keep checking the Microsoft Office Training Portal for announcements. Don’t have a copy of Office 2010 to practice on yet? You can download a trial of Office 2010 here: http://office.microsoft.com/en-us/products/office-products-FX101825692.aspx?assetid=FX010048741 (requirements vary by system). The key aspect to both Microsoft Word and Excel 2010 exams is the ribbon. You have to know where to find common functionalities, or else you will not complete the exam in the allotted time. Josh downloaded and used the free Ribbon Hero plug-in to practice for the exam, and I have to admit it was low on the goofy factor (this is not the talking paperclip of days gone by) and high on the challenge factor. Create a complicated Smart Art diagram in three clicks of the ribbon bar? Sure! I just…well, I’m used to using the menus, aren’t I? Huh. How do I do that again? (Fortunately, it tells you how if you get stuck.) Remember: many roads lead to Rome To succeed on the Microsoft Office 2010 exams, you should be familiar with the different ways to achieve a goal. For example, included in the description of the Create Tables sub-objective, under the Formatting Content objective of Microsoft Word, you should know how to use the Insert Table Dialog Box, draw a table, insert a Quick Table, Convert text to tables, and use a table to control page layout. I habitually create tables in Word from the Insert Table menu, so I took the time to practice those other routes. 
Because the way we do things in our Office applications every day may not be the most efficient way when the exam clock is ticking down, we urge you to practice familiar tasks from multiple approaches (use a wizard, use a ribbon button, create a shortcut). This way, no matter how you are asked to accomplish a task, you’ll be familiar with the specific terminology used. If you combine these techniques with a solid knowledge of how to use Word and Excel, you will be well prepared to pass your MOS 2010 certification exam. Good luck!
OPCFW_CODE
Recent advances in functional neuroimaging have enabled researchers to predict perceptual experiences with a high degree of accuracy. For example, it is possible to determine whether a subject is looking at a face or some other category of visual stimulus, such as a house. This is possible because we know that specific regions of the brain respond selectively to one type of stimulus but not another. These studies have however been limited to small numbers of visual stimuli in specified categories, because they are based on prior knowledge of the neural activity associated with the conscious perception of each stimulus. For example, we know that the fusiform face area responds selectively to faces, and so we can predict that a subject is looking at a face if that area is active, or some other visual stimulus if it is not. Researchers from the ATR Computational Neuroscience Laboratories in Kyoto, Japan have now made a significant advance in the use of fMRI to decode subjective experiences. They report a new approach which uses decoded activity from the visual cortex to accurately reconstruct viewed images which have not been previously experienced. The findings are published in the journal Neuron. Yoichi Miyawaki and his colleagues exploited the functional properties of the visual system for their method. Specifically, they utilized a feature called retinotopy, whereby the spatial relationships between components of an image are retained as visual information passes through the visual system. Adjacent parts of an image are encoded by neighbouring neurons in the retina, and the topography remains in place when the information reaches the primary and secondary visual cortical areas (areas V1 and V2, respectively). Here, the so-called “simple” cells of the visual cortex encode the simplest components of the image, such as contrast, bars and edges. 
Thereafter, the visual information is processed in a hierarchical manner through higher-order visual cortical areas (V3, V4 and so on). Thus the “raw” data relating to the simple image components are combined; more features are added at each successive processing step, and the same information is encoded at increasingly larger scales. The initially crude representations of an image become more refined at each point in the hierarchy, until eventually an accurate reconstruction of the visual scene emerges into consciousness.

The researchers used functional magnetic resonance imaging (fMRI) to analyze the activity of the neurons involved in the earliest stages of visual processing, whilst their participants viewed a series of around 400 simple visual images, including geometric shapes and letters, during a single “training” condition. They then presented to the participants a series of completely new images, and combined the decoded fMRI signals from neurons in V1 and V2 with those from V3 and V4, all of which contain neurons that encode image contrast at multiple scales. By analyzing this activity using a specially developed algorithm, they were able to accurately predict the patterns of contrast in the novel images observed by the participants.

The major advance over similar neuroimaging studies carried out in the past is the ability to accurately reconstruct images that the participants had not previously seen. This was possible because the activity recorded was that of neurons involved in the earliest stages of visual processing. These cells each encode a small number of features, so their activity is limited to a small number of different states and can be decoded with relative ease. Their combined activity can nevertheless encode a huge number of combinations of the same simple features, and so could be analyzed to predict and reconstruct the novel images from a set of millions of candidate images.
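The multiscale combination step can be illustrated with a toy sketch. This is not the authors' actual algorithm, and the function and variable names are invented for illustration: each "local decoder" stands in for a predictor that reports the mean contrast of one square patch at one scale, and overlapping predictions are simply averaged pixel by pixel.

```python
# Toy illustration of combining multiscale local decoders (NOT the
# paper's method). Each decoder output covers one square patch of the
# image; patches at several scales overlap, and the per-pixel
# predictions are averaged into a reconstruction.

def reconstruct(size, decoder_outputs):
    """decoder_outputs: list of (row, col, scale, value) patch predictions."""
    acc = [[0.0] * size for _ in range(size)]
    cnt = [[0] * size for _ in range(size)]
    for r, c, s, v in decoder_outputs:
        for i in range(r, min(r + s, size)):
            for j in range(c, min(c + s, size)):
                acc[i][j] += v
                cnt[i][j] += 1
    # Average overlapping predictions; pixels no decoder covers stay 0.
    return [[acc[i][j] / cnt[i][j] if cnt[i][j] else 0.0
             for j in range(size)] for i in range(size)]

# A 2x2 image: one 2x2-scale decoder predicts contrast 0.5 everywhere,
# and one 1x1-scale decoder says the top-left pixel is bright (1.0);
# their average at the top-left pixel is 0.75.
image = reconstruct(2, [(0, 0, 2, 0.5), (0, 0, 1, 1.0)])
```

The real method learns the decoder weights from the ~400 training images; the point here is only how coarse and fine predictions of the same contrast information can be merged.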
As the film clip above shows, the reconstructed images are accurate but not very detailed: they are built from 10 × 10 grids of patches of the viewed images. However, as the algorithms and devices used for neuroimaging become more sophisticated, and as our knowledge of how the brain processes visual information advances, the ability to reconstruct images in this way will improve, and the reconstructed images will become more detailed.

The authors note that their new approach could be extended to reconstruct images that include other features such as colour, texture and motion. A similar approach could possibly be used to predict motor function from brain activity, and so could eventually lead to significant improvements in the capacity of neural prostheses and brain-computer interfaces. They even suggest that the method may one day be used to reconstruct hallucinations and dreams, which are not elicited by external stimuli but which are also associated with activity in the visual cortex. Even if this were realized, it still would not constitute “mind-reading”: reconstructing visual images from brain activity is one thing, but deciphering the activity underlying a complex stream of consciousness is another.

Miyawaki, Y. et al. (2008). Visual Image Reconstruction from Human Brain Activity using a Combination of Multiscale Local Image Decoders. Neuron 60: 915-929. DOI: 10.1016/j.neuron.2008.11.004
Redefining Traditional Money Management with a Scalable Wealth Management Platform

A new fintech startup wanted to revolutionize traditional wealth management, taking a hands-on approach to helping entrepreneurs scale their businesses into transformative, category-leading companies. Here is how we designed and built a scalable, easily serviceable wealth investment platform for them.

I'm a hands-on machine learning leader with 20+ years of data science and analytics consulting experience. I have worked across multiple industry verticals, including telecom, industrial automation, healthcare, hi-tech, and higher education. As part of Ness Digital Engineering, I'm currently running a machine learning practice providing strategic AI advisory services to multiple industry verticals. I have had the opportunity to work on niche areas like AI-driven precision medicine for evidence-based oncology for a healthcare device manufacturer, AI-enabled IT service management implementation for a global hi-tech major, and AI platform maturity assessments.

I'm hands-on with many tools and frameworks used in machine learning, including but not limited to Python, TensorFlow/Keras, scikit-learn, PyTorch, and Kubernetes. I have experience in deep learning modeling approaches based on various architectures, including convolutional neural networks (V-Net, Inception), LSTMs, RNNs, and multi-modal networks. I have applied these approaches to computer vision and NLP-based problem statements like auto-segmentation and target volume detection for DICOM CT images, conversion of verbose business rules into executable logic, predicting the success of content based on plot, and more. My areas of interest in AI are federated learning, edge computing in industrial IoT, distributed and decentralized ML architectures, and predictive/preventive maintenance problems. I have played multiple roles in my AI consulting career, including data scientist, product manager, and AI strategy consultant.
Aswani Karteek Yadavilli

Senior solutions architect with experience in providing and delivering solutions to a wide variety of data analytics problems. Areas of expertise include data engineering, cloud platforms, data architecture and data mining. Core skills include Informatica, SSIS, Alteryx, Talend, ODI, Azure, Stitch, Fivetran, AWS Glue, AWS Data Pipeline, Snowflake, AWS Redshift, Azure Synapse, AWS Athena, Oracle, Microsoft SQL Server, PostgreSQL, MongoDB, etc.

Covetrus Cirrus - A Cloud-Based Veterinary Practice Management System

Cirrus is a cloud-based SaaS veterinary practice management system (PMS) designed and built to increase productivity, streamline operations and grow revenue for Covetrus, a global leader in animal-health technology serving 100,000 customers worldwide. Designed as a global platform that is easy to customize and scale, it was aimed at modernizing existing features of their product eVet with many new-age features. Built using emerging technologies (Azure Cloud, microservices, event-based communication, PrimeNG, Ready API automation, MicroUI), it has built-in collaborative solutions for tele-consultation.

Nitin has many years of hands-on execution and leadership experience and is an expert in all things Microsoft, with strong expertise in the Azure platform and transformation best practices. Currently, he leads the strategy and execution of Motifworks, an Accion group company focused on cloud transformation.

Tom is a successful executive with demonstrated experience leading successful sales, consulting, product development, account management, and professional services units. He builds strong client relationships, assembles high-performing teams, and delivers results. Tom is a leader who brings a wealth of experience in the technology and innovation space and has worked across the globe.
Over the 25 years of his career, Tom has founded multiple companies and has run tech services organizations at leading firms such as Siemens, Atos, Accenture, and GlobalLogic.

Google Cloud Platform

Google Cloud Platform is a suite of public cloud computing services offered by Google. The platform includes a range of hosted services for compute, storage and application development that run on Google hardware. Google Cloud Platform services can be accessed by software developers, cloud administrators and other enterprise IT professionals over the public internet or through a dedicated network connection.

Shubham Dayanand Amane
● 5 years of experience in data engineering, data analysis, design, development, and production support.
● Designed and implemented multiple data pipeline use cases.
● Designed real-time and batch processing for a mission-critical application.
● Designed on-demand cloud-based solutions to reduce cost.
● Data and code migration across platforms.
● Worked on e-commerce scrape data for data analytics and data engineering; built web applications using Django.
● Expert in Spark, Hive, HDFS, Sqoop, Scrapy, Python.
● Hands-on experience in building batch and streaming pipelines for a scraped-data lake.
● Enthusiastic about learning and researching the latest technologies.
● Experience in deploying Spark applications, Scrapy applications, and web applications.
● Experience in dashboard data visualization and data lake implementation.
Several years ago, I was involved in defining an artificial intelligence (AI) system to improve help desk ticket resolution for a large IT provider. They received hundreds of tickets per hour across a global customer base. The leadership identified a key question for the AI system to answer: Given a new IT problem reported by a user, what is the first resolution they should attempt?

Initially, customers wanted us to generate a recommended set of actions to resolve new problems by mining previous cases and solutions. After several interviews and discussions, we identified a new set of related challenges. Despite having a global network of employees solving similar tickets at the same time, employees were having difficulty labeling new tickets and their successful solutions consistently. This labeling challenge made those tickets either difficult to find or invisible to other help desk employees at the time when they would be most helpful -- when solving a nearly identical ticket occurring in the same time period. We also discovered it was difficult to measure the outcomes of the recommendations the employees made. Sometimes tickets were not closed correctly or lacked the metadata needed to understand whether all the recommended actions were necessary and correct. This experience taught us a vital lesson about what dictates the success of AI projects.

Develop A System Of Measurement

As a leader in AI research and development (before joining Maana as the chief scientist, I led the Knowledge Discovery Lab at the General Electric Global Research Center, focusing on knowledge graphs and machine learning), I have been asked several times to explain AI and the ways it provides value to large industrial companies. Despite many differences between companies, the common goal of this type of AI remains the same: to improve productivity within an organization by allowing employees to make better, more informed decisions.
Conversations about AI often focus on the potential and certainty of an outcome the AI solution can deliver (e.g., decrease customer IT tickets by 10%, or decrease the time it takes to close IT tickets by 20%). However, these overall business goals may not directly align with the type of measurement the AI system will need to be successful: What specific actions did a customer follow to resolve an IT issue, and which of those were successful?

We wanted an AI system to improve the IT ticket resolution process and provide better customer and employee satisfaction. The initial system design was oriented toward recommending steps to resolve the issue, but the success of that system would be predicated in part upon measuring the outcomes of the employee recommendations. However, we struggled to find an immediate path to measuring the outcome of those recommendations directly, rendering that design of the AI system incapable of improving the ticketing process. Being successful in AI applications requires solving this joint problem of finding the most effective outcome -- for the business and customers -- that also has the data and measurement capability needed by an AI approach. Without the ability to measure the recommendation outcomes, an AI system will fail in the long term or become very costly to maintain.

Combine Subject Matter Expertise With Data

To create a better AI solution, you need to combine the domain expertise of IT employees with the data that a system collects. In our situation, we realized:

• Employees were primarily able to store and retrieve their domain knowledge using labels given to previously solved IT issues.

• They struggled to assign labels during the resolution process that were consistent across all their employees.

The solution was an AI system that would recommend a label, intermittently gather feedback from the employee about whether the label was correct, and use the feedback to continually improve the labeling AI system.
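The recommend-then-learn loop described above can be sketched in a few lines. This is an illustrative toy, not the production system from the article: the class name, the keyword-counting heuristic, and the example tickets are all invented to show how a measurable feedback signal (the employee confirming a label) feeds back into the recommender.

```python
from collections import Counter, defaultdict

class LabelRecommender:
    """Toy sketch of a feedback-driven label recommender: suggest a
    ticket label from keyword statistics, then fold the employee's
    confirmed label back into those statistics."""

    def __init__(self):
        # keyword -> counts of labels employees confirmed for it
        self.stats = defaultdict(Counter)

    def recommend(self, ticket_text):
        votes = Counter()
        for word in ticket_text.lower().split():
            votes.update(self.stats[word])
        # No signal yet? Defer to the employee instead of guessing.
        return votes.most_common(1)[0][0] if votes else None

    def feedback(self, ticket_text, correct_label):
        # The employee's confirmation is the measurable outcome the
        # article argues every AI system needs.
        for word in ticket_text.lower().split():
            self.stats[word][correct_label] += 1

rec = LabelRecommender()
rec.feedback("vpn connection drops", "network")
rec.feedback("printer out of toner", "hardware")
label = rec.recommend("vpn drops every hour")  # → "network"
```

A real system would use a trained classifier rather than raw keyword counts, but the measurement loop -- recommend, confirm, update -- is the same.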
This solution was met with positive feedback from the IT employees, as it allowed them to still apply their experience and technical knowledge while assisting them with the tedious task of selecting the best label for every new ticket. It also had the future advantage of preventing new tickets by allowing IT engineers to better see trending issues and head them off, as well as enabling customers to search for solutions themselves.

Tell The AI System The Score

Imagine that instead of optimizing IT tickets, you're developing an application for field engineers at a large energy company. Their goal is to keep critical infrastructure working, in part by prioritizing which pieces of equipment at which customer locations require inspection or repair, based on maintenance schedules or predictive maintenance algorithms. In this scenario, the number of possible objectives and measurements increases significantly to encompass the health of equipment, the efficiency of servicing support contracts, the satisfaction and profitability of the customer, and the overall productivity and satisfaction of the field engineer.

For this scenario, like many other industrial scenarios, success is often not as easily measurable for an AI system. Overall, a company tries to make an objective -- but still often subjective -- decision as to the success of its maintenance actions and the productivity of its employees. Businesses develop competitive business models and processes that benefit directly from the domain knowledge and experience of employees. This experience-based knowledge allows businesses to differentiate and provide ever-improving capability to customers. Thus, when developing an intelligent solution using AI, organizations must begin by considering the following:

• Understand what business question the AI system is answering and how you will confidently measure the outcome.
• Identify how the AI system can complement the expertise of its users, allowing it to gather feedback and improve over time.

Pinning those subjective and experience-based decisions down into concrete data and measurements that an AI algorithm could make use of is often very difficult. Therefore, being able to tell an AI system the “score” based on a measurement capability -- so it knows if it is winning or losing and can learn -- is the first step toward achieving value from AI. Combining that score with feedback from users based on their experience and domain knowledge allows the AI system to improve over time.
Adjust Symbol Size for Color Ramp in Print Layout Legend with Python

I have a print layout with a layer styled with a color ramp. I create it all with PyQGIS. How do I, again from PyQGIS, access these two settings, Width and Height?

I finally did it, thanks to a post on StackOverflow:

    for tree_layer in legend.model().rootGroup().children():  # access the legend's nodes
        if tree_layer.name() == layer_name:  # confirm you have the correct child node
            # tree_layer.layerLegendNodes() = [QgsSimpleLegendNode, QgsColorRampLegendNode]
            # since mine is a raster layer - double-check your own index
            QgsMapLayerLegendUtils.setLegendNodeSymbolSize(tree_layer, 1, QSizeF(6, 25))  # change ONLY this symbol's size
            legend.model().refreshLayerLegend(tree_layer)

Kudos to this post. It made me figure out that I had to go via QgsMapLayerLegendUtils instead of trying to edit the legend node directly.

I had the same trouble and also found a solution with the use of QgsMapLayerLegendUtils. As an addition to @Noxy's answer, there are a few things I'd like to note. In my case there were filtered legend items, i.e. there is a layer called "Lines" which has a symbology style with about 50 unique items. I add this layer to the legend and then filter it with the QgsMapLayerLegendUtils.setLegendNodeOrder() method, which returns far fewer items into the legend. After that I thought it would be right to loop over the remaining symbols and change their sizes.
Like this:

    legend = QgsLayoutItemLegend(my_layout)
    # ... code setting up my legend item ...
    model = legend.model()
    layout_legend_layer = next(filter(lambda i: i.name() == 'Lines', legend.model().rootGroup().findLayers()))
    symbols = model.layerOriginalLegendNodes(layout_legend_layer)
    for i, s in enumerate(symbols):
        QgsMapLayerLegendUtils.setLegendNodeSymbolSize(layout_legend_layer, i, QSizeF(30, 10))
    legend.model().refreshLayerLegend(layout_legend_layer)
    legend.updateLegend()

But eventually it did not change the sizes of all the legend items, because setLegendNodeSymbolSize() uses a list of ALL legend items in the layer, without taking into account whether it was filtered in the layout. Because of that, I decided first to collect symbology data from the original layer settings, and then use those item indexes to change sizes in the layout legend.

    layer = QgsProject.instance().mapLayersByName("Lines")[0]
    tree_view = iface.layerTreeView()
    model = tree_view.layerTreeModel()
    layer_tree = model.rootGroup().findLayer(layer.id())
    legend_items = model.layerLegendNodes(layer_tree)
    # original legend nodes are taken, so now we can change their sizes
    for i, s in enumerate(legend_items):
        QgsMapLayerLegendUtils.setLegendNodeSymbolSize(layout_legend_layer, i, QSizeF(30, 10))
    legend.model().refreshLayerLegend(layout_legend_layer)
    legend.updateLegend()

After that, the size of each legend item from my "Lines" layer was changed in the layout. This is what I needed to get.

There may also be another "dirty" solution, using the customProperties() method of QgsLayerTreeLayer. When some custom change (size, label) is set on a legend item, it goes into the customProperties() list, and you can also see and change the properties there. In my case it would be something like layout_legend_layer.setCustomProperty('legend/symbol-size-0', '30,10'), which should change the size of the item after a legend update.
And the equivalent to the previous code will look like:

    for i, s in enumerate(legend_items):
        layout_legend_layer.setCustomProperty('legend/symbol-size-{}'.format(i), '15,15')
    legend.model().refreshLayerLegend(layout_legend_layer)
    legend.updateLegend()

This is not a good solution, but it may be helpful in some cases.
Run Time Error - Object Not a Collection

In the application, we press function key F10 using the following subroutine, written in VBScript:

    Set Code = Sys.Process(Process_Name).Find(Property_Name, Property_Value, 100)

When execution reaches this line, we get a run-time error: "Object not a collection: Sys.Process(...).Find". If we run the routine the first time, we get the error, but when we run it a second time it does not occur. We had never experienced this error before; TestComplete (version 11) worked properly, and we did not change anything in the code, but suddenly it happened. Please assist us: what could be the root cause of this issue?

Your function should be similar to this:

    Sub FuncKey(Process_Name, Property_Name, Property_Value, Data)
        Dim p, control
        Set p = Sys.Process(Process_Name)
        If p.Exists Then
            Set control = p.Find(Property_Name, Property_Value, 100)
            If control.Exists Then
                control.Keys(Data)
                Log.Message Data
            Else
                Log.Message "The control was not found."
            End If
        Else
            Log.Error "The process was not found."
        End If
    End Sub

You need to check whether the process and the control exist before performing the key press. You haven't mentioned what parameters you are passing in.

Thanks for the reply. Please find the parameters below. The process and controls are available during the key press, and the given code worked properly for years; suddenly we are facing this problem. The parameters Property_Name, Property_Value, Data, ModuleName_Datasheet, TestcaseDesr, Testcaseno, Process_Name, Testlink_Id and Sys.Process(Process_Name) all get their values from an Excel sheet; the values are taken from the respective columns. The available columns in the Excel sheet are: TcNo, ModuleName, TCDescription, Propertyname, Propertyvalue, Data, Processname, Testlinkid. And the row values are: TestCase_179, Help, MTE_Textbox_One_level, NativeClrObject.Name, MainForm, [F10], MTENZView, 18967.
When running TestComplete, at run time the tool gets the values from the Excel sheet, and we got the respective parameter values. We verified the values by debugging the VBScript. If the above-mentioned details are not ample, then please let me know.

Providing an example of the parameters you are passing, like this: MTE_Textbox_One_level("Notepad", "WndCaption", "[F10]", ...), is more readable. The VBScript version may have changed in TestComplete, and you should be checking the existence of your objects (as shown in my example).

Can you please help me with the query below? In your last reply you said the VBScript version may have changed in TestComplete. If I understood correctly, how can we verify VBScript version changes in TestComplete? Currently, we are using TestComplete version 11.20.1491.7, and since the first installation we have not updated it. If no update has been done to TestComplete, then how would the VBScript version get changed?

"We never experienced this error and TestComplete (version 11) worked properly" - I may have read this wrong, but it implied you had upgraded. The code you have provided does not indicate any point of failure. I suggested how the function should be written, which will indicate whether the process or object cannot be found. Also provide an example of the parameters you are passing to the function. You can also debug your code and step through the lines to see what is happening.

Thanks for the reply. We already debugged the code, and all parameters get their respective values from the Excel sheet. Everything worked fine two weeks ago, but suddenly we are facing this issue. Please refer to the attached document for a clearer understanding of the issue. I hope this helps you see when we face the issue. If it is still unclear, please let us know.
You still haven't made the changes I suggested, so I don't know where the point of failure is occurring. You might have multiple processes matching "Process_Name" running, or the Find() method may be returning more than one object. Why does your procedure accept 8 parameters when you only use 4?

Thanks for the reply. Sure, I will use your suggestion and run the test. Yes, we use 8 parameters, but only 4 are required by the function; the rest of the parameters we use for reports. Once the execution is completed, its status (pass, fail) is posted into the report; that's why we pass 8 parameters.
Why does glGetTexImage transfer all mipmap textures even if only the 1x1 mipmap level is requested?

I render to a floating point texture in an FBO and need the average value of all pixels of that texture on the CPU. So I thought using mipmapping to calculate the average into the 1x1 mipmap would be pretty convenient, because I save CPU computation time and only need to transfer 1 pixel to the CPU instead of, say, 1024x1024 pixels. So I use this line:

    glGetTexImage(GL_TEXTURE_2D, variableHighestMipMapLevel, GL_RGBA, GL_FLOAT, fPixel);

But despite the fact that I specifically request only the highest mipmap level, which is always 1x1 pixel in size, the time it takes for that line of code to complete depends on the size of the level-0 mipmap of the texture. That makes no sense to me. In my tests, for example, this line takes around 12 times longer for a 1024x1024 base texture than for a 32x32 base texture. The result in fPixel is correct and only contains the wanted pixel, but the timing suggests that the whole texture set is transferred, which kills the main benefit for me, because the transfer to the CPU is clearly my bottleneck.

I use Win7 and OpenGL and tested this on an ATI Radeon HD 4800 and a GeForce 8800 GTS. Does anybody know anything about this problem, or have a smart way to transfer only the one pixel of the highest mipmap to the CPU?

How do you generate the mipmap? glGenerateMipmap, I guess?

"The time it takes for that line of code to complete depends on the size of the level 0 mipmap of the texture." How are you measuring that?

@ChristianRau: Yes, I use glGenerateMipmap right before glGetTexImage.

@NicolBolas: I use the boost::timer class to measure when the line is done. That's not totally accurate, and the times vary slightly from test to test, but because it is 12 times slower, I didn't really care about the precision.
    glGenerateMipmap( GL_TEXTURE_2D );
    float *fPixel = new float[4];
    Timer.resume();
    glGetTexImage(GL_TEXTURE_2D, highestMipMapLevel, GL_RGBA, GL_FLOAT, fPixel);
    Timer.stop();

@NicolBolas: In case my question was ambiguous: for my tests I set the resolution of the FBO texture I render into; the higher I set it, the longer the glGetTexImage line takes to complete.

@lenn: Your time measurement is most likely complete rubbish due to the asynchronicity of GPU operations. You're likely measuring glGenerateMipmap (or a bunch of other previous operations), which doesn't have to be finished once the CPU call to glGenerateMipmap has returned. Try to insert a fence sync before glGetTexImage (or a simple glFinish) to wait for completion of any previous GPU operations, so you actually measure only glGetTexImage (of course only for measuring purposes; don't do that in production code).

@ChristianRau: yep, I was missing the glFinish command :) thx!

    glGenerateMipmap( GL_TEXTURE_2D );
    float *fPixel = new float[4];
    Timer.resume();
    glGetTexImage(GL_TEXTURE_2D, highestMipMapLevel, GL_RGBA, GL_FLOAT, fPixel);
    Timer.stop();

Let this be a lesson to you: always provide complete information. The reason it takes 12x longer is because you're measuring the time it takes to generate the mipmaps, not the time it takes to transfer the mipmap to the CPU. glGenerateMipmap, like most rendering commands, will not actually have finished by the time it returns. Indeed, odds are good that it won't even have started. This is good, because it allows OpenGL to run independently of the CPU. You issue a rendering command, and it completes some time later.

However, the moment you start reading from that texture, OpenGL has to stall the CPU and wait until everything that will touch that texture has finished. Therefore, your timing is measuring the time it takes to perform all operations on the texture, as well as the time to transfer the data back.
If you want a more accurate measurement, issue a glFinish before you start your timer. More importantly, if you want to perform an asynchronous read of pixel data, you will need to read into a buffer object. This allows OpenGL to avoid the CPU stall, but it is only helpful if you have other work you could be doing in the meantime.

For example, if you're doing this to figure out the overall lighting of a scene for HDR tone mapping, you should be doing this for the previous frame's scene data, not the current one. Nobody will notice. So you render a scene, generate mipmaps, and read into a buffer object; then render the next frame's scene, generate mipmaps, and read into a different buffer object; then start reading from the previous scene's buffer. That way, by the time you start reading the previous read's results, they will actually be there and no CPU stall will happen.

With the glFinish before all timer calls, the measurements now totally make sense; I was actually measuring 3 rendering calls into the FBO, which I had always found astonishingly fast ;) Thanks a lot! Now I only need to test whether it's faster to bring the big texture to the CPU and calculate the average pixel value there while rendering the next texture, or whether the mipmap calculation is the faster way :)
A Way to Unsubscribe Agents from an Address?

Hi again, I'm hoping to create temporary groups of agents during each time step of my simulation, and was planning on using a temporary publish/subscribe communication address to form each group. However, I can't seem to figure out a way to unsubscribe or remove an agent from the communication address once it has been added. I have been reading through the osBrain library API and haven't found anything about it yet. Is there a way to do this? If not, is there a better way to group agents together? Thanks so much for your help, as always! Sam

@sjanko2 If you are trying to unsubscribe, we are about to change and improve the API for that (right now it is a bit limited). Anyway, if that is the case, I could tell you how to do it. If you just want to close a connection, then try:

    agent.close('alias')

Does that work for you? If it does not, could you provide a reduced code sample? Can we close #167?

Opened a related issue: #176. Thanks for pointing out missing documentation again. :blush:

Hi there! Thank you very much for your response. Yes, feel free to close #167; my apologies for not clarifying that sooner. The solution worked! :)

For this issue, I have attempted your solution in the attached example code. In this code, two "groups" are formed with different communication addresses. The main group allows Alice to send messages to agents Bob, Eve, and Dave. The temporary group allows Alice to send messages to agents Bob and Eve only. After this is set up, two messages are sent to the main group, followed by two messages sent to the temporary group. I then attempt to close the connection with your suggested solution, agent.close('alias'). This produces an error, of which I have attached an image. What I would like to do is unsubscribe all agents from 'TempAddr', so that when a message is sent from Alice the second time, no one receives it. However, the messages sent from Alice to the main group should continue to work.
Thanks so much!! Sam

UnsubscribeExample.py:

    import time
    import osbrain
    from osbrain import run_agent
    from osbrain import run_nameserver

    osbrain.config['TRANSPORT'] = 'tcp'

    def log_Everyone(agent, message):
        agent.log_info('The message for everyone is: %s' % message)

    def log_TemporaryGroup(agent, message):
        agent.log_info('The temporary group message is: %s' % message)

    if __name__ == '__main__':
        # Start the nameserver:
        run_nameserver('<IP_ADDRESS>:1124')
        alice = run_agent('Alice')
        bob = run_agent('Bob')
        eve = run_agent('Eve')
        dave = run_agent('Dave')

        # System configuration
        addr = alice.bind('PUB', alias='main')
        bob.connect(addr, handler={'MainGroup': log_Everyone})
        eve.connect(addr, handler={'MainGroup': log_Everyone})
        dave.connect(addr, handler={'MainGroup': log_Everyone})

        Tempaddr = alice.bind('PUB', alias='TempAddr')
        bob.connect(Tempaddr, handler={'TemporaryGroup': log_TemporaryGroup})
        eve.connect(Tempaddr, handler={'TemporaryGroup': log_TemporaryGroup})

        # Send messages to all three other agents
        for _ in range(2):
            time.sleep(1)
            message = 'Hello, Everyone!'
            alice.send('main', message, topic='MainGroup')

        # Send messages only to those in the temporary group
        for _ in range(2):
            time.sleep(1)
            message = 'This is a message to the temporary group only.'
            alice.send('TempAddr', message, topic='TemporaryGroup')

        time.sleep(1)
        alice.close('TempAddr')

        # Test to see if this successfully closed the group:
        # Send messages to all three other agents (this should still work)
        for _ in range(2):
            time.sleep(1)
            message = 'Hello, Everyone!'
            alice.send('main', message, topic='MainGroup')

        # Send messages only to those in the temporary group (this should error)
        for _ in range(2):
            time.sleep(1)
            message = 'This is a message to the temporary group only.'
            alice.send('TempAddr', message, topic='TemporaryGroup')

Related: #158, if merged, will give the user more control over subscriptions/handlers. #177, if merged, will give the user more control over closing connections (sockets).
Hopefully they will be reviewed by tomorrow. Merged #177. #158 will probably be merged soon (maybe this week). @sjanko2 you might be interested in trying the latest osBrain code available in the master branch to be able to close your connections easily. Otherwise, we will probably release a new osBrain version by the end of next week. Thanks for your feedback! Closing this as #158 was already merged.
A large number of events happen in your systems every day. In this article, we'll examine what "bad" events show up in the network when the Emotet malware is executed in your systems. The network traffic sample has been downloaded from malware-traffic-analysis.net, an excellent site for finding different types of malware and the corresponding traffic. The specific malware sample we will use in this article was originally collected by Palo Alto's Unit42 Threat Intelligence and Security Consulting Team.

Analyzing a PCAP

Recorded network traffic is stored in a PCAP file, also called a packet capture. A number of different tools can be used to analyze a PCAP file. Wireshark is probably the best-known desktop application; it visualizes which packets are contained in the PCAP file. Two other tools that can be used for PCAP-file analysis, but also for continuous network monitoring, are Suricata and Zeek. Suricata takes a set of signatures of known bad indicators and produces alarms by matching the indicators against the network traffic. Zeek does general metadata extraction of network traffic and produces data about what is going on at the connection level. Zeek resembles Wireshark but works one level higher: packet streams assembled into connections instead of individual packets. In our analysis, we will use Angle by Derant, a SaaS platform that uses Suricata and Zeek. After uploading the PCAP file, it is processed by Suricata and Zeek, which can automatically open alarms. In this article, we will analyze and examine the raw results. If you wish to reproduce the steps or follow along while reading the article, you can sign up here for free: angle.derant.com. By default, Angle uses the ET Open signature set for Suricata and a standard configuration for Zeek.
Signature-based events from Suricata: 824
General metadata events from Zeek: 2676 (across 14 different output file types)

Signature-based detection works by matching existing "known bad" signatures against the traffic being analyzed. Its strength is that it is a quick and easy way to produce alarms. Its limitation is that it is difficult to find unknown bad events, as these will not have signatures. Signatures come in various forms: some of the good ones have high confidence and a low false-positive rate, while others produce more questionable alarms with a lot of false positives. Combining alarms can sometimes give higher confidence that bad things are happening.

Alarms from Suricata / ET Open

The alarms from Suricata fall into 3 categories:
1. Cobalt Strike activity
2. JA3 Dridex hashes
3. Generic Windows activity

Based on our experience, the Cobalt Strike activity is a high-signal/low-noise alarm that warrants immediate action. It has a very low false-positive rate and is uniquely associated with malicious behaviour. It can be seen in pentesting exercises but should be assumed to be the real deal. Starting up the prepared incident processes for a major incident is recommended.

The Dridex JA3 hashes are also a high-risk alarm. In general, JA3 hashes have a lot of false-positive hits, which is why it is good practice to use them as a signal combined with other alarms to verify that this is a real incident.

The generic Windows activity alarms are low-value in themselves, with a low signal-to-noise ratio. These will pop up very regularly on normal networks and should not be prioritized by themselves.

Anomaly detection is the detection of events that differ from a baseline. Defining a baseline requires some knowledge of the usual or correct functioning of the system that is the source of the events. We can't define an actual baseline in this analysis, as the PCAP file was created from, and is based on, unusual behavior.
What we can do, however, is define pseudo-baselines which would catch the malware being analyzed here.

Mimicking legitimate traffic

A number of the certificates used by the creator of the malware are self-signed. This is particularly true for the TLS connections on port 8080, where the certificate is (self-)issued to example.com. This can be an indicator in itself (and thereby actually be a signature-based detection). There are also a number of seemingly legitimate certificates that are invalid, though, so the heuristic isn't fool-proof.

Connections and Domains

Depending on whether we want to baseline the traffic from servers or clients, using connections and domains will produce the best result in our experience. For example, if the baselined host is a server, it would usually be possible to define which egress ports and domains the server communicates out on. Defining this would make it possible to detect the sudden surge in outbound traffic to ports 8080, 25, 465, and 587, as well as the outbound traffic to the C2 domain, for most server profiles. An interesting observation here is that this configuration of signature-based detection with Suricata entirely misses the spambot traffic observed on ports 25, 465, and 587.

NetFlow vs. Zeek: problems with NetFlow

This malware is easier to detect because it communicates out on ports that are unusual for most servers. Had the malware only used port 443, probably the most common port used by many servers when communicating outwards, it would be harder to detect. In that instance, what would be needed is a way to differentiate TLS connections from each other. This can be done by looking at the IP being communicated to, but often this communication will be to a generic cloud provider. TLS details such as the SNI / server name, certificate details and JA3(s) hashes then become important. These details are not present in basic network telemetry such as NetFlow, but they are in the Zeek output used here.
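The egress-port baseline described above can be sketched in a few lines of Python. The allow-list, field layout, and sample connections below are invented for illustration; a real Zeek conn.log is tab-separated with a #fields header naming columns such as id.resp_p, so a production script would parse that instead:

```python
# Sketch: flag outbound connections whose destination port falls outside a
# per-server baseline. The baseline and the sample lines are assumptions for
# illustration; real Zeek conn.log parsing would read the tab-separated fields.

BASELINE_PORTS = {80, 443}  # ports this server profile is expected to use outbound

def flag_unusual(conn_lines, baseline=BASELINE_PORTS):
    """conn_lines: iterable of 'src_ip dst_ip dst_port' strings.

    Returns the connections whose destination port is not in the baseline.
    """
    unusual = []
    for line in conn_lines:
        src, dst, port = line.split()
        if int(port) not in baseline:
            unusual.append((src, dst, int(port)))
    return unusual

sample = [
    "192.168.1.10 203.0.113.5 443",    # baseline HTTPS traffic
    "192.168.1.10 198.51.100.7 8080",  # Emotet-style C2 port
    "192.168.1.10 198.51.100.9 25",    # spambot traffic
]
for src, dst, port in flag_unusual(sample):
    print(f"unusual egress: {src} -> {dst}:{port}")
```

Against such a baseline, the surge to ports 8080, 25, 465 and 587 stands out immediately, even without any signature matching.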
The Emotet sample analyzed here is rather noisy, with the payload being a mail spam campaign. Detection is definitely possible on the network level, both with signature-based and anomaly-based detection. Follow us for more analysis of both bad and good network traffic.

Author: Rasmus Have
Co-founder and IT-security specialist at Derant
Rasmus has 20+ years of experience doing operational blue- and red-team work in various organisations.
from tortoise.contrib import test
from tortoise.tests import testmodels


async def create_participants():
    test1 = await testmodels.RaceParticipant.create(
        first_name="Alex",
        place=testmodels.RacePlacingEnum.FIRST,
        predicted_place=testmodels.RacePlacingEnum.THIRD
    )
    test2 = await testmodels.RaceParticipant.create(
        first_name="Ben",
        place=testmodels.RacePlacingEnum.SECOND,
        predicted_place=testmodels.RacePlacingEnum.FIRST
    )
    test3 = await testmodels.RaceParticipant.create(
        first_name="Chris",
        place=testmodels.RacePlacingEnum.THIRD
    )
    test4 = await testmodels.RaceParticipant.create(
        first_name="Bill"
    )
    return test1, test2, test3, test4


class TestEnumField(test.TestCase):
    """Tests the enumeration field."""

    async def test_enum_field_create(self):
        """Asserts that the new field is saved properly."""
        test1, _, _, _ = await create_participants()
        self.assertIn(test1, await testmodels.RaceParticipant.all())
        self.assertEqual(test1.place, testmodels.RacePlacingEnum.FIRST)

    async def test_enum_field_update(self):
        """Asserts that the new field can be updated correctly."""
        test1, _, _, _ = await create_participants()
        test1.place = testmodels.RacePlacingEnum.SECOND
        await test1.save()
        tied_second = await testmodels.RaceParticipant \
            .filter(place=testmodels.RacePlacingEnum.SECOND)
        self.assertIn(test1, tied_second)
        self.assertEqual(len(tied_second), 2)

    async def test_enum_field_filter(self):
        """Assert that filters correctly select the enums."""
        await create_participants()
        first_place = await testmodels.RaceParticipant \
            .filter(place=testmodels.RacePlacingEnum.FIRST) \
            .first()
        second_place = await testmodels.RaceParticipant \
            .filter(place=testmodels.RacePlacingEnum.SECOND) \
            .first()
        self.assertEqual(first_place.place, testmodels.RacePlacingEnum.FIRST)
        self.assertEqual(second_place.place, testmodels.RacePlacingEnum.SECOND)

    async def test_enum_field_delete(self):
        """Assert that delete correctly removes the right participant by their place."""
        await create_participants()
        await testmodels.RaceParticipant.filter(place=testmodels.RacePlacingEnum.FIRST).delete()
        self.assertEqual(await testmodels.RaceParticipant.all().count(), 3)

    async def test_enum_field_default(self):
        """Assert that the default placing is applied when none is given."""
        _, _, _, test4 = await create_participants()
        self.assertEqual(test4.place, testmodels.RacePlacingEnum.DNF)

    async def test_enum_field_null(self):
        """Assert that filtering by None selects the records which are null."""
        _, _, test3, test4 = await create_participants()
        no_predictions = await testmodels.RaceParticipant.filter(predicted_place__isnull=True)
        self.assertIn(test3, no_predictions)
        self.assertIn(test4, no_predictions)
Australia is known for its many creepy crawlers, and the residents seem to be used to seeing spiders, snakes and random bugs on a regular basis. In fact, one family recently found a snake hanging around inside their house, in the children's playroom of all places. Residents always have to be prepared to come face to face with these critters, as they tend to sneak up on them in the most bizarre places. Let's not forget the massive size of these critters. Compared to the snakes and spiders found in the United States, the bugs and reptiles in Australia are usually quite a bit bigger. Or maybe it's just that they make their presence known more often. One massive python certainly made its presence known recently. When two police officers were out on patrol one night near Wujul Wujul in northeast Australia, they came across a python of record-breaking length. You know that a snake is long when even the residents of Australia have to pull over for a photo op of the reptile. The two officers spotted the snake from inside the car where they were sitting, but they wanted to see just how big it was, so Sergeant Ben Tome sent his colleague, Acting Senior Constable Chris Kenny, out to stand next to the beast. The photo says it all, as the Queensland police officer looks miniature next to the python, which looks like it could wrap around the officer's body several times. Wanting to show off their unusually big find, the officers posted the photo alongside the following comment…

"Boss, we're going to need a bigger ladder. During a night patrol near Wujul Wujul officers had to wait for this scrub python to cross the road."

The snake didn't stick around long and slithered away shortly after the photo was taken. The photo received nearly 18,000 shares and 31,000 reactions. Commenters shared the following…

"I need a map of the area and surrounds (say around 20000000kqm) to ensure I will never go near that place."

"I would have to turn around.
My feet wouldn't even be able to touch the pedals…. they would be on the windscreen? That's a whoppa!!!"

The snake, which was identified as a scrub python, was estimated to be five meters in length. Scrub pythons are Australia's largest snake and can grow to be eight meters long. While they aren't venomous, they kill their prey by constriction. The scary part about these scrub pythons is that they have a tendency to blend into their surroundings, which is probably one of the God-given talents that allows them to sneak up on their prey. In the photo taken by the police officers, the snake is massive, but it matches the colors of the wooded area it is slithering across. The fact that they can sneak up on you in such a way makes them even scarier for some. This officer is one brave man for approaching the beast, but apparently, this is fairly normal in Australia. Every time you share an AWM story, you help build a home for a disabled veteran.
Objectives
• To introduce software verification and validation and to discuss the distinction between them
• To describe the program inspection process and its role in V & V
• To explain static analysis as a verification technique
• To describe the Cleanroom software development process

Topics covered
• Verification and validation planning
• Software inspections
• Automated static analysis
• Cleanroom software development

Verification vs validation
• Verification: "Are we building the product right?"
• The software should conform to its specification.
• Validation: "Are we building the right product?"
• The software should do what the user really requires.

The V & V process
• Is a whole life-cycle process - V & V must be applied at each stage in the software process.
• Has two principal objectives:
• The discovery of defects in a system;
• The assessment of whether or not the system is useful and useable in an operational situation.

V & V goals
• Verification and validation should establish confidence that the software is fit for purpose.
• This does NOT mean completely free of defects.
• Rather, it must be good enough for its intended use, and the type of use will determine the degree of confidence that is needed.

V & V confidence
• Depends on the system's purpose, user expectations and marketing environment.
• Software function
• The level of confidence depends on how critical the software is to an organisation.
• User expectations
• Users may have low expectations of certain kinds of software.
• Marketing environment
• Getting a product to market early may be more important than finding defects in the program.

Static and dynamic verification
• Software inspections. Concerned with analysis of the static system representation to discover problems (static verification).
• May be supplemented by tool-based document and code analysis.
• Software testing.
Concerned with exercising and observing product behaviour (dynamic verification).
• The system is executed with test data and its operational behaviour is observed.

Program testing
• Can reveal the presence of errors, NOT their absence.
• The only validation technique for non-functional requirements, as the software has to be executed to see how it behaves.
• Should be used in conjunction with static verification to provide full V & V coverage.

Types of testing
• Defect testing
• Tests designed to discover system defects.
• A successful defect test is one which reveals the presence of defects in a system.
• Covered in Chapter 23
• Validation testing
• Intended to show that the software meets its requirements.
• A successful test is one that shows that a requirement has been properly implemented.

Testing and debugging
• Defect testing and debugging are distinct processes.
• Verification and validation is concerned with establishing the existence of defects in a program.
• Debugging is concerned with locating and repairing these errors.
• Debugging involves formulating a hypothesis about program behaviour, then testing these hypotheses to find the system error.

V & V planning
• Careful planning is required to get the most out of testing and inspection processes.
• Planning should start early in the development process.
• The plan should identify the balance between static verification and testing.
• Test planning is about defining standards for the testing process rather than describing product tests.

The structure of a software test plan
• The testing process.
• Requirements traceability.
• Tested items.
• Testing schedule.
• Test recording procedures.
• Hardware and software requirements.
• Constraints.

Software inspections
• These involve people examining the source representation with the aim of discovering anomalies and defects.
• Inspections do not require execution of a system, so they may be used before implementation.
• They may be applied to any representation of the system (requirements, design, configuration data, test data, etc.).
• They have been shown to be an effective technique for discovering program errors.

Inspection success
• Many different defects may be discovered in a single inspection. In testing, one defect may mask another, so several executions are required.
• They reuse domain and programming knowledge, so reviewers are likely to have seen the types of error that commonly arise.

Inspections and testing
• Inspections and testing are complementary and not opposing verification techniques.
• Both should be used during the V & V process.
• Inspections can check conformance with a specification but not conformance with the customer's real requirements.
• Inspections cannot check non-functional characteristics such as performance, usability, etc.

Program inspections
• Formalised approach to document reviews.
• Intended explicitly for defect detection (not correction).
• Defects may be logical errors, anomalies in the code that might indicate an erroneous condition (e.g. an uninitialised variable), or non-compliance with standards.

Inspection pre-conditions
• A precise specification must be available.
• Team members must be familiar with the organisation standards.
• Syntactically correct code or other system representations must be available.
• An error checklist should be prepared.
• Management must accept that inspection will increase costs early in the software process.
• Management should not use inspections for staff appraisal, i.e. finding out who makes mistakes.

Inspection procedure
• System overview presented to inspection team.
• Code and associated documents are distributed to inspection team in advance.
• Inspection takes place and discovered errors are noted.
• Modifications are made to repair discovered errors.
• Re-inspection may or may not be required.

Inspection checklists
• Checklist of common errors should be used to drive the inspection.
• Error checklists are programming language dependent and reflect the characteristic errors that are likely to arise in the language.
• In general, the 'weaker' the type checking, the larger the checklist.
• Examples: initialisation, constant naming, loop termination, array bounds, etc.

Inspection rate
• 500 statements/hour during overview.
• 125 source statements/hour during individual preparation.
• 90-125 statements/hour can be inspected.
• Inspection is therefore an expensive process.
• Inspecting 500 lines costs about 40 man-hours of effort - about £2800 at UK rates.

Automated static analysis
• Static analysers are software tools for source text processing.
• They parse the program text and try to discover potentially erroneous conditions and bring these to the attention of the V & V team.
• They are very effective as an aid to inspections - they are a supplement to but not a replacement for inspections.

Stages of static analysis
• Control flow analysis. Checks for loops with multiple exit or entry points, finds unreachable code, etc.
• Data use analysis. Detects uninitialised variables, variables written twice without an intervening assignment, variables which are declared but never used, etc.
• Interface analysis. Checks the consistency of routine and procedure declarations and their use.

Stages of static analysis
• Information flow analysis. Identifies the dependencies of output variables. Does not detect anomalies itself but highlights information for code inspection or review.
• Path analysis. Identifies paths through the program and sets out the statements executed in that path. Again, potentially useful in the review process.
• Both these stages generate vast amounts of information. They must be used with care.
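As a toy illustration of the data use analysis stage, Python's standard ast module can parse source text and flag names that are assigned but never read. This is a deliberately naive sketch (it ignores scopes, attributes and augmented assignments), not a real static analyser:

```python
# Toy data-use analysis: parse source text and report variables that are
# assigned but never subsequently read. Deliberately naive -- no scoping,
# no attribute handling -- real analysers do far more.
import ast

def unused_assignments(source):
    tree = ast.parse(source)
    assigned, read = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                assigned.add(node.id)
            elif isinstance(node.ctx, ast.Load):
                read.add(node.id)
    return sorted(assigned - read)

code = """
x = 1
y = 2
print(x)
"""
print(unused_assignments(code))  # y is written but never read
```

Because the analysis works on the parsed text alone, nothing is executed: this is exactly why such checks can run before a program is complete enough to test.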
Use of static analysis
• Particularly valuable when a language such as C is used, which has weak typing and hence many errors undetected by the compiler.
• Less cost-effective for languages like Java that have strong type checking and can therefore detect many errors during compilation.

Verification and formal methods
• Formal methods can be used when a mathematical specification of the system is produced.
• They are the ultimate static verification technique.
• They involve detailed mathematical analysis of the specification and may develop formal arguments that a program conforms to its mathematical specification.

Arguments for formal methods
• Producing a mathematical specification requires a detailed analysis of the requirements, and this is likely to uncover errors.
• They can detect implementation errors before testing when the program is analysed alongside the specification.

Arguments against formal methods
• Require specialised notations that cannot be understood by domain experts.
• Very expensive to develop a specification and even more expensive to show that a program meets that specification.
• It may be possible to reach the same level of confidence in a program more cheaply using other V & V techniques.

Cleanroom software development
• The name is derived from the 'Cleanroom' process in semiconductor fabrication. The philosophy is defect avoidance rather than defect removal.
• This software development process is based on:
• Incremental development;
• Formal specification;
• Static verification using correctness arguments;
• Statistical testing to determine program reliability.

Cleanroom process characteristics
• Formal specification using a state transition model.
• Incremental development where the customer prioritises increments.
• Structured programming - limited control and abstraction constructs are used in the program.
• Static verification using rigorous inspections.
• Statistical testing of the system (covered in Ch. 24).
Formal specification and inspections
• The state-based model is a system specification, and the inspection process checks the program against this model.
• The programming approach is defined so that the correspondence between the model and the system is clear.
• Mathematical arguments (not proofs) are used to increase confidence in the inspection process.

Cleanroom process teams
• Specification team. Responsible for developing and maintaining the system specification.
• Development team. Responsible for developing and verifying the software. The software is NOT executed or even compiled during this process.
• Certification team. Responsible for developing a set of statistical tests to exercise the software after development. Reliability growth models are used to determine when reliability is acceptable.

Cleanroom process evaluation
• The results of using the Cleanroom process have been very impressive, with few discovered faults in delivered systems.
• Independent assessment shows that the process is no more expensive than other approaches.
• There were fewer errors than in a 'traditional' development process.
• However, the process is not widely used. It is not clear how this approach can be transferred to an environment with less skilled or less motivated software engineers.

Key points
• Verification and validation are not the same thing. Verification shows conformance with specification; validation shows that the program meets the customer's needs.
• Test plans should be drawn up to guide the testing process.
• Static verification techniques involve examination and analysis of the program for error detection.

Key points
• Program inspections are very effective in discovering errors.
• Program code in inspections is systematically checked by a small team to locate software faults.
• Static analysis tools can discover program anomalies which may be an indication of faults in the code.
• The Cleanroom development process depends on incremental development, static verification and statistical testing.
Although an x86 processor, Larrabee is a different undertaking than Intel's other chip efforts, offering a multi-purpose many-core unit that can handle graphics functions as well as broader computing ones. Performance graphs and more detail can be found in Intel's official 2008 SIGGRAPH paper on Larrabee, entitled 'Larrabee: A Many-Core x86 Architecture for Visual Computing'. The first versions of Larrabee are expected to arrive later this year, with the first commercial products powered by the chip, including video cards, pegged to roll out by early 2010.

What will the products that emerge out of the Larrabee project provide that is not already on the market?
One of the key things that the Larrabee architecture gives games companies is a tremendous amount of additional flexibility and programmability - things that they've been asking for for many years. And that provides them with the ability to do things and innovate in very different ways than they've been able to before. First and foremost it is going to be a very high-performing graphics and throughput architecture, and it'll provide all of that through DirectX and OpenGL. The majority of games and usage of the Larrabee architecture will be through that. And there will be a smaller percentage of games developers that are going to go out and innovate on top of what we provide through what we call the Larrabee native interface.

Nvidia and ATi currently represent something in the region of 98 per cent of the GPU market. How will you convince the industry you provide a better alternative?
I think a big part of that is that we've been working with a lot of game developers. All of the input and design of Larrabee comes with the involvement of various software firms in the industry. It's very much driven by feedback from the industry, telling us the types of thing they would like to do, and how they can do them. We've tried to incorporate that. That 98 per cent is what Larrabee is targeted at.
And we are not new to the graphics market either. We are the biggest graphics vendor on this planet; most of the graphics components out there are Intel integrated graphics. Obviously you're right, if you look at the discrete graphics market it is them, but it's not that we're entirely new to this world. Larrabee has the ability to be more than graphics. It's designed as a throughput architecture. Now that doesn't mean that it can do everything in the world, but there are many things that are applicable to Larrabee. Graphics is probably the predominant one in the overall industry, if you will. We've talked with other ISVs, the medical imaging sector, oil, energy and gas, general image processing, and the financial market as well. There is applicability of the Larrabee architecture to those segments.

How important will relationships with games developers be to the project?
Very important. We've had a very heavy involvement in the graphics market overall for quite some time. We're being very open and customer orientated by listening to the ISVs, talking to them and asking them what it is Larrabee needs to do with regards to the general architecture. A lot of what is behind the Larrabee architecture relies on its software infrastructure, which allows us to change capabilities very fast. We don't necessarily need to rev the hardware architecture; we can literally change the software architecture. If we want to change the software rasteriser, or we want to change the pixel shading logic processing, we can do that in software, which gives us a much faster turnaround. A lot of that is being driven very closely with the software companies that we're working with.

What elements of Larrabee can be utilised by games developers?
All of them. It really comes back to this core. In the simplest case games developers will simply be able to use Larrabee like a DirectX and an OpenGL card.
Theoretically they shouldn't have to do anything to utilise Larrabee as a DirectX or OpenGL card. And that probably will be the majority of the usage of the Larrabee architecture for them. For some additional people who want to go out there and innovate, we give them a very flexible interface that allows them to program all the way down to the metal. And that flexibility is extremely powerful, in that they can decide how they want to do rasterisation, decide how they want to do load balancing, how they want to do all different kinds of things. We've pulled out a couple of examples, such as the irregular Z-buffer as a way to do really powerful, more realistic shadows. And we call out order-independent transparency as another, which is something developers have been asking for for a long time. But it's very hard to do on current GPU hardware.

In what ways will programming for many-core processors change how games are developed?
I don't think Intel could say all the different ways it is going to change. I think it would be arrogant of us to say we know exactly how it's going to happen. We have some ideas and we share them with the games developers, but we spend a lot of time listening to what they think. We talk to them about the architecture and the software that we have, and then ask them a lot of questions, like how would you use it, and does it do the things that you want it to do. And so I think that we have some very good ideas; the software developers are going to come up with a lot of different ones. And I think the industry in general and those developers are learning what they can really do beyond what they've done in the past. I think for software developers it's one of the bigger inflection points in terms of programming for gaming and graphics. Graphics used to be, before we had graphics accelerators, done completely on CPUs.
And it was completely flexible - people decided they wanted to do rasterisation, they wanted to do ray tracing, they wanted to do global illumination, but during that time all those graphics were not real time. The processors were too slow and the algorithms were so complex that you couldn't do it in real time. With the push to get graphics towards real time came hardware accelerators. But in order to make these hardware accelerators work and get real-time graphics, they made a lot of concessions. They implemented a fixed-function pipeline. So they said 'OK, rasterisation has to happen this way, texture look-up has to happen this way, shading has to happen this way'. Since then the standards have evolved to add more and more levels of programmability, with pixel shading etc. But now we think Larrabee allows us to come full circle and have complete programmability, and still have all the processing power to do everything that we want to do.

Does Intel want to see Larrabee used in the next generation of games consoles?
It's definitely something we would want to discuss with the console vendors, and we hope that the architecture we're providing is something that is very compelling and interesting for them.

Games developers already complain that multi-threading and exotic processors currently on the market put pressure on their teams; why should they care about Larrabee?
Larrabee is going to be a DirectX and OpenGL solution. That means that for the majority of developers, even if their resources are already strained, it should not be a significant challenge for them at all. This is like any other graphics solution. And it will be performance competitive. But there are a few leading game companies out there that are saying 'we want to push the envelope, we want to push the industry further.' And they're the ones that are going to work and experiment more on the Larrabee native interface as they go. In order to use Larrabee you don't have to use this new native interface.
You can continue to use it just like an OpenGL or DirectX solution. But if you want the ability to go a little bit further, to try things that you haven't tried before, or that haven't been possible before, that's what this third interface provides for you.

Do you hope that developers see Larrabee as a multi-function processor and not just a graphics one?
Yeah, I think the Larrabee architecture will be seen in that way. That's something that we're looking at. Like I say, we've talked to developers beyond just the gaming and graphics segment to make sure this is largely a throughput architecture, of which gaming and graphics is probably the largest of the throughput segments, if you will.
When working on our add-ons, we at StiltSoft are using Bitbucket Server (Stash) for distributed version control management and code collaboration. Some of you have also picked Bitbucket Server, others went with another solution, e.g. Bitbucket Cloud or GitHub, or are using several platforms at once. Whatever tool you work in, mirroring repositories can come in useful in certain cases. That is why last month we posted an article that covered the benefits and how-tos of mirroring remote repositories from Bitbucket Server (Stash). Meanwhile, creating mirrors in Bitbucket Server rather than mirroring from it might be more relevant for some of our blog readers, so today’s post is about that. There are a number of goals which can be attained by mirroring repositories from Bitbucket Cloud or GitHub to Bitbucket Server (Stash). Among them: - protecting yourself from downtime when Bitbucket Cloud or GitHub are unavailable - consolidating repositories in one place - facilitating the process of migration from Bitbucket Cloud or GitHub to Bitbucket Server How to mirror Bitbucket Cloud repositories There are several ways to arrange automatic updates of mirrored repositories. In our examples, we’ll be mirroring a Bitbucket Cloud repository to Bitbucket Server. Use an add-on for Bitbucket Server ScriptRunner for Bitbucket Server/Stash can come in handy when you are looking to mirror some or all of your Bitbucket Cloud repositories to Bitbucket Server. It’s very simple. - Once the add-on is installed, navigate to Bitbucket Server Administration and select Built-in Scripts in the ScriptRunner section. - There, choose Mirror Bitbucket Team. You can mirror repositories both from team and user Bitbucket Cloud accounts. - Enter your Bitbucket Cloud Team or User, select the target project in Bitbucket Server, and provide your Bitbucket Cloud user credentials. - Select one of the synchronization options (none, install hook or poll).
If you choose: - ‘install hook’, Bitbucket Cloud will call your Bitbucket Server instance when there are changes in repositories, which will trigger synchronization - ‘poll’, each remote repository will be polled every 5 minutes - ‘none’, no synchronization will be performed - In the Regex field, there is the ‘.*’ regular expression by default. If you leave it as it is, all repositories will be mirrored from your Bitbucket Cloud. To filter the set of repositories, insert a corresponding regular expression. The regular expression is matched against the repository name, e.g. for the repository ‘StiltSoft Test’ you could use: - Another useful feature: if you’d like new Bitbucket Server repositories to be created and synchronized when there are newly-added remote repositories in Bitbucket Cloud, mark the ‘Sync new’ checkbox. - When all is set, click Run. You’ll see a table with the repositories that are being mirrored. Initially there’s the ‘Create’ label. When the process is completed, it will say ‘Exists’ instead. You may tail the application log file to track the progress. - Once local repositories are created, you can refer to Configure Mirrored Repositories in Built-in Scripts to view and check the status of your mirrored repositories and change the synchronization type. Use a Continuous Integration tool If you have some Continuous Integration solution, e.g. TeamCity, in your arsenal, you can use it. Here is one option for setting up automatic updates of mirrors in Bitbucket Server using TeamCity. - To use SSH authentication, copy an SSH key from the TeamCity server with the build agent you’ll be using. Then add this SSH key in your Bitbucket Cloud and Bitbucket Server account settings.
- Then you need to create a repository in Bitbucket Server - After that, create a local copy of the source repository on the build agent (don’t forget to add the Bitbucket Server repository as a remote): cd /home mkdir mirror cd mirror git clone firstname.lastname@example.org:kkolina/demonstration-repository.git cd demonstration-repository git remote add bitbucket ssh://email@example.com:7999/adp/demonstration-repository.git Now you should make some changes to the build configuration in TeamCity: - Add a Command Line Runner build step with a script for updating mirrored repositories that will run every time this build step is invoked. An example script: cd /home/mirror/demonstration-repository git pull git push --all bitbucket git push --tags bitbucket You may also add a VCS Trigger that will add a build to the queue when a VCS check-in is detected in the repository you are mirroring. - Before adding a trigger, you should attach a VCS root in the Version Control Settings: With the URL of the original repository and Bitbucket Cloud authentication settings: - Now you can add a VCS Trigger: Use an OS job scheduler You can also use OS job schedulers (Cron, Windows Task Scheduler): configure an external job and schedule the launch of scripts that will pull changes and push them to a mirrored repository. - First we clone a repository: cd /home/mirror git clone firstname.lastname@example.org:kkolina/demonstration-repository.git cd demonstration-repository git remote add bitbucket ssh://email@example.com:7999/adp/demonstration-repository.git - If you use Cron, you’ll need to add a command to your crontab file that will run periodically on a given schedule and update the mirrored repositories, e.g.: 0 0 * * * root /home/mirror/update_repos.sh This command will run as the user ‘root’ every day at midnight and trigger a script that has a number of commands to perform the update of mirrored repositories.
For the case when you have one mirrored repository that should be updated, the content of ‘update_repos.sh’ will look like this: #!/bin/bash cd /home/mirror/demonstration-repository git pull git push --all bitbucket git push --tags bitbucket If you need to mirror more than one repository, include the four commands you can see above for each repository in ‘update_repos.sh’.
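For several repositories, the per-repository commands above can be folded into a loop instead of being repeated. A hypothetical sketch (the /home/mirror layout and the ‘bitbucket’ remote name follow the examples above):

```shell
#!/bin/bash
# Hypothetical multi-repository 'update_repos.sh': sync every local clone
# under a base directory (default /home/mirror) with its 'bitbucket' remote.
update_repos() {
    local base="${1:-/home/mirror}"
    local repo
    for repo in "$base"/*/; do
        [ -d "$repo" ] || continue        # skip if the glob matched nothing
        (
            cd "$repo" || exit 1
            git pull                      # refresh from the origin remote
            git push --all bitbucket      # mirror all branches
            git push --tags bitbucket     # mirror all tags
        )
    done
}

update_repos "$@"
```

Each repository is updated in a subshell, so a failure in one mirror does not stop the others.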
Golf simulators are a perfect way to train and enhance your game. It is also lots of fun when you play with your family and friends. A golf simulator utilizes different sensors to track your swing and ball data. This data is used to create a realistic simulation of a round of golf. You can also use the simulator to train and play effective matches of golf, and can even play against other players on the internet. There are a lot of different golf simulators available in the market, at different prices starting from $1000 and going up to $10,000. Which one is best for you depends on your needs. If you are searching for a golf simulator only for your training, you can go for something at the lower end of that range. However, if you are searching for something more realistic and immersive, you need to invest in a more costly simulator. Know more about indoor golf simulators. Here are a few things that you need to consider before selecting a golf simulator. - Golf simulators may vary in size from a few feet to a hundred feet. So be sure to choose a simulator that will fit your house accordingly. - Golf simulators offer different features: you can practice virtually, track your swing path and face angle, and compete against other players on the internet. Give a thought to the features and then choose the simulator accordingly. - When it comes to price, some simulators are very budget-friendly and some are expensive; choose wisely which simulator fits your budget. If you are passionate about enhancing your game then a golf simulator is a very good investment. It helps you to practice and learn new methods, and you can even play against anyone in the world. There are a few golf simulators under 5000 dollars. Garmin Approach R10. This is a great golf simulator to play indoors, and it is budget-friendly, so whoever is looking to enhance their game can go for it.
The Garmin Approach R10 is a new launch in the market. You cannot beat the portability and price tag on this launch monitor. HomeCourse Pro 180. The HomeCourse is the best projector screen for those who are searching for flexibility. Once you click a button, the screen rolls down and you have an indoor golf simulator. Once you are done, you can click again and it retracts to its original position. Because it is retractable, it does not take up the entire space of the room. Short Throw Projector, its shield, and cable. The short throw projector combines wonderfully with the projector shield device. The projector shield lets you run the projector from the floor instead of hanging it from the ceiling. Because it is flexible, you don’t need to fix it to the floor; once you have finished using it, you can put it away until you use it again.
I have several posts I would like to do, but this month has been very hectic. This encouraged me to revise how I deal with my tendency to overcommit myself to new projects, while managing to meet deadlines and still not work on the weekends. The result is that I am posting this instead of other alternative posts. See why. 1) Externalize fun work for out of the office. When I arrive home I play with my daughter, so there is no way I can do actual computer work there. However, most of my work involves thinking, and I can think in many places. My favourites are on my bike on the way to work or when running. I don’t only think about work then; I also wonder about other stuff or picture myself in a Tour de France time trial. However, I find that some fun problems are better solved in that context. Why? Because if you don’t come up with an idea in 5 minutes while sitting in front of your computer you feel desolate, but it is ok to not have ideas if you are already doing something (e.g. running). Also, because if you are at your computer you tend to try (and put hands on) the first thing your intuition tells you will work. This way it is easy to get lost in the details or do overcomplicated things that won’t work in the end. However, while running, you are forced to develop all the necessary steps and think abstractly about whether they will work, discarding bad ideas much faster. Plus, blood is pumped to your brain continuously, boosting your potential (or so I hope). But take notes as soon as you get out of the shower! 2) Minimum effort rule. I usually start with the task that requires the least time to be completed. That way I can take it off my list and maximize the chances of moving forward on any project. If I can solve something (e.g. a review or a simulation for a coauthor) in 1 to 4 hours, I just do it and have it done fast. Answering a question by email? I’ll do it asap and archive it.
These short tasks are usually related to collaborations, and completing them also makes those people happy and allows the project to keep moving. 3) Block time. The minimum effort rule fails when you start spending most of your time completing short tasks, so there is no time left to work on long, daunting (but exciting) projects. So I decided to block at least 2-3 full mornings or days a week to work on that kind of long-term project. No answering emails, no improvised meetings, no multitasking in the blocked time slots. I thought about this post on my bike ride this morning; I knew it would be written fast, so I did it as soon as I had some spare time, but not this morning. This morning was blocked for some other analysis. I’m still in the process, so comment on what works for you!
Checking whether v-on handler is a function invocation is broken Version 2.6.11 Reproduction link https://jsfiddle.net/adamsol/pknr8dae/ Steps to reproduce Click the buttons. What is expected? All the buttons should behave in the same way: a message should appear below. What is actually happening? Only the first button works correctly. See #11893 for the origin of the issue. The problem lies probably here: https://github.com/vuejs/vue/blob/5255841aaff441d275122b4abfb099b881de7cb5/packages/vue-template-compiler/build.js#L3801 The regexes used do not take into account cases such as additional spaces, parentheses, or chained function invocations. As a result, a promise is correctly returned only in the first case in the repro, and in all the other cases errorHandler won't capture the exception thrown in the async method. The difference in the generated code (return is present only in the first case): https://template-explorer.vuejs.org/#<div id%3D"app"> <button %40click%3D"click(1)"> click(1) <%2Fbutton> <button %40click%3D"click (2)"> click (2) <%2Fbutton> <button %40click%3D"click((3))%22%3E%0A%20%20%20%20click((3))%0A%20%20%3C%2Fbutton%3E%0A%20%20%3Cbutton%20%40click%3D%22(click(4))%22%3E%0A%20%20%20%20(click(4))%0A%20%20%3C%2Fbutton%3E%0A%20%20%3Cbutton%20%40click%3D%22click(5).then()%22%3E%0A%20%20%20%20click(5).then()%0A%20%20%3C%2Fbutton%3E%0A%3C%2Fdiv%3E Suggested solution: either add return in every case, or don't add it at all, so that the behaviour is consistent. If checking for the function invocation is crucial, then the code must be parsed in some other way. See https://github.com/vuejs/vue/issues/7628 Handling all possible cases will require a parser instead of a regex, but realistically speaking, people won't write @click="method((2))". The only one that would be worth adding support for is @click="click(5).catch(() => {})", but it has the same problem: it requires full parsing to make it fully consistent.
You can wrap the call with a function: () => method().catch(() => {}) or add a method to your component, which is preferred in such scenarios because otherwise the code becomes difficult to read. The same goes for more complicated expressions where parentheses are required, like mathematical expressions. That being said, Vue 3 does support these syntaxes, but it has a full parser built into it. So maybe someone will find a way to improve the existing regex. The documentation (https://vuejs.org/v2/api/#errorHandler) says: In 2.6.0+, [...] if any of the covered hooks or handlers returns a Promise chain (e.g. async functions), the error from that Promise chain will also be handled. So I think the current behaviour should be considered a bug, since v-on is one of the covered hooks, .then and .catch create promise chains, but errors are not handled. Also, the behaviour is inconsistent even between method() and method (), which is very surprising. Together with #10009, this makes errorHandler hardly usable with regard to async methods. For anyone who stumbles upon this issue: to catch all errors in promises, use the unhandledrejection event as described here: https://stackoverflow.com/a/52076738.
Note that you still need to set up Vue.config.errorHandler, since in the default handler Vue silences the errors that it manages to catch.
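To see why the compiler misses these cases, here is a paraphrase of the invocation check from the compiler source linked above. The two regexes mirror the ones in vue-template-compiler, but treat them as an approximation of the exact build:

```javascript
// Approximation of the v-on invocation check in vue-template-compiler.
// `return` is prepended only when stripping a trailing argument list
// leaves a simple method path, which is why whitespace and extra
// parentheses defeat the check.
const fnInvokeRE = /\([^)]*?\);*$/;
const simplePathRE = /^[A-Za-z_$][\w$]*(?:\.[A-Za-z_$][\w$]*|\['[^']*?']|\["[^"]*?"]|\[\d+]|\[[A-Za-z_$][\w$]*])*$/;

const isFunctionInvocation = (value) =>
  simplePathRE.test(value.replace(fnInvokeRE, ''));

console.log(isFunctionInvocation('click(1)'));        // true:  return is added
console.log(isFunctionInvocation('click (2)'));       // false: trailing space survives the strip
console.log(isFunctionInvocation('click(5).then()')); // false: 'click(5).then' is not a simple path
```

A proper fix would need an expression parser rather than stripping with a regex, which is what Vue 3's compiler does.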
Considerations when upgrading Ubuntu from 18.04 to 22.04 -- software development POV I have been using Ubuntu 18.04 LTS for over three years now and have been considering moving to newer LTS versions for some time. However, the thought of working software breaking or requiring changes has been the blocker for me. I work with C++11/14 (GCC 8.4.0) with ROS 1 and LCM for message passing. I have Python 2.7.17/3.6.9/3.8.0 to handle my scripting needs. Ubuntu 22.04 comes with GCC 11 by default and does not support lower GCC versions. For robotics development, it supports ROS 2 but not ROS 1, so any ROS 1 work would need to be moved to Docker-container-based development. I want to understand what one must consider when upgrading Ubuntu to higher versions (18.04 to 20.04 or even 22.04 in my case) from a software development point of view (a robotics POV would be a plus). I use Docker for software development, so working in an Ubuntu 18.04 container if some software breaks after the upgrade is one option, but I want to understand what it is that might break or even what might need attention after an upgrade. Could someone share some additional information about upgrading and my options here? EDIT: Could someone share their experiences of updating their C++/Python projects after doing a fresh install to, say, Ubuntu 22.04? I want to know what potential problems in software projects need to be considered when updating the OS. Please ask about the last paragraph in a new question because this question needs focus (it's too broad because of the EDIT:). Have you updated your code to Python 3? Python 2 is still available, but not installed by default on Ubuntu 22.04. Upgrading from one LTS Ubuntu version (18.04) to an LTS version that is two LTS versions newer (22.04) is a complicated upgrade that could introduce problems in a wide variety of ways. I see three remaining logical choices for you. Enable free Extended Security Maintenance in Ubuntu 18.04.
Extended Security Maintenance (ESM) extends Ubuntu's support lifetime from 5 years to 10 years. This would extend Ubuntu 18.04's security coverage until April 2028. ESM is free for personal use on up to 5 machines (limitations apply). All you need is an Ubuntu One account. All the software that you already have installed in 18.04 will continue to work with no problems. Upgrade from Ubuntu 18.04 to Ubuntu 20.04 and struggle to solve any new problems that may occur as a result of the upgrade. Reduce the potential problems of upgrading Ubuntu 18.04 to a newer version by backing up your data and fresh installing the latest LTS version of Ubuntu, which is Ubuntu 22.04. I added an EDIT to my question. I agree that a fresh install is better, but what consequences of it do I need to consider in my software development work? For example, updating the compiler is one. What other such changes need to be considered? That edit makes your question closable by reviewers as needs focus because it would be too broad a set of topics for one question. Please ask about that last paragraph in a new question.
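The container approach mentioned in the question is one way to hedge the upgrade: keep the legacy toolchain in an 18.04 image while the host moves to 22.04. A minimal sketch, with illustrative package choices (a real ROS 1 setup would more likely start from the official ros:melodic image):

```dockerfile
# Hypothetical Ubuntu 18.04 dev container keeping the old toolchain
# available after the host upgrade. Packages here are illustrative
# assumptions, not a tested ROS 1 configuration.
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y \
    gcc-8 g++-8 python2.7 python3.6 \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /workspace
```

Bind-mounting the source tree into /workspace lets the same checkout be built with either toolchain during the transition.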
There are cases when using the terminal to quit an application is helpful, especially when it’s on a remote computer. I often use this at the office to kick off people who are holding back an update that needs to be done, or who are just playing around and not getting work done. This same action could be done via the Finder, using Screen Sharing, but it is more stealthy to do it via the Terminal. Three Ways To End It Let’s say you want to quit the application Safari on your co-worker’s computer, since you can see she is just looking at lolcat images on the other end of the office. You would prefer not to make a scene, so you will just quit the application for her, so she can focus on her work again. Method 1: The soft quit with AppleScript Did you know you can actually run AppleScript commands via the Terminal? By using the “osascript” command you can execute any valid AppleScript. This method quits the program as if it was quit from the File menu, asking to save files if needed. osascript -e 'tell application "Safari" to quit' This soft method is good when you want to remotely quit a program a user is currently using, without being a complete dick and losing their unsaved data. Method 2: Hard quit with killall This method quits the program as if you initiated a Force Quit on it, which is commonly done via the Apple menu or by option-clicking (right-clicking) the application on the dock. killall Safari This command will not ask the user to save files, so any unsaved data will be lost. Best used if a soft quit doesn’t work and you need to get the application closed. Unfortunately, for these two methods you first need to know the name of the application you want to stop. If you don’t know which applications are open, don’t worry, you can find out that information too. Method 3: Selectively killing This two-step process will let you find the program you want to kill, and kill it. First, find the currently running applications.
ps -ax | grep Applications This will give us the list of all the standard applications that are running. For a complete list of all the processes you can use the simple command “ps -ax”. From the list, find the applications that you want to quit, and take note of the number on the left. This is the pid, the Process ID. You will use this to kill the application. sudo kill <pid> Replace <pid> with the pid of the process you want to kill. Go forth and use this new knowledge for good, not for evil (even though the evil options are so much fun).
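Method 3 can be tried safely end to end with a throwaway process standing in for a real application:

```shell
# Safe sketch of the find-and-kill sequence, with a dummy process in
# place of a real application (works in any Unix shell).
sleep 300 &        # start a dummy long-running process
pid=$!             # its Process ID, the number `ps -ax` would show
ps -p "$pid"       # confirm it is running
kill "$pid"        # terminate it; prepend sudo for another user's process
```

On a real machine you would take the pid from the `ps -ax | grep Applications` listing instead of `$!`.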
add bf16 mixed precision support for NPU What does this PR do? This PR adds bf16 mixed precision support for NPU, verified with accelerate test. The test results are as follows: accelerate config compute_environment: LOCAL_MACHINE debug: false distributed_type: MULTI_NPU downcast_bf16: 'no' gpu_ids: all machine_rank: 0 main_training_function: main mixed_precision: bf16 num_machines: 1 num_processes: 2 rdzv_backend: static same_network: true tpu_env: [] tpu_use_cluster: false tpu_use_sudo: false use_cpu: false The output log when running accelerate test (bf16) [root@localhost bf16]# accelerate test Running: accelerate-launch /data/bf16/accelerate/src/accelerate/test_utils/scripts/test_script.py stdout: **Initialization** stdout: Testing, testing. 1, 2, 3. stdout: Distributed environment: MULTI_NPU Backend: hccl stdout: Num processes: 2 stdout: Process index: 0 stdout: Local process index: 0 stdout: Device: npu:0 stdout: stdout: Mixed precision type: bf16 stdout: stdout: Distributed environment: MULTI_NPU Backend: hccl stdout: Num processes: 2 stdout: Process index: 1 stdout: Local process index: 1 stdout: Device: npu:1 stdout: stdout: Mixed precision type: bf16 stdout: stdout: stdout: **Test process execution** stdout: stdout: **Test split between processes as a list** stdout: stdout: **Test split between processes as a dict** stdout: stdout: **Test split between processes as a tensor** stdout: **Test random number generator synchronization** stdout: All rng are properly synched.
stdout: stdout: **DataLoader integration test** stdout: 1 0 tensor([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, stdout: 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, stdout: 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, stdout: 54, 55, 56, 57, 58, 59, 60, 61, 62, 63], device='npu:1') <class 'accelerate.data_loader.DataLoaderShard'> stdout: tensor([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, stdout: 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, stdout: 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, stdout: 54, 55, 56, 57, 58, 59, 60, 61, 62, 63], device='npu:0') <class 'accelerate.data_loader.DataLoaderShard'> stdout: Non-shuffled dataloader passing. stdout: Shuffled dataloader passing. stdout: Shuffled central dataloader passing. stdout: stdout: **Training integration test** stdout: Model dtype: torch.float32, torch.float32. Input dtype: torch.float32 stdout: Model dtype: torch.float32, torch.float32. Input dtype: torch.float32 stderr: [W LegacyTypeDispatch.h:79] Warning: AutoNonVariableTypeMode is deprecated and will be removed in 1.10 release. For kernel implemenase use AutoDispatchBelowADInplaceOrView instead, If you are looking for a user facing API to enable running your inference-only workload, c10::InferenceMode. Using AutoDispatchBelowADInplaceOrView in user code is under risk of producing silent wrong result in some edge cases. utoDispatchBelowAutograd] for more details. (function operator()) stdout: Model dtype: torch.float32, torch.float32. Input dtype: torch.float32 stdout: Model dtype: torch.float32, torch.float32. Input dtype: torch.float32 stdout: Training yielded the same results on one CPU or distributed setup with no batch split. stdout: Model dtype: torch.float32, torch.float32. Input dtype: torch.float32 stdout: Model dtype: torch.float32, torch.float32. 
Input dtype: torch.float32 stdout: FP16 training check. stdout: Training yielded the same results on one CPU or distributes setup with batch split. stdout: FP16 training check. stdout: Model dtype: torch.float32, torch.float32. Input dtype: torch.float32 stdout: Model dtype: torch.float32, torch.float32. Input dtype: torch.float32 stdout: BF16 training check. stdout: BF16 training check. stdout: Model dtype: torch.float32, torch.float32. Input dtype: torch.float32 stdout: Model dtype: torch.float32, torch.float32. Input dtype: torch.float32 Test is a success! You are ready for your distributed training! Before submitting [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). [x] Did you read the contributor guideline, Pull Request section? [ ] Was this discussed/approved via a Github issue or the forum? Please add a link to it if that's the case. [ ] Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings. [ ] Did you write any new necessary tests? Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. cc @muellerzr The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. Another way of phrasing that: is it a fair assumption that the basic torch.autocast for bf16 works fine on NPU, while fp16 requires torch.npu.amp.autocast @statelesshz ? Another way of phrasing that: is it a fair assumption that the basic torch.autocast for bf16 works fine on NPU, while fp16 requires torch.npu.amp.autocast @statelesshz ? @muellerzr Thanks for your comment. torch.npu.amp.autocast and torch.autocast both work with fp16 and bf16. 
I made a couple of tweaks in https://github.com/huggingface/accelerate/pull/1949/commits/dc0c998e2ee54894565130ac35a6d690f2aeea98: Use torch.autocast on NPU to make it consistent with existing implementation To make sure we can use certain helpful APIs like torch.autocast(device_type='npu', dtype=torch.float16, **autocast_kwargs), we need to import torch_npu to register the NPU backend with PyTorch. It's an easy step that will make things go smoother! tested with accelerate test using a single NPU. @muellerzr It seems like we've gone through all the required checks :-) Thanks again!
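As a minimal illustration of the autocast API the PR settles on, here is a hedged sketch. It uses device_type="cpu" so it runs without Ascend hardware; on an NPU the equivalent call would pass device_type="npu" after `import torch_npu` has registered the backend, as described above:

```python
import torch

# Sketch of the torch.autocast usage the PR relies on, shown on CPU.
# On an NPU: torch.autocast(device_type="npu", dtype=torch.bfloat16)
# after `import torch_npu`.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    a = torch.randn(4, 4)
    b = torch.randn(4, 4)
    c = a @ b  # matmul is autocast-eligible, so it computes in bf16

print(c.dtype)
```

Because factory calls like randn still produce float32 tensors, only the autocast-eligible ops (the matmul here) run in bf16, which matches the "Model dtype: torch.float32 ... BF16 training check" lines in the log above.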
package org.d2rq.db.vendor; import static org.junit.Assert.assertEquals; import org.d2rq.db.DummyDB; import org.d2rq.db.schema.ColumnName; import org.d2rq.db.schema.Identifier; import org.d2rq.db.schema.TableName; import org.d2rq.db.vendor.Vendor; import org.junit.Test; /** * @author Richard Cyganiak (richard@cyganiak.de) */ public class VendorTest { private Vendor vendor; @Test public void testSingleQuoteEscapeMySQL() { vendor = Vendor.MySQL; assertEquals("'a'", vendor.quoteStringLiteral("a")); assertEquals("''''", vendor.quoteStringLiteral("'")); assertEquals("'\\\\'", vendor.quoteStringLiteral("\\")); assertEquals("'Joe''s'", vendor.quoteStringLiteral("Joe's")); assertEquals("'\\\\''\\\\''\\\\'", vendor.quoteStringLiteral("\\'\\'\\")); assertEquals("'\"'", vendor.quoteStringLiteral("\"")); assertEquals("'`'", vendor.quoteStringLiteral("`")); } @Test public void testSingleQuoteEscape() { vendor = Vendor.SQL92; assertEquals("'a'", vendor.quoteStringLiteral("a")); assertEquals("''''", vendor.quoteStringLiteral("'")); assertEquals("'\\'", vendor.quoteStringLiteral("\\")); assertEquals("'Joe''s'", vendor.quoteStringLiteral("Joe's")); assertEquals("'\\''\\''\\'", vendor.quoteStringLiteral("\\'\\'\\")); assertEquals("'\"'", vendor.quoteStringLiteral("\"")); assertEquals("'`'", vendor.quoteStringLiteral("`")); } @Test public void testQuoteIdentifierEscape() { vendor = Vendor.SQL92; assertEquals("\"a\"", quoteIdentifier("a")); assertEquals("\"'\"", quoteIdentifier("'")); assertEquals("\"\"\"\"", quoteIdentifier("\"")); assertEquals("\"`\"", quoteIdentifier("`")); assertEquals("\"\\\"", quoteIdentifier("\\")); assertEquals("\"A \"\"good\"\" idea\"", quoteIdentifier("A \"good\" idea")); } @Test public void testQuoteIdentifierEscapeMySQL() { vendor = Vendor.MySQL; assertEquals("`a`", quoteIdentifier("a")); assertEquals("````", quoteIdentifier("`")); assertEquals("`\\\\`", quoteIdentifier("\\")); assertEquals("`Joe``s`", quoteIdentifier("Joe`s")); 
assertEquals("`\\\\``\\\\``\\\\`", quoteIdentifier("\\`\\`\\")); assertEquals("`'`", quoteIdentifier("'")); } @Test public void testColumnNameQuoting() { vendor = Vendor.SQL92; assertEquals("schema.table.column", vendor.toString(ColumnName.parse("schema.table.column"))); assertEquals("table.column", vendor.toString(ColumnName.parse("table.column"))); assertEquals("\"schema\".\"table\".\"column\"", vendor.toString(ColumnName.parse("\"schema\".\"table\".\"column\""))); assertEquals("\"table\".\"column\"", vendor.toString(ColumnName.parse("\"table\".\"column\""))); } @Test public void testDoubleQuotesInColumnNamesAreEscaped() { vendor = Vendor.SQL92; assertEquals("\"sch\"\"ema\".\"ta\"\"ble\".\"col\"\"umn\"", vendor.toString(ColumnName.create(null, Identifier.createDelimited("sch\"ema"), Identifier.createDelimited("ta\"ble"), Identifier.createDelimited("col\"umn")))); } @Test public void testColumnNameQuotingMySQL() { vendor = Vendor.MySQL; assertEquals("`table`.`column`", vendor.toString(ColumnName.parse("\"table\".\"column\""))); } @Test public void testTableNameQuoting() { vendor = new DummyDB().vendor(); assertEquals("schema.table", vendor.toString(TableName.parse("schema.table"))); assertEquals("table", vendor.toString(TableName.parse("table"))); assertEquals("\"schema\".\"table\"", vendor.toString(TableName.parse("\"schema\".\"table\""))); assertEquals("\"table\"", vendor.toString(TableName.parse("\"table\""))); } @Test public void testBackticksInRelationsAreEscapedMySQL() { vendor = Vendor.MySQL; assertEquals("`ta``ble`", vendor.toString(TableName.create(null, null, Identifier.createDelimited("ta`ble")))); } @Test public void testTableNameQuotingMySQL() { vendor = Vendor.MySQL; assertEquals("`table`", vendor.toString(TableName.parse("\"table\""))); } private String quoteIdentifier(String identifier) { return vendor.toString(Identifier.createDelimited(identifier)); } }
Mathematicians have just designed a computer program that could prove the last 150 years of maths wrong if it ever stops running. That's not likely to happen any time soon, but the very creation of the program is testing the limits of some of the fundamental problems upon which modern mathematics is built. It's also an incredibly cool demonstration of how a machine Alan Turing came up with in 1936 continues to push the boundaries of maths. As Jacob Aron reports for New Scientist, the computer program is a simulation of something called a Turing machine, which is a mathematical model of computation developed in the '30s by Turing - the British mathematician who cracked the Enigma code during WWII, and whose life was the basis of the 2014 film, The Imitation Game. Put simply, the Turing machine isn't a physical machine, but you can imagine it as a never-ending line of tape, broken down into squares. On each of those squares is a 1, a 0, or nothing at all. The machine reads one square at a time, and depending on what it reads, it performs an action - it either erases the number and writes a new one before moving on, or simply moves on to a different square. Each of those actions, which mathematicians call a 'state', are determined by the mathematical algorithm or problem the Turing machine has been designed to solve. This is the best explanation of a Turing machine we've come across, for those who have the time to really wrap their head around the idea: Eventually, after rewriting all the squares, a Turing machine will halt, and whatever is left on the tape is the answer to the problem, programmed in 1s and 0s. Or at least, that's what's happened for the problems we've hypothetically thrown at it so far. Now, researchers Scott Aaronson and Adam Yedidia from MIT have designed three brand new Turing machines which ask three incredibly important questions in mathematics.
And if any of them ever stops working, or 'solves' its problem, then it will suggest that a whole lot of what we know about maths is wrong.

The first problem dates back to the 1930s, when mathematician Kurt Gödel demonstrated that some mathematical statements can never be proven true or false - they're simply undecidable. That maths is pretty complex, but as Aron explains: "He essentially created a mathematical version of the sentence 'This sentence is false': a logical brain-twister that contradicts itself." The caveat to that is that you could prove a problem decidable if you changed the basic assumptions underlying the problem - known in maths as the axioms (not to be confused with axions) - but then that would make other problems undecidable. "That means there are no axioms that let you prove everything," says Aron.

Turing took that idea and ran with it - he proposed that there must be some Turing machines whose behaviour couldn't be predicted given the standard assumptions that underpin most modern maths - known as ZFC (or Zermelo-Fraenkel set theory with the axiom of choice, if you want to get specific). Those assumptions, or axioms, are sort of what Einstein's general theory of relativity is to physics, and explain how things work in the mathematics world.

But no one had ever figured out how complex a Turing machine would need to be before its behaviour became unprovable - in other words, how many states it would need before no one could decide whether it just keeps on going forever. Until now, that is, because Yedidia and Aaronson claim they've created a Turing machine with 7,918 states (or actions) that should, in theory, go on forever. They named it 'Z'. "We tried to make it concrete, and say how many states does it take before you get into this abyss of unprovability?" says Aaronson. The Z machine is only a computer simulation for now, but it could be built into a physical device.
"If one were then to turn such a physical machine on, what we believe would happen would be that it would run indefinitely," Terence Tao from the University of California, Los Angeles, who wasn't involved in the study, told Aron. (That's assuming it doesn't break or run out of power, of course.)

If the Z machine did stop, it wouldn't be the end of maths as we know it - you could recreate a Turing machine with more rigid axioms, or assumptions, in place so that it would keep going. But it would mean that ZFC, the standard foundation of modern maths, is inconsistent, and mathematicians would have to rebuild on different axioms.

The other two Turing machines that Aaronson and Yedidia have designed would have bigger impacts if they stopped working. Aron explains: "These will stop only if two famous mathematical problems, long believed to be true, are actually false. These are Goldbach's conjecture, which states that every even whole number greater than 2 is the sum of two prime numbers, and the Riemann hypothesis, which says that all prime numbers follow a certain pattern. The latter forms the basis for parts of modern number theory, and disproving it would be a major, if unlikely, upset."

Not to be a buzz-kill here, but the mathematicians don't actually have any intention of building these machines and turning them on, primarily because it's not a very efficient way of testing problems (especially seeing as you'd have to live forever to get the results). But the benefit of designing these Turing machines is that it helps to work out how complex these fundamental problems are. The Goldbach machine, for example, has 4,888 states, the Riemann machine has 5,372, and the ZFC one has 7,918, suggesting the final problem is the most complex - something most mathematicians would intuitively assume, Aaronson adds.
The claims surrounding these three Turing machines haven't been peer-reviewed, but the team has published their code and all their calculations online, in the hope that other mathematicians out there will test and build upon their work, and create even simpler Turing machines for these problems, with fewer states. That research could change our understanding of modern mathematics, or help reassure us that what we know so far is right. Either way, it's pretty exciting.
The point (4,1) undergoes the following transformations:

1) Reflection about the line x = y
2) Translation through a distance of 2 units along the +ve x axis
3) Rotation through an angle $\pi/4$ about the origin in the counterclockwise direction

Find the final coordinates.

The point becomes (1,4). Then, treating step 2 as a shift of the origin, $$X = x - h$$ $$X = -1$$ So (-1,4). After rotating the axes, $$X = x\cos \pi/4 + y\sin \pi/4$$ $$X = \frac{-1}{\sqrt 2} + \frac{4}{\sqrt 2}$$ $$X = \frac{3}{\sqrt 2}$$ But the x coordinate given in the answer is $\frac{-1}{\sqrt 2}$. What’s wrong?

"Translation through a distance of 2 units along the +ve x axis" - if the transformation moves the point right (i.e. not left) by 2 units, then (1,4) -> (3,4), not (-1,4), before the rotation by $\pi/4$ counterclockwise.

Another way is by complex numbers. Let $Z_1 = 3 + 4i$ be the point to be rotated about the origin by $\pi/4$, and let $Z_2$ be the complex number after rotation. Then $$\frac{Z_2}{Z_1} = e^{i\pi/4}$$ $$Z_2 = (3+4i)\left(\frac{1}{\sqrt{2}} + i\,\frac{1}{\sqrt{2}}\right)$$ $$Z_2 = -\frac{1}{\sqrt{2}} + i\,\frac{7}{\sqrt{2}}$$ Hence the point after rotation is $\left(-\frac{1}{\sqrt{2}}, \frac{7}{\sqrt{2}}\right)$. For more on rotation with complex numbers check this link: https://www.mathsdiscussion.com/best-iitjee-maths-for-mains-and-advance/

Step 2 is a translation by 2 units along the +x axis, so the point after it is (3,4). Now rotate by $\pi/4$: $$X = 5\left(\frac{3}{5}\cdot\frac{1}{\sqrt{2}} - \frac{4}{5}\cdot\frac{1}{\sqrt{2}}\right)$$ $$X = \frac{-1}{\sqrt{2}}$$

I wrongly assumed that the origin was shifted by two units. My bad.

Where did $\frac{3}{5}$ and $\frac{4}{5}$ come from?

Join the point (3,4) to the origin and use parametric coordinates on that line segment. The total angle after rotation is $\alpha + \pi/4$, where $\alpha$ is the inclination of the line joining (3,4) to the origin, so $\cos\alpha = \frac{3}{5}$ and $\sin\alpha = \frac{4}{5}$.

So in the case of rotation, are we rotating the axes or simply the point through an angle $\pi/4$?

Rotating the point about the origin by $\pi/4$.

But then can we use the rotation-of-axes formula conversely for the same result?
I am asking because I can’t understand what you have solved.

Geometrically it is similar to the solution given by complex numbers.
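The rotation above is easy to sanity-check numerically. A quick sketch (plain JavaScript, not part of the original discussion; the function name is just for illustration):

```javascript
// Rotate the point (x, y) about the origin by angle theta (counterclockwise).
// This is multiplication of x + iy by e^{i*theta} = cos(theta) + i*sin(theta),
// written out in real coordinates.
function rotate(x, y, theta) {
  return [
    x * Math.cos(theta) - y * Math.sin(theta),
    x * Math.sin(theta) + y * Math.cos(theta),
  ];
}

// (4,1) reflected about y = x -> (1,4); shifted 2 units along +x -> (3,4);
// then rotated by pi/4 about the origin:
const [X, Y] = rotate(3, 4, Math.PI / 4);
// X ≈ -1/√2 ≈ -0.7071, Y ≈ 7/√2 ≈ 4.9497
```

This agrees with the complex-number answer $(-\frac{1}{\sqrt{2}}, \frac{7}{\sqrt{2}})$.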
Writing custom StyleCop rules

StyleCop ships with a set of built-in rules for C# source code (file headers, spacing, ordering and so on), and you can extend it with rules of your own. The usual approach is to create a class library that references the StyleCop assemblies, derive an analyzer from the SourceAnalyzer class, and describe the rules in an XML file with the same name as the analyzer class, embedded as a resource. Drop the compiled DLL into StyleCop's install folder (or the project's local folder, which is checked first), and the new rules appear in the settings editor, where you can enable or disable individual rules (e.g. a rule SP0100) just like the built-in ones.

Keep in mind what each tool is for: StyleCop checks that source code follows style conventions that increase readability and consistency, while FxCop analyzes the compiled assembly for design problems, so custom rules for the two serve different purposes. Build systems such as MSBuild and TFS can run StyleCop as part of the build, which lets a team enforce the rules automatically across the whole code base. Note that StyleCop 4.3-era custom analyzers were known to throw exceptions in some configurations, so test a new rule in Visual Studio before rolling it out.
How Unity3D Sucks?

"You'd have been a millionaire ages ago, if you had written a game with me using Unity3D!" - The mantra repeated by a friend.

Last January I decided to work on a game with my friend, with Unity3D. I wanted to teach him some programming so he could write a game himself. We thought we could publish on SteamOS once we had something to show. The friend, studying in a North Karelian vocational school, wanted to write a winter survival game. I wrote code to make the player move and handle items, throw boxes, fire emergency rockets and freeze to death. I wrote code for everything, and while doing it, I explained everything to him. He thought the game required too much graphics, so we switched to writing a grid-floored factory simulator. I've gotten to know Unity3D better and my friend has learned some programming. Working on games has been fun. But during the last weeks I've learned why Unity3D, version 3.4, is disgusting. Now you're going to hear about several gimmicks and stupid practices I wouldn't want to see anywhere again from now on.

On the knees of Windows & Mac

Although Unity3D can publish to any platform, its editor only works on Windows and Mac. I hate using Windows and I don't own a Mac, so I used TeamViewer to connect to my friend's machine. He watched from his seat while I was writing. This removed the need for version control and made it otherwise easier to explain things to my friend. To be fair, those are the two major platforms even today. But I would rather bite my leg than use anything other than Linux for developing any kind of software.

Update: As of 2015, the Unity3D editor is available for Linux as well. You can fetch it from the unity3d blog & forums. Collaboration has been improved as well: Unity provides collaboration tools for small teams, and if you don't like that offering, you can use external version control.
Wants to clutter your mind with useless crap

Unity's editor opens full of small buttons and widgets, like a jetliner cockpit. When you create a new project, it asks you which modules you want, as if that mattered before the project was created. When presented with such an environment, it's easy to lose direction. I prefer clean, minimal interfaces for everything. Having too many controls irrelevant to what I am doing at the moment just slows me down and reduces my feeling of being in control. The first time I ran Unity3D, I just added a box, ran the thing, saw the box falling, then quit the editor, clueless about where to continue from.

Inside the Golden jail

After you've started a new project, it opens up with an empty scene graph in an editor. It's common for developers to write their own editors, and when they do so, the editor Unity3D provides seems to become entirely useless. You simply do not seem to have control over which pieces of Unity3D you choose to use.

Unity3D insists that the scripts you write are components. Components attach to objects, and the objects belong to a scene or prefab. Aside from being useless and inefficient for many games, this doesn't make much sense to me. Scripts almost never affect just one object; sometimes they do not affect an object at all. It's very common to have a GameMaster in your Unity3D scene - an empty object that serves no other purpose than to hold the major scripts in the game. To make it worse, Unity3D bolts together the script component, the name of the script and the class in the script. The script file must have the same name as the class, and each component refers to a script. It reinforces the beginning programmer's misconception that an instance or object in programming means an object in the scene; more useful, abstract meanings behind objects may end up unused. Besides, the code doesn't end up in files organized by what it does, but by the objects it inhabits.
Lack of an OpenGL API

OpenGL is the true industry standard for real-time rasterizing, because it's available to any vendor and not just to those who run Windows on everything. Sony, Valve, and even that astray Nintendo all use some version of OpenGL on their platforms. It's not always working to the spec, but it's there. Because OpenGL is everywhere, it is one of those things that are portable; you can always count on it being included. For example, we have WebGL in browsers because it can be supported anywhere. Unity3D does not provide a portable OpenGL API of any kind. If you want to provide graphics to several platforms at once, you're either using the default, limited Unity3D facilities, or you are on your own.

Static typing and C sharp

C# is Microsoft's answer to Java, and Java itself is an awful language. I have a fresh post where I reason why static typing is stupid.

It's not entirely bad

I used to think that Unity3D sucks. Well, it turns out I was right all along! Still, there are things I can stick to; there's something nice about Unity3D I like. It lets people with no programming experience start with game development. It's easy to learn how to script up simple behaviours, and there are a lot of pre-written behaviours that are easy to reuse. That allows people to learn programming while they're writing their game. Anything that does such a thing can't be entirely bad.
How to push to an array that is iterated by ng-repeat from within a controller? Here's Angular’s behavior I had no idea about. Let’s say I have an array of items in the controller, and I ng-repeat through them in the view. I have a button in the view, and when I click on it, an item is pushed in the array. The ng-repeated list gets updated in the view. All’s good so far. But. If I just push a new item in the array inside the controller (let’s say, I set a timeout to do so), ng-repeat in the view will not register the change of the array. Example on plunker: http://plnkr.co/edit/1mlOpGVOXmnCAa8W5KEx?p=preview HTML: <!DOCTYPE html> <html> <head> <link rel="stylesheet" href="style.css"> <script src="http://apps.bdimg.com/libs/angular.js/1.4.0-beta.4/angular.min.js"></script> <script src="script.js"></script> </head> <body ng-app="myApp"> <main ng-controller="MainController as ctrl"> <button ng-click="ctrl.addName()">Add an item</button> <ul> <li ng-repeat="name in ctrl.names"> {{name.name}} </li> </ul> </main> </body> </html> JS: angular.module('myApp', []) .controller('MainController', function(){ var self = this; self.test = 2; self.names = [{name: 'example'}]; self.addName = function(){ self.names.push({name: 'new example'}); console.log(self.names) }; setTimeout(function(){ self.names.push({name: 'another example'}); console.log(self.names); }, 1000); }); Observed behavior: clicking on the button will trigger the addName function and add an item to the list. setTimeout however will push an item in the array, but that won't be reflected in the view. While examining the results of the console log, I see that the items that are displayed have a $$hashKey property, and a simple push to the array will result in an item that lacks this property. Question: What is the proper way for adding items to an array in the controller? Should I use $digest or $apply or something? setTimeout was given as an example of a function called from within a controller. 
My actual code responds to an event that's fired in the controller and pushes an item to the array.

How will that function trigger? If you're calling it by hand, you still need to wrap it inside something like $timeout, $applyAsync, $evalAsync or $$postDigest. If you call it from $http.get, it will trigger automatically.

I am using a third-party geomapping library. It returns a promise, and I am calling a function in the controller when this promise is resolved. The function will then add whatever the promise returned to the this.names array on the controller. But the Angular context is lost, apparently. So I am wondering how to restore it.

Yes, that's not Angular's promise, so you need to trigger it manually with one of the options I mentioned above (e.g., $scope.$applyAsync(function(){self.names.push({name: 'another example'});});).

Angular works by periodically performing dirty checks to see if any of the watched model values has changed. It does that in so-called $digest cycles. These cycles are automatically run when Angular thinks it is necessary, but if an event happens outside the Angular context, Angular will not know about it, will not fire a $digest cycle, won't detect the change(s), and the view will not be updated. When you push a value to an array outside the Angular context, you will need to trigger the $digest cycle manually, e.g. by wrapping your function in $scope.$evalAsync(), which will trigger dirty checking on the scope.

Update: Yes, you can also call $scope.$digest(), but the problem with that is that if a $digest cycle is already running, you will get that infamous "$digest() already in progress" error. $scope.$evalAsync() does not suffer from that problem, because it fires a $digest cycle asynchronously. What's more, if a $digest cycle is already in progress, it will try to do the work in that same cycle and will not unnecessarily fire a new one. You can also see this excellent blog post by Ben Nadel, explaining everything in even more detail (e.g.
comparison with using the $timeout service, which is yet another alternative).

As I mentioned in the comment, I am not using setTimeout in my actual code - I am listening to events fired by a certain object and then run a function that pushes an element to the array. The setTimeout I used in the question is just for illustration purposes. So I am looking for a general solution, not a specific one that will just deal with setTimeout.

OK, thanks, I will update my answer accordingly. The answer is basically the same: you need to fire a $digest cycle manually when an event happens outside Angular.

And how do I fire the $digest cycle manually? Is it something like self.names.$digest()?

Almost. :) It's $scope.$digest() (or $scope.$apply()), but that has some downsides (I'll further update my answer).

Ah, I wanted to use the controller-as syntax and try to stay clear of $scope. But this works, thank you!

I'm glad I was able to help you.
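The dirty-checking described above can be sketched as a toy model. This is not Angular's actual implementation; the names (createScope, watch, digest) only mirror the concepts:

```javascript
// Minimal dirty-checking loop in the spirit of Angular's $digest.
// Each watcher pairs a getter with a listener; digest() re-runs the
// getters until no watched value changes between passes.
function createScope() {
  const watchers = [];
  return {
    watch(getter, listener) {
      watchers.push({ getter, listener, last: undefined });
    },
    digest() {
      let dirty, passes = 0;
      do {
        dirty = false;
        for (const w of watchers) {
          const value = w.getter();
          if (value !== w.last) {   // a watched value changed
            w.listener(value, w.last);
            w.last = value;
            dirty = true;           // run another pass
          }
        }
        if (++passes > 10) throw new Error("digest did not settle");
      } while (dirty);
    },
  };
}

// An event outside this "framework" (like setTimeout or a third-party
// promise) mutates the model, but nothing re-renders until digest() is
// called manually - exactly the situation in the question.
const scope = createScope();
const model = { names: ["example"] };
let rendered = 0;
scope.watch(() => model.names.length, () => { rendered = model.names.length; });
scope.digest();               // initial "render"
model.names.push("another");  // outside event: no digest, view is stale
// rendered is still 1 here; after scope.digest() it becomes 2
```

This is why $scope.$evalAsync / $applyAsync help: they schedule exactly such a digest pass from inside Angular's machinery.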
import React from "react";
import {ProgressBar} from "react-bootstrap";
import {List, Map} from "immutable";
import FontAwesome from "react-fontawesome";

export default class UploadRecord {
    constructor(file, chunkCount) {
        this.name = file.name;
        this.size = file.size;
        this.startTime = Date.now();
        this.endTime = 0;
        this.chunkCount = chunkCount;
        this.progress = new Map();
        this.statusCodes = new Map();
        this.errors = new List();
        this.elapsedTime = 0;
        this.uploadSpeed = 0;
        this.mergeState = null;
    }

    completeChunk = (idx, statusCode) => {
        this.statusCodes = this.statusCodes.set(idx, statusCode);
        this.endTime = Date.now();
    };

    failChunk = (idx, statusCode, text) => {
        this.statusCodes = this.statusCodes.set(idx, statusCode);
        this.errors = this.errors.push(text);
        this.endTime = Date.now();
    };

    completeMerge = () => {
        this.mergeState = 200;
    };

    failMerge = (error) => {
        this.mergeState = error;
    };

    isComplete = () => {
        return this.allChunksComplete() && this.mergeState !== null;
    };

    allChunksComplete = () => {
        return this.statusCodes.size === this.chunkCount;
    };

    allChunksSuccessful = () => {
        return this.allChunksComplete() && this.statusCodes.every(s => s === 200);
    };

    isSuccess = () => {
        return this.allChunksSuccessful() && this.mergeState === 200;
    };

    isFail = () => {
        return this.isComplete() && (this.statusCodes.valueSeq().some(s => s !== 200) || this.mergeState !== 200);
    };

    updateChunkProgress = (idx, totalInBytes) => {
        this.progress = this.progress.set(idx, totalInBytes);
    };

    updateCachedRate = () => {
        let elapsedTimeMillis = (this.endTime > 0 ? this.endTime : Date.now()) - this.startTime;
        if ((elapsedTimeMillis - this.elapsedTime) > 50) { // ~20Hz refresh rate
            this.elapsedTime = elapsedTimeMillis;
            const mbUploaded = this.getTotalBytesUploaded() / 1000000;
            const elapsedTimeSeconds = this.elapsedTime / 1000;
            const rate = mbUploaded / elapsedTimeSeconds;
            this.uploadSpeed = Math.round(rate * 100) / 100;
        }
    };

    getElapsedTimeMillis = () => {
        return this.elapsedTime;
    };

    getStatus = () => {
        if (this.isSuccess()) {
            return <FontAwesome name="check" size="lg"/>;
        } else if (this.isFail()) {
            return <FontAwesome name="times" size="lg"/>;
        } else if (this.progress.valueSeq().reduce((a, b) => a + b, 0)) {
            return <FontAwesome name="spinner" size="lg" spin/>;
        } else {
            return <FontAwesome name="question" size="lg"/>;
        }
    };

    getProgress = () => {
        let totalBytesUploaded = this.getTotalBytesUploaded();
        let percentComplete = (totalBytesUploaded / this.size) * 100;
        if (this.progress.size > 0) {
            return <ProgressBar variant="success" now={percentComplete} key={1}/>;
        }
        return null;
    };

    getTotalBytesUploaded() {
        return this.progress.valueSeq().reduce((a, b) => a + b, 0);
    }

    getUploadSpeed = () => {
        return this.uploadSpeed;
    };

    asRow = () => {
        this.updateCachedRate();
        return (
            <tr key={this.name}>
                <td>{this.getStatus()}</td>
                <td>{this.name}</td>
                <td>{Math.round(this.size / 1000) / 1000}</td>
                <td>{this.getProgress()}</td>
                <td>{this.getElapsedTimeMillis() / 1000}</td>
                <td>{this.getUploadSpeed()}</td>
                <td>{[...new Set(this.errors)].sort().join("|")}</td>
            </tr>
        );
    };
}
Issue with a function receiving variables from another function that updates on input

http://jsfiddle.net/sSwvq/94/

The title says most of it: I'm having an issue with the very last function. It receives variables from function B, function B receives variables from function A, and function A runs whenever a user changes their input. The way the data is sent:

input.functionA sends variable 1 and runs function B
input.functionA sends variable 2 and runs function B
input.functionA sends variable 3 and runs function B
functionB(1) sends variable 4 and runs function C
functionB(2) sends variable 5 and runs function C
functionB(3) sends variable 6 and runs function C
functionC(4,5,6) receives one variable at a time (which I think is the issue, as it's run without collecting all of the variables) and runs the function.

Any help is greatly appreciated! Code is below!

HTML

<form>
    <input id="wineQty" class="qty" type="text" placeholder="Wine Tasting Amount" />
    <input id="dinnerQty" class="qty" type="text" placeholder="Dinner Amount" />
    <input id="golfTeamQty" class="qty" type="text" placeholder="Golf Team Amount" />
</form>
<!--display total cost here-->
The Costs are: <br>
Wine Tasting: <span id="wineCostTag"> </span> <br>
Dinner: <span id="dinnerCostTag"> </span> <br>
Golf Team: <span id="golfCostTag"> </span> <br>
Total Cost: <span id="orderTotalCost"> </span>

Javascript

var wineCost = 20;
var wineQuantity = document.getElementById("wineQty");
var dinnerCost = 30;
var dinnerQuantity = document.getElementById("dinnerQty");
var golfTeamCost = 400;
var golfTeamQuantity = document.getElementById("golfTeamQty");
var wineCostTag = document.getElementById("wineCostTag");
var dinnerCostTag = document.getElementById("dinnerCostTag");
var golfCostTag = document.getElementById("golfCostTag");
var orderTotalCost = document.getElementById("orderTotalCost");

wineQuantity.oninput = function(){
    var val1 = parseInt(wineQuantity.value, 10);
    updateWine(val1);
};
dinnerQuantity.oninput = function(){
    var val2 = parseInt(dinnerQuantity.value, 10);
    updateDinner(val2);
};

golfTeamQuantity.oninput = function(){
    var val3 = parseInt(golfTeamQuantity.value, 10);
    updateGolf(val3);
};

function updateWine(val1) {
    var wineTotalCost = (wineCost * val1);
    wineCostTag.innerHTML = wineTotalCost;
    updateTotal(wineTotalCost);
}

function updateDinner(val2) {
    var dinnerTotalCost = (30 * val2);
    dinnerCostTag.innerHTML = dinnerTotalCost;
    updateTotal(dinnerTotalCost);
}

function updateGolf(val3) {
    var golfTeamTotalCost = (400 * val3);
    golfCostTag.innerHTML = golfTeamTotalCost;
    updateTotal(golfTeamTotalCost);
}

function updateTotal(wineTotalCost, dinnerTotalCost, golfTeamTotalCost) {
    var totalCost = (wineTotalCost + dinnerTotalCost + golfTeamTotalCost);
    orderTotalCost.innerHTML = totalCost;
}

There seems to be a mismatch in arguments. You can't just add parameters and expect them to magically get values; you have to actually call the function with those arguments. In your case you should get the values directly in the updateTotal function instead.

So along with sending them to updateWine/updateDinner/updateGolf I should send them from the oninput to one big function at the end?

Edit: Or just do the updateWine/dinner/golf in the oninput itself and send out the individual totals from the oninput to a larger function.

http://jsfiddle.net/sSwvq/97/

Are you a wizard, adeneo? Haha, that's perfect. Why is it that you had to call the CostTags etc. again? In the final function is it pulling from what it already updated into the HTML? (So the individual functions -> HTML -> total function -> HTML?) I'm new to the DOM stuff.

Including variable names in a parameter list (e.g. val3 in function updateGolf(val3)) is effectively the same as declaring them in the function body. Values are passed by their position in the call and matched by position to names in the parameter list, not by name.
@Soccham You seem to have a serious misunderstanding about how function calling works, which is basic to practically all programming. I think you need to hit the books and learn the fundamentals. You don't need the arguments, as you can't really send all the arguments from three different event handlers at the same time anyway. I see now that I forgot to remove them in the fiddle, so it's possibly a little confusing, but this is more accurate -> http://jsfiddle.net/sSwvq/102/ I saw that, I also removed them from where the oninput functions call the updateTotal(). I feel like I understand it now, thank you for your help!
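The fix adeneo's fiddle applies (reading all the quantities in one place instead of threading a single partial total through separate functions) can be sketched without the DOM. Prices come from the question; the function name and the object shape are made up for illustration:

```javascript
// Prices from the question.
const PRICES = { wine: 20, dinner: 30, golfTeam: 400 };

// Compute every line-item cost and the grand total from the current
// quantities in one pass, instead of passing one partial total around.
function computeCosts(quantities) {
  const costs = {};
  let total = 0;
  for (const [item, price] of Object.entries(PRICES)) {
    const qty = parseInt(quantities[item], 10) || 0; // empty input counts as 0
    costs[item] = price * qty;
    total += costs[item];
  }
  return { costs, total };
}

// On every input event you would recompute everything and write the
// results into the spans (wineCostTag, orderTotalCost, ...):
const { costs, total } = computeCosts({ wine: "2", dinner: "1", golfTeam: "" });
// costs.wine = 40, costs.dinner = 30, costs.golfTeam = 0, total = 70
```

Recomputing all three totals on each input event is cheap and avoids the argument-mismatch problem entirely.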
If you tap very quickly on an element, it causes two clicks to occur. This can most easily be recreated when tapping the Back button, as the previous page will most likely have a Back button in the same place.

Thanks. Can you specify what device and version you're seeing this on, and steps to reproduce?

Alpha 4. HTC Incredible. Android with HTC Sense 2.2. I've seen this problem reported in the forum, as well.

This is also happening on my iPad 2 and iPhone 4, both running the latest public version of iOS. FWIW, I have $.mobile.defaultTransition = 'none'; set. Removing that seems to prevent the bug from happening in iOS. It still happens in Android, though.

Thanks for the follow-up, that explains why I can't seem to reproduce on our demo pages in iOS, because we have the slide transition. It sort of makes sense: if there isn't a transition, the page swap may happen fast enough that the next page is there while some events are still bouncing around. Definitely related to our new event system.

I checked in a fix for this, see issue 1331.

I tested with commit d591be5. Issue appears to be fixed. Thank you.

I believe this bug is back on Android in PhoneGap on jQuery Mobile B3 and latest. With transitions set to none, if I handle clicks on buttons and manually change pages with changePage, the click is also registered on the new page. I have put up a demo app at http://www.4shared.com/file/mI3Sauj0/AndroidTest.html You will see that if you click on the button, it takes you to page 2, then immediately back to page 1, because the button on the second page also gets the click. I have put the HTML and JS in a jsfiddle for easy viewing at http://jsfiddle.net/woztheproblem/Kf5hd/ (that page isn't functional though, because it is meant for PhoneGap). I have tested the demo app on the Android 2.2 emulator, but the issue itself has been seen in the app I am developing on several Android devices. Using transitions does resolve this issue, but as you know the transitions on Android are still kind of jumpy, so I'd prefer to use no transitions. Sorry for posting here when I now realize there are more recent issues related to this. Let me know if you'd like me to move/delete my posts. I do see from #1904 that if I switch to listening for click instead of vclick, the problem goes away. Is that the preferred solution to this? The discussion in #1925 isn't clear (to me at least) about whether using vclick for this should work. Thank you.

Yes, if you are navigating or moving things around on the page, click will be safer on Android, because the targets of the click can get confused.

I am experiencing this bug on iPad 2. I have a menu that flies off when you tap it, and sub-menus move into the general area where the top menu was. If you tap the top menu fast, a tap registers for a sub-menu that moves into the location where you tapped, and then a page transition, only associated with a sub-menu tap, occurs. I can privately send you a link and login permissions to a framed-out example, if you would like.

@GregRHT - This could be the same issue. If you click something that causes the screen to re-paint, there is the possibility that the event can seem to fire twice. It doesn't usually happen on iOS, but that would be the theory. Are these just normal links?

@GregRHT - Can you create a simplified test page using jsbin? We can't really work with a whole app, especially a p/w protected one, for an issue. Template:

I was able to come up with: I haven't had much chance to clean the code. This is my first jQM project, and I found myself doing some of the work on my own that I could have integrated into jQM's structure. I embedded the linking doc into the default page. Tapping on a menu item gathers the top menus, selected menu on top, and opens sub-menus. Tapping on the cluster returns back to the previous state. In this case, only "Clinical Expertise" contains sub-menus. If you fast-tap it, it will most often immediately link to "clinical expertise" or "clinical outcomes", depending on where you have tapped the top menu. (You are supposed to be able to tap the header to go back to the home page, which is not working in this example.)
Data Factory expression substring? Is there a function similar to right?

Please help: how could I extract 2019-04-02 out of the following string with an Azure data flow expression? ABC_DATASET-2019-04-02T02:10:03.5249248Z.parquet The first part of the string, received as a ChildItem from a GetMetadata activity, is dynamic. So in this case it is ABC_DATASET that is dynamic. Kind regards, D

I will gently point out that it is impossible, since the string you wish to extract is not found within the sample provided :-)

haha sorry @JoelCochran, almost weekend I guess ;)

Aside from that, my first question would be if the string is always in this exact format. If so, use Derived Column with a substring to extract the value into a column. Assuming the string is always the same, the expression in a Derived Column would look like this: substring($stringToParse,13,10) where "$stringToParse" would reference your column or parameter value.

Hi @JoelCochran, thanks for answering. The first part of the string (ABC_DATASET) is not always the exact same length. The string is the output of a GetMetadata activity (childItem). This is where I get lost, because I didn't find a Data Factory expression function like right, or a way to use substring(-10, 2), for example.

Definitely more complicated, then. Can you at least guarantee that the format will always be {variabledata}-{timestamp}.parquet?

There are several ways to approach this problem, and they are really dependent on the format of the string value. Each of these approaches uses Derived Column to either create a new column or replace the existing column's value in the Data Flow.

Static format

If the format is always the same, meaning the length of the sections is always the same, then substring is simplest, as in the expression above. Useful reminder: substring and array indexes in Data Flow are 1-based.

Dynamic format

If the format of the base string is dynamic, things get a tad trickier.
For this answer, I will assume that the basic format of {variabledata}-{timestamp}.parquet is consistent, so we can use the hyphen as a base delineator. Derived Column has support for local variables, which is really useful when solving problems like this one. Let's start by creating a local variable to convert the string into an array based on the hyphen. This will lead to some other problems later since the string includes multiple hyphens thanks to the timestamp data, but we'll deal with that later. Inside the Derived Column Expression Builder, select "Locals": On the right side, click "New" to create a local variable. We'll name it and define it using a split expression: Press "OK" to save the local and go back to the Derived Column. Next, create another local variable for the yyyy portion of the date: The cool part of this is I am now referencing the local variable array that I created in the previous step. I'll follow this pattern to create a local variable for MM too: I'll do this one more time for the dd portion, but this time I have to do a bit more to get rid of all the extraneous data at the end of the string. Substring again turns out to be a good solution: Now that I have the components I need isolated as variables, we just reconstruct them using string interpolation in the Derived Column: Back in our data preview, we can see the results: Where else to go from here If these solutions don't address your problem, then you have to get creative. Here are some other functions that may help: regexSplit left right dropLeft dropRight
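Outside of Data Factory, the same split-and-reassemble approach can be sanity-checked with a short Python sketch. The function name and the negative indexing are my own; note that indexing from the end of the split array even sidesteps the "multiple hyphens in the prefix" problem mentioned above (whereas Data Flow's own split/substring functions are 1-based, as the reminder says):

```python
def extract_date(filename: str) -> str:
    # Split on hyphens; when the format is {variabledata}-{yyyy}-{MM}-{ddT...}.parquet,
    # the last three segments always belong to the timestamp.
    parts = filename.split("-")
    yyyy, mm = parts[-3], parts[-2]
    dd = parts[-1][:2]  # drop the "T02:10:03..." remainder after the day
    return f"{yyyy}-{mm}-{dd}"

# Works even if the dynamic prefix itself contains hyphens,
# because we index from the end of the array, not the start.
print(extract_date("ABC_DATASET-2019-04-02T02:10:03.5249248Z.parquet"))  # 2019-04-02
```

This is only a model of the logic, not Data Factory syntax; in the Derived Column itself you would express the same idea with split, locals, and string interpolation as shown above.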
STACK_EXCHANGE
If you’re interested in snakes, then you’ve probably heard of pythons. Pythons are a family of non-venomous snakes that are known for their ability to constrict their prey. They belong to the family Pythonidae, which includes some of the largest snakes in the world. One of the most interesting things about pythons is that there are many different breeds. The python genus currently has ten recognized species, including the Burmese python, African rock python, and Indian python. Each breed has its own unique characteristics, such as size, color, and habitat. In this article, we’ll take a closer look at some of the different breeds of pythons and what makes them so fascinating. Pythons are often kept as pets, but they can also be found in the wild in warm climates around the world. As non-venomous constrictors, they use their powerful muscles to squeeze their prey until it suffocates. While some breeds of pythons can be quite large, they are generally considered to be friendly and docile when kept as pets. Whether you’re a snake enthusiast or just curious about these fascinating creatures, learning about the different breeds of pythons is sure to be an interesting experience. Geographical Distribution and Habitats Pythons are found in various parts of the world, including Asia, Africa, Southeast Asia, Australia, and the Pacific Islands. The Indian python (Python molurus) is native to the Indian subcontinent and can be found across various countries in this region. The reticulated python (Python reticulatus) is found in Southeast Asia, including the Philippines and Indonesia. The African rock python (Python sebae) is found in sub-Saharan Africa, while the carpet python (Morelia spilota) is found in Australia and New Guinea. Different python species have different habitat preferences, but most of them are found in forests, swamps, wetlands, and other areas with abundant vegetation. Some species, like the African rock python, can also be found in rocky areas. 
Pythons are also known to burrow in the ground and use termite mounds as shelter. Pythons are primarily terrestrial, but some species, like the olive python (Liasis olivaceus), are semi-aquatic and can be found near water bodies. The green tree python (Morelia viridis) is arboreal and spends most of its time in trees. Overall, pythons are adaptable and can survive in a variety of habitats, from rainforests to grasslands. They are also commonly found near human settlements, which can lead to conflicts with humans. It is important to remember that pythons are wild animals and should be treated with caution and respect. Classification and Species Pythons are a family of non-venomous snakes that are known for their ability to constrict their prey. There are many different species of pythons, each with its own unique characteristics. The python family, Pythonidae, is divided into two subfamilies: Pythoninae and Liasis. The subfamily Pythoninae includes the true pythons, which belong to the genus Python. There are ten recognized species of true pythons, including the reticulated python, Burmese python, ball python, Indian python, Myanmar short-tailed python, green tree python, African rock python, Timor python, Sumatran short-tailed python, and carpet python. These pythons are known for their large size and powerful constriction abilities. The subfamily Liasis includes several genera of pythons, including Morelia, Leiopython, Antaresia, Apodora, Aspidites, Bothrochilus, and Nyctophilopython. These pythons are generally smaller than the true pythons and have different physical characteristics and behaviors. For example, the rough-scaled python has a unique pattern of scales that helps it blend in with its surroundings, while the Angolan python is known for its aggressive behavior and tendency to ambush its prey.
There are also several subspecies of pythons, such as the Bismarck ringed python, Royal python, Amethystine python, Malayopython, Spotted python, Pygmy python, Papuan olive python, and Black-headed python. Each of these subspecies has its own unique characteristics and behaviors. Overall, pythons are fascinating creatures with a wide range of physical and behavioral adaptations that allow them to thrive in a variety of environments. Whether you are interested in large, powerful pythons like the reticulated python and Burmese python or smaller, more colorful species like the green tree python and spotted python, there is a python out there for everyone. Behavior and Prey Pythons are known for their constricting abilities, which they use to subdue their prey. They are not venomous snakes and rely on their powerful muscles to squeeze their prey until it suffocates. Pythons are capable of swallowing prey whole, which is why they need to be careful when selecting their target. Pythons are opportunistic feeders and will eat a variety of prey depending on what is available. They typically target small mammals like rodents, but they are also known to eat birds, lizards, and even antelope. Pythons are skilled ambush predators and use their camouflage to blend in with their surroundings and wait for prey to come within striking distance. Pythons are primarily ground-dwelling snakes, but some species are excellent swimmers. Water pythons, for example, are known for their ability to swim and hunt in bodies of water. Pythons are also nocturnal hunters and are most active at night when their prey is also active. When hunting, pythons use their sense of smell to locate prey and then strike quickly to catch it. Once they have caught their prey, they will wrap their powerful bodies around it and constrict until it is dead. Pythons are capable of swallowing prey whole, which can take several hours or even days to complete. 
In conclusion, pythons are skilled hunters that use their constricting abilities to subdue their prey. They are opportunistic feeders and will eat a variety of prey depending on what is available. Pythons are primarily ground-dwelling snakes, but some species are excellent swimmers. They are also nocturnal hunters and use their sense of smell to locate prey. Pythons as Pets Pythons can make great pets for the right owner. They are fascinating creatures that come in a variety of sizes, colors, and patterns. Pythons are reptiles, which means they are cold-blooded and require a heat source to regulate their body temperature. When it comes to length, pythons can range from a few feet to over 20 feet, depending on the species. This means that you should carefully consider which type of python would be best suited for your living situation. Some species, such as the ball python, are more commonly kept as pets due to their smaller size and docile nature. Pythons are oviparous, which means they lay eggs. Females will typically lay a clutch of eggs and then incubate them until they hatch. This is an important consideration for those who are thinking of breeding pythons, as it requires a significant amount of time and effort to care for the eggs and hatchlings. One thing to keep in mind when considering a python as a pet is their teeth. Pythons have sharp teeth that they use to catch and hold their prey. While they are not venomous, a bite from a python can still be painful and potentially dangerous. It is important to handle your python carefully and avoid any sudden movements that may startle them. Finally, it is worth noting that pythons are often confused with boas. While they may look similar, they are actually two different groups of snakes. Boas have shorter tails and give birth to live young, while pythons have longer tails and lay eggs. Overall, pythons can make great pets for the right owner.
With proper care and attention, they can live long and healthy lives. If you are considering a python as a pet, be sure to do your research and choose a species that is well-suited to your lifestyle and living situation.
OPCFW_CODE
Mark's note: Today's guest post is by Brian Buck, a fellow lean healthcare practitioner who regularly blogs at "Improve With Me." Following my tendency to share the silly and the funny on Saturdays, Brian delivers some laughs and insights below. By Brian Buck: Inspired by the recent Lean Memes contest, I wanted to create one from the classic Monty Python Hospital Sketch from the movie "The Meaning Of Life". Upon watching the scene again, I realized there are many Lean-related lessons that would make a better blog post than a miniseries of memes. I am a big believer that behind most jokes there is a lot of truth, and that is why we laugh. As much as we wish hospitals were not really like this scene, there is a lot we can relate to: - Tools and technology that are used based on preference and not on clinical outcomes or safety: I was on the fence about picking on the machine that goes "PING" because they said it told them the baby was alive, but earlier in the scene the Obstetrician asked to bring it out "in case the administrator comes." If the machine was needed for safety, it was not part of the standard set-up for the Operating Room. It may be that the machine neither adds value for the patients nor makes the work easier for the providers. - Searching for the patient: While I have not heard of patients getting hidden in the same room with providers like in this scene, I have seen clinic doctors waiting because patients went missing: hold-ups in registration, the family was out of earshot when called to come to the room, or they were in another room entirely! - Poor service from talking down to patients: The Pythons tell the mom to do nothing because she is not qualified and to "leave it to us". This kind of behavior usually makes a patient feel powerless and afraid to speak up.
While this is horrible from a customer-service standpoint, it could lead to safety issues if the patient does not share key information because they are intimidated by the doctor. - Wasted time with financial wizardry instead of improving where value is created: The Administrator received a round of applause for leasing the machine back from the company they sold it to, so it shows up under the monthly budget instead of the capital budget. Instead of gaming the budgeting system, it would be a better use of the leader's time to coach staff on how to use Kaizen to improve the hospital's service. What other waste or poor service do you see in this scene that you have experienced in a hospital? If you are looking for more Lean-related Monty Python fun, check out the two memes that were created! About Brian Buck: Brian is an internal consultant at a children's hospital. He blogs at http://improvewithme.com and can be found on Twitter at http://twitter.com/brianbuck. He also has an essay published in Matthew E May's forthcoming book The Laws of Subtraction: 6 Simple Rules for Winning in the Age of Excess Everything.
OPCFW_CODE
In its eight-year history, the Web Services Interoperability Organization (WS-I) has successfully established a number of interoperability guidelines (a.k.a. "Profiles") in the industry. This week, SAP together with other WS-I member companies reached an important milestone: the completion of the WS-I Basic Security Profile 1.1. Read this blog for more background information and what it means practically for your integration projects. Almost three years ago, SAP already contributed interoperability tests for the previous version of the profile, the Basic Security Profile 1.0. Now we basically did a similar test round for the successor, Basic Security Profile 1.1, but with a different setup and test scenarios according to the latest version of the underlying Web Service security standard. Before I take a closer look at the actual tests and the results, let me quickly revisit the background of WS-I and the Basic Security Profile. What exactly is WS-I and the Basic Security Profile? Web Service protocols like WS-Security define a rich but at the same time fairly complex framework in terms of additional XML elements and processing rules for SOAP-based communication. Although the specifications published by OASIS and other standards bodies try to be as accurate as possible, much effort is needed to achieve a common understanding among different implementations – and thus interpretations – of a standard. Having this in mind, it is not surprising that additional clarification of the specifications is needed to achieve interoperability across platforms, operating systems and programming languages. Here is an example: when the underlying Web Service security specification allows choosing from a variety of algorithms to encrypt data in a SOAP message, WS-I usually addresses such a potential interoperability issue by restricting the choice to just one possible algorithm. These additional constraints to improve interoperability are called "Conformance Requirements" in the profile documents. The above statement summarizes the mission of WS-I, an open industry organization governed by SAP, IBM, Microsoft and others.
Its main deliverables are the interoperability profiles, which are basically named groups of Web Services specifications at a specific version level, along with clarifications, refinements, interpretations and amplifications of those specifications for best interoperability. To date, WS-I has completed the work on the Basic Profile (which resolved more than 200 interoperability issues for core SOAP messaging), the Simple SOAP Binding Profile covering guidelines for the serialization of the SOAP envelope, and the Basic Security Profile (BSP) 1.0. BSP 1.0 is the essential guide for ensuring secure, interoperable Web services based on the first version of the OASIS WS-Security specification from April 2002. It also provides a strong foundation for its successor, BSP 1.1, which addresses all changes in the new work done by the OASIS WS-Security committee on the specification from February 2006. New test scenarios for BSP 1.1 In order to approve a WS-I profile such as BSP 1.1 as completed and "Final Material", at least four WS-I members must successfully demonstrate interoperability based on the profile implementation in their platforms and a set of test scenarios defined by the Sample Application Working Group. To prove interoperability for WS-Security 1.0 based on BSP 1.0, the Sample Application Working Group used a supply chain management scenario and developed a sample application for it in order to show the profile's applicability to "real world" interoperable Web services. Since WS-Security 1.1 introduces just a few new capabilities compared to the previous version of the standard, the Sample Application Working Group decided to follow a more lightweight approach, using a simple echo-like Web service called "Message Service" to test the new features. These are: Signature Confirmation: WS-Security version 1.0 has no guidance on how to confirm to a Web service consumer that its request and signature have been processed successfully by the intended recipient and that the response was actually generated from the request it initiated in its unaltered form.
WS-Security 1.1 closes this gap with the new <wsse11:SignatureConfirmation> element. Encrypted SOAP Headers: WS-Security 1.1 introduces the new <wsse11:EncryptedHeader> element for encrypting entire SOAP header blocks. Thumbprint Security Token Reference: Digital signature and encryption in a SOAP message require a key to be specified. The <wsse:SecurityTokenReference> element provides an extensible mechanism for referencing the XML element containing the key in question, e.g. an X.509 certificate. As an extension to the mechanisms already defined in WS-Security 1.0, BSP 1.1 compliant Web services must also be able to identify a public key certificate based on its unique thumbprint, a cryptographic checksum, which is a new referencing mechanism specified by WS-Security 1.1. In BSP 1.1, conformance requirements surrounding the new Signature Confirmation, Encrypted SOAP Headers and Thumbprint Security Token Reference were defined to support the WS-Security 1.1 specification. These new or revised conformance requirements served as the core basis to scope the BSP 1.1 test scenarios as follows: Encrypted Header Message Service Request and Response (Scenario 1): The Message Service consumer encrypts a SOAP header element and the SOAP Body. Signature Confirmation Message Service Request and Response (Scenario 2): The Message Service consumer signs the Timestamp and Body and then encrypts the Body. The Message Service provider confirms the request signature with the response and includes a signed Signature Confirmation element. Thumbprint Reference Message Service Request and Response (Scenario 3): The Message Service consumer references the encryption certificate using the Thumbprint reference mechanism. Signature Confirmation with Encrypted Signature Message Service Request and Response (optional Scenario 4): Similar to scenario 2, the Message Service consumer signs the Timestamp and Body but then encrypts, in addition to the SOAP Body, also the entire Signature.
The detailed test scenario descriptions, including examples of the request and response messages, can be found in the publicly available test scenario documents. Last week, all five vendors (IBM, Intel, Layer 7 Technologies, Microsoft and SAP) who participated in the BSP 1.1 tests successfully passed all scenarios with each other. What do end users get out of the BSP 1.1 interoperability tests? The WS-I Sample Application Working Group's main objective is to demonstrate and validate that the composition of the various Web services specifications that have been produced in the past will actually work. If your vendor has participated in this Working Group and produced an implementation of the BSP 1.0 Sample Application and BSP 1.1 Message Service scenarios for the platform that your applications need to run on, you can be sure that you will have fewer interoperability issues than with one that doesn't. This ultimately will save both time and money when trying to connect your applications with applications on other platforms. [Figure: SAP BSP 1.1 Test Client]
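To illustrate what the thumbprint mentioned above actually is, here is a small Python sketch (the variable names are mine, and the certificate bytes are a placeholder): per WS-Security 1.1, the thumbprint is the SHA-1 digest of the certificate's DER encoding, carried base64-encoded in the key identifier of the security token reference.

```python
import base64
import hashlib

# Placeholder for a DER-encoded X.509 certificate; in practice you would
# read it from a file, e.g. open("cert.der", "rb").read().
der_bytes = b"0\x82\x01\n...not a real certificate..."

# The thumbprint reference identifies the certificate by the SHA-1 hash
# of its DER encoding, base64-encoded inside the KeyIdentifier value.
digest = hashlib.sha1(der_bytes).digest()
thumbprint = base64.b64encode(digest).decode("ascii")
print(thumbprint)
```

Because the SHA-1 digest is always 20 bytes, the thumbprint uniquely (for practical purposes) identifies one certificate without shipping the whole certificate in every message, which is the point of the new referencing mechanism.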
OPCFW_CODE
#ifndef ImageFilterY_hxx
#define ImageFilterY_hxx

#include "itkObjectFactory.h"
#include "itkImageRegionIterator.h"

namespace itk
{

template <typename TImage>
void
ImageFilter<TImage>::GenerateData()
{
  InternalGaussianFilterPointer smoothingFilters[ImageDimension];

  // Instantiate one smoothing filter per image dimension
  for (unsigned int i = 0; i < ImageDimension; ++i)
  {
    smoothingFilters[i] = InternalGaussianFilterType::New();
    smoothingFilters[i]->SetOrder(GaussianOrderEnum::ZeroOrder);
    smoothingFilters[i]->SetDirection(i);
  }

  // Connect all filters into a pipeline
  // (start at 1 because the 0th filter is connected to the input)
  for (unsigned int i = 1; i < ImageDimension; ++i)
  {
    smoothingFilters[i]->SetInput(smoothingFilters[i - 1]->GetOutput());
  }

  const typename TImage::ConstPointer inputImage(this->GetInput());

  smoothingFilters[0]->SetInput(inputImage);
  smoothingFilters[ImageDimension - 1]->Update();

  // Copy the output from the last filter
  // this->GraftOutput( m_SmoothingFilters[ImageDimension-1]->GetOutput() );
  this->GetOutput()->Graft(smoothingFilters[ImageDimension - 1]->GetOutput());
}

} // namespace itk

#endif
STACK_EDU
Review and correct all documentation From @endelwar on January 12, 2014 10:39 Documentation is out of date, so it should be verified and corrected where necessary. Copied from original issue: mailwatch/1.2.0#22 A new branch has been created to contain MailWatch documentation, which can be reached at http://docs.mailwatch.org/ A new website presenting MailWatch has been created From @Johanhen on February 11, 2015 12:34 Nice site! It would be nice to see the GUI of MailWatch itself upgraded to a more modern look. Thank you all for the work on MailWatch, I still use it every day and it has never let me down. Regards On 11-02-15 at 12:34, Manuel Dalla Lana wrote: A new website http://mailwatch.org/ presenting MailWatch has been created From @atftb on February 11, 2015 21:31 Very nice. I note on the installing page that it's developed on Debian and Ubuntu 12.04. We mostly use Debian here so that's pretty sweet, but I'm wondering about MailScanner – do you just install that from source? IIRC Julian usually developed on CentOS, and updated packages for the Debian users were pretty much lacking unless one rolled their own. I'm trying to figure out the best way to keep things up to date with the least amount of effort/confusion.
Thanks… ...Kevin I want to stabilize the 1.2.0 release ASAP and then move to a refactoring of the source code and user interface. I usually install MailScanner from source; there used to be a Debian package maintained by Baruwa, but the last version is 4.84.5-4~wheezy on apt.baruwa.org. From @Skywalker-11 on January 18, 2017 17:16 Optional steps for Postfix: the sql file does not exist anymore, and we should do the following to display the queues: chown postfix.www-data /var/spool/postfix/incoming/ chown postfix.www-data /var/spool/postfix/hold chmod g+r /var/spool/postfix/hold chmod g+r /var/spool/postfix/incoming/ From @Skywalker-11 on January 18, 2017 17:27 The docs are also missing that the webserver must have write access to the mailscanner/temp directory. Good news for Debian users: @jcbenton worked really hard and has packaged the new release of MailScanner (4.85.2-2) in .deb format! From @asuweb on January 18, 2017 17:55 All the docs need an overhaul / review (as per this thread). @endelwar - Is there any current way to contribute directly to documentation? There is a separate repository that contains code for docs.mailwatch.org: https://github.com/mailwatch/mailwatch-docs PRs can be sent there and are very, very welcome! From @stefaweb on January 18, 2017 17:31 Maybe the best is to use mtagroup in /etc/group: mtagroup:x:1001:clamav,Debian-exim,mail,www-data @remkolodder remember the OpenLDAP documentation (see #549) From @asuweb on January 19, 2017 9:11 Excellent. Thanks
GITHUB_ARCHIVE
Reducing the number of output neurons I am trying to train a neural network to control a character's speed in 2 dimensions, x and y, between -1 and 1 m/sec. Currently I split the range into 0.1 m/sec intervals, so I end up with 400 output neurons (20 x values * 20 y values); if I increase the accuracy to 0.01, I end up with 40k output neurons. Is there a way to reduce the number of output neurons? Could you please explain some more about the network you are designing? Why not use 2 output neurons (one for x and one for y) with continuous outputs that represent the estimated speed for each dimension? @MatthewSpencer, what do you mean by continuous output? Do you mean I treat it as a prediction problem instead of classification? Do I just reduce the output layer to two neurons and feed the outputs directly to the engine? Greeness' answer is a strong example of what I was trying to ask. The answer below simplifies the problem by breaking it down to two outputs with an output range of -1 to 1. I assume you are treating the problem as a classification problem. At training time, you have input X and output Y. Since you are training the neural network for classification, your expected output is always like: -1 -0.9 ... 0.3 0.4 0.5 ... 1.0 m/s Y1 = [0, 0, ..., 1, 0, 0, ..., 0] // speed x component Y2 = [0, 0, ..., 0, 0, 1, ..., 0] // speed y component Y = [Y1, Y2] That is: only one of the neurons outputs 1 for each of the speed components in the x and y directions; all other neurons output 0 (in the example above, the expected output is 0.3 m/s in the x direction and 0.5 m/s in the y direction for this training instance). Actually this is probably easier to learn and has better prediction performance. But as you pointed out, it does not scale. I think you can also treat the problem as a regression problem. In your network, you have one neuron for each of the speed components. Your expected output is just: Y = [0.3, 0.5] // for the same training instance you have.
To get an output range of -1 to 1, you have different options for the activation function in the output layer. For example, you can use f(x) = 2 * (Sigmoid(x) - 0.5) where Sigmoid(x) = 1 / (1 + exp(-x)) Since Sigmoid(x) is in (0,1), 2*(Sigmoid(x) - 0.5) is in (-1,1). This change (replacing the multiple neurons in the output layer with two neurons) greatly decreases the complexity of the model, so you might want to add more neurons in the middle layer to avoid underfitting.
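A minimal NumPy sketch of this output activation (function names are my own). Note that 2*(Sigmoid(x) - 0.5) is algebraically identical to tanh(x/2), which is why a plain tanh output layer is a common alternative for targets in (-1, 1):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def scaled_output(x):
    # Squashes any real-valued pre-activation into (-1, 1),
    # matching the required speed range for each axis.
    return 2.0 * (sigmoid(x) - 0.5)

# Two output neurons: one pre-activation per speed component (x, y).
pre_activations = np.array([0.0, 1.5])
speeds = scaled_output(pre_activations)
print(speeds)  # first component is exactly 0.0; both lie in (-1, 1)

# Sanity check of the algebraic identity with tanh:
assert np.allclose(speeds, np.tanh(pre_activations / 2.0))
```

With this setup the network is trained with a regression loss (e.g. mean squared error) against the two target speed values, instead of cross-entropy over hundreds of one-hot bins.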
STACK_EXCHANGE
I am using Win 10 x64. I have all my music on one dedicated hard drive, with folders as 'artist - album' and the music files in each folder. All the folders are set up as 'Optimise this folder for: Music'. Explorer is not listing the tag data for files which have similar tags set, such as 'Title'. The files with missing tag lists play fine. If I swap the tags via 'copy tags'/'paste tags', the files which don't display them still don't, while those that did display the new ones. Foobar 'File Integrity' reports no problems, and they display the tag data and play everywhere I have tried. The tags are also present in other tag editors. It's just that for some reason Win 10 doesn't want to display the file data for every file. As I don't believe the files and tags are faulty, this must be an issue with Win 10. Does anyone know a workaround for Win 10, or anything else, I could try? I am using Win 10 x64. As the problematic files are flac files, see e.g. here: There are some more threads about this problem. They all end up with: Windows problem if tag fields exceed a certain length. Also consider updating to the latest MP3tag version and rewriting the files, as there is a problem with the picture position: CHG: VorbisComment block is now always written before Picture block for FLAC as a workaround for an issue in Windows. (#53554) Regarding the 'certain length requirement', I don't think that a track number '02' would fall outside the parameter when track '01' and track '03' display correctly. I have upgraded to version 3.07a. Would you please clarify what you mean by 'rewrite the files'. Press Ctrl-S to save. Updating MP3tag alone does not modify the file structure where the picture data may irritate the displaying program. So the file has to be re-written to apply the new file structure. Thanks for the help. Now using version 3.08. Rewriting makes no difference. As the files and tags work everywhere I have tried them, I believe the issue may be Windows related.
There were also reports of Windows having problems with excess padding in FLAC files. See this topic for a slightly technical way of removing the padding. The image size may need to be reduced: 800x800 is ok: I use this as my default as it's the biggest size my cars will take. 3000x3000 is not ok: missed some image resizing, but why would you need such a big size anyway? Many, many thanks Florian for this workaround and for MP3Tag.
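For the curious, the padding those threads talk about is an ordinary FLAC metadata block (block type 1). A stdlib-only Python sketch to measure how much padding a file carries (the function name and the synthetic test stream are my own; real files contain more block types, and real STREAMINFO is 34 bytes):

```python
def flac_padding_bytes(data: bytes) -> int:
    # A FLAC stream starts with the 4-byte marker "fLaC", followed by
    # metadata blocks: 1 header byte (last-block flag in the top bit,
    # block type in the low 7 bits) plus a 3-byte big-endian length.
    if data[:4] != b"fLaC":
        raise ValueError("not a FLAC stream")
    pos, padding = 4, 0
    while pos < len(data):
        header = data[pos]
        block_type = header & 0x7F
        length = int.from_bytes(data[pos + 1:pos + 4], "big")
        if block_type == 1:  # PADDING block
            padding += length
        pos += 4 + length
        if header & 0x80:  # last-metadata-block flag set
            break
    return padding

# Tiny synthetic stream: a 4-byte STREAMINFO-like block, then a final
# 10-byte PADDING block (shortened for illustration only).
fake = (b"fLaC"
        + bytes([0x00, 0, 0, 4]) + b"\x00" * 4
        + bytes([0x81, 0, 0, 10]) + b"\x00" * 10)
print(flac_padding_bytes(fake))  # 10
```

This only inspects the file; actually stripping or resizing the padding is what tools like Mp3tag (or the method in the linked topic) do when the file is rewritten.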
OPCFW_CODE
+For a complete Subversion changelog, see 'http://pyyaml.org/log/pyyaml'. +* The emitter learned to use an optional indentation indicator + for block scalars; thus scalars with leading whitespace + can now be represented in a literal or folded style. +* The test suite is now included in the source distribution. + To run the tests, type 'python setup.py test'. +* Refactored the test suite: dropped unittest in favor of + a custom test appliance. +* Fixed the path resolver in CDumper. +* Forced an explicit document end indicator when there is + a possibility of parsing ambiguity. +* More setup.py improvements: the package should be usable + when any combination of setuptools, Pyrex and LibYAML is installed. +* Windows binary packages are built against LibYAML-0.1.2. +* Minor typos and corrections (Thanks to Ingy dot Net). +* setup.py checks whether LibYAML is installed and if so, builds + and installs LibYAML bindings. To force or disable installation + of LibYAML bindings, use '--with-libyaml' or '--without-libyaml'. +* The source distribution includes compiled Pyrex sources, so + building LibYAML bindings no longer requires Pyrex installed. +* 'yaml.load()' raises an exception if the input stream contains + more than one YAML document. +* Fixed exceptions produced by LibYAML bindings. +* Fixed a dot '.' character being recognized as !!float. +* Fixed a Python 2.3 compatibility issue in constructing !!timestamp values. +* Windows binary packages are built against the LibYAML stable branch. +* Added attributes 'yaml.__version__' and 'yaml.__with_libyaml__'. +* Windows binary packages were built with LibYAML trunk. +* Fixed a bug that prevented processing a live stream of YAML documents in + a timely manner (Thanks edward(at)sweetbytes(dot)net). +* Fixed a bug when the path in add_path_resolver contains boolean values + (Thanks jstroud(at)mbi(dot)ucla(dot)edu). +* Fixed loss of microsecond precision in timestamps + (Thanks edemaine(at)mit(dot)edu). +* Fixed loading an empty YAML stream.
+* Allowed immutable subclasses of YAMLObject. +* Made the encoding of the unicode->str conversion explicit so that + the conversion does not depend on the default Python encoding. +* Forced emitting float values in a YAML compatible form. +* Include experimental LibYAML bindings. +* Fully support recursive structures. +* Sort dictionary keys. Mapping node values are now represented + as lists of pairs instead of dictionaries. No longer check + for duplicate mapping keys as it didn't work correctly anyway. +* Fix invalid output of single-quoted scalars in cases when a single + quote is not escaped when preceded by whitespaces or line breaks. +* To make porting easier, rewrite Parser not using generators. +* Fix handling of unexpected block mapping values. +* Fix a bug in Representer.represent_object: copy_reg.dispatch_table + was not correctly handled. +* Fix a bug when a block scalar is incorrectly emitted in the simple + key context. +* Hold references to the objects being represented. +* Make Representer not try to guess !!pairs when a list is represented. +* Fix timestamp constructing and representing. +* Fix the 'N' plain scalar being incorrectly recognized as !!bool. +* Fix Python 2.5 compatibility issues. +* Fix numerous bugs in the float handling. +* Fix scanning of some ill-formed documents. +* Fix win32 installer. Apparently bdist_wininst does not work well + with easy_install. +* Fix a bug in add_path_resolver. +* Add the yaml-highlight example. Try to run on a color terminal: + `python yaml_hl.py <any_document.yaml`. +* Initial release. The version number reflects the codename + of the project (PyYAML 3000) and differentiates it from + the abandoned PyYaml module.
This post describes a scenario where you've added site columns and content-types to a modern SharePoint Online site, created a list, added content, and once Search has indexed your content you expect search crawled properties to be available in the site's search settings pages.
- Group-connected team site
- Microsoft Teams team site
- Modern Communications site
But when you go to the crawled (/_layouts/15/listcrawledproperties.aspx?level=sitecol) and managed properties (/_layouts/15/listmanagedproperties.aspx?level=sitecol) pages in Site Settings, you don't see the crawled and auto-created managed properties that you expect. At this point you're probably wracking your brains trying to figure out why, verifying that you've done the right steps, questioning your skills and career choices; after all, this is something you've done many times before 😱🤯😥.
- Created site columns
- +/- Created content-types
- Created a list or library
- Added the site columns and/or content-types to the list/library
- Added content items and populated column data/metadata
- Waited for Search to index your content
- Used the OOTB site or list/library Search features to confirm that it has indexed your content
Well rest easy, SharePointarians, it turns out this is something of a known issue that has been reported and documented by other folks over the years:
- Joanne C Klein: https://joannecklein.com/2019/02/08/crawled-managed-properties-and-modern-team-sites/
- Trevor Seward: https://thesharepointfarm.com/2018/11/crawled-properties-not-created-from-site-columns-in-modern-sites/
- Asish Padhy: https://asishpadhy.com/2018/11/06/fix-for-site-column-not-showing-up-in-search-crawled-properties-in-microsoft-team-sites/
The upshot from these posts is that this issue only affects modern group-connected and Teams-connected sites, and the resolution is to ensure that you are explicitly added to the Site Collection Administrators group — for group/Teams-connected sites only the Group/Team Owners group is added to the site collection administrators, not the Owners individually. After adding yourself to the site collection administrators, the issue is resolved; you can once again see the crawled properties you expect and map them to managed properties all day long 🍻
Why is this important
Well, the issue seems to be one of visibility of crawled properties through the UI. You see, the crawled properties are actually created, and so too are the automatically created managed properties for eligible field types — https://docs.microsoft.com/en-us/sharepoint/technical-reference/automatically-created-managed-properties-in-sharepoint
So even if you can't see them, they're still there and you can use the auto-created managed properties for Querying (search expressions) and Retrieval (showing in your search-based solutions). But these auto-created managed properties do not support Refinement or Sorting, Taxonomy fields don't automatically get a managed property, and Date fields get automatically mapped to a TEXT managed property, which is not useful for filtering or sorting. So for these reasons (and more) you might want to map crawled properties to managed properties, RefinableString, RefinableDate etc., which you can't do if you can't see the crawled properties in the crawled properties UI. Technically it's possible to hand-craft the Search Schema import files, but the XML schema for this is not documented and is pretty opaque.
That's not the end of the story though. A short while ago (a couple of weeks from the date of this post), my colleague and I also noticed this issue occurring on ordinary modern communications-type sites — sites which were not group- or Teams-connected at all. Our tenants were spread across EU & UK datacentres; apart from that there was nothing odd or unique about the sites and columns we were using — a mix of Text, Choice, Taxonomy etc.
Cue …wracking brains trying to figure out why, verifying the right steps, questioning skills and career choices… So after much googling and reaching out on Twitter (thanks https://twitter.com/mikaelsvenson) the issue was still not resolved, and a support ticket was raised with Microsoft. Long story short (progressing MS support tickets can be painful at times 🤓), the following workaround suggested by MS Support resolved the issue for me:
- The user account I'm using is explicitly added to the site collection administrators group
- I've created site columns/content-types
- Added content
- Ensured Search has indexed that content
- Verified that Search has created the crawled properties and auto-created managed properties by performing a POST Search Query using Postman
First, pick a content item that you know has been indexed. Now share the item using the Specific People method (I chose Allow Editing but I don't know if that makes a difference). Now share the item with yourself — weird, I know. Having done that, you'll probably receive 2 emails as the Sharer and Share-ee. Now go to the list settings and re-index the list or library. Now you have to wait for Search to re-index the content…. After a while, if you then return to the crawled properties settings page, magically the crawled properties appear 🎉🎉
For me this workaround only had to be done once, for only 1 piece of content, and this caused all of the crawled properties for all of my different types of content to appear — not just the crawled properties associated with the content item I shared. So now that the crawled properties are once again visible in the UI, they can be mapped to managed properties, and your search schema can be exported or built into a PnP Template.
- Photo by Ray Hennessy on Unsplash
2 thoughts on "SharePoint Online Search Crawled Properties Not Created, Not Showing or Not Available"
I had an issue where I had 3 properties not displaying in the Search Schema. They were getting crawled successfully. At my wits' end after some poor Microsoft support, I skeptically shared a file with myself, reindexed the library and BOOM, displaying in Search Schema crawled properties. Thanks!
Wow, I have just been through a support call with MS for the last 3 months where a lot of crawled properties were missing. It resulted in them manually creating close to 40 crawled properties for me so that I can use them in managed properties. They never suggested trying to share the content with yourself. Something I will try in future if I strike this again. Thanks for sharing
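The "POST Search Query using Postman" verification step above can also be sketched in code. Here is a minimal Python sketch that builds the JSON body for SharePoint's `/_api/search/postquery` REST endpoint; the content type and managed property names below are hypothetical placeholders, and the request is only constructed, not sent (authentication and the actual HTTP call are out of scope):

```python
def build_search_postquery(query_text, select_properties):
    # Body shape for a POST to <site>/_api/search/postquery; send it with
    # Accept/Content-Type "application/json;odata=verbose" and your usual
    # SharePoint Online authentication.
    return {
        "request": {
            "Querytext": query_text,
            "SelectProperties": {"results": list(select_properties)},
            "TrimDuplicates": False,
        }
    }

# Hypothetical content type and managed property names, for illustration only
body = build_search_postquery(
    "ContentType:MyCustomType",
    ["Title", "RefinableString00"],
)
```

If the response comes back with your auto-created managed properties populated, Search has indexed the columns even when the crawled properties UI shows nothing.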
XP doesn't see DVD
I installed Win XP RC1 from the CD a couple days ago. Partway through the installation it stopped seeing the DVD drive - it asked for files from it, and wouldn't take yes for an answer. I completed the installation by skipping the files, but wound up with a less than fully functional installation: no IE, no help files, etc. The problem was obvious - there was no D: drive. XP didn't see the DVD drive, but why? The Add New Hardware Wizard struck out, not being able to find the drive at all to add it. Multiple reboots didn't change anything - except when I rebooted to Safe Mode. Then the drive was there, but only then.
The drive is a NEC DV-5700A (12x/40x), which XP didn't complain about during its pre-installation compatibility check, and which is on the Windows 2000 Hardware Compatibility List. The BIOS shows it as being installed as the master drive all by itself on the second IDE channel, UDMA 2. And this same drive has been and continues to function flawlessly under Win 98SE.
Since this is my only computer and I wasn't able to figure out any solution, I reformatted the drive and went back to Win 98SE. But XP looked interesting, and I'd like to play with it. I'll want to try again soon, probably as a dual boot, but what could have been the problem?
Fong Kai 603 case, 300W Athlon-approved PS
Thunderbird 800 CPU, Asus A7V motherboard
128mb PC-133 SDRAM
IBM 30gb 7200 rpm HDD
NEC DV-5700A 12x/40x DVD
GeForce 256 (SDR)
Guillemot Maxi Sound Fortissimo
Motorola SM56 PCI modem (PoS I know, but it works well for me, oddly enough)
[This message has been edited by Izdaari (edited 07-28-2001).]
It seems Microsoft does know about this one. It took some searching, but I found this on their support site: Looks like I'm back in business!
How to trade Bitcoin? BTC will be worth $100,000... Dream or reality?
You have probably read a lot of things about the value of Bitcoin. Bitcoin is worthless. Bitcoin is worth millions. No one knows the future, but you can know that the Bitcoin price at the current date is: ~ $7,200. Some experts claim that the Bitcoin value will rise a lot. Other experts claim that Bitcoin will go back down to the big zero... So before giving my point of view, let me state that nobody can be sure of the Bitcoin value of tomorrow, but we can make some guesses.
How to know the Bitcoin value? It's a simple question: you can find the value on the home page of Crypto News Blog. It's the current value of Bitcoin; it means that people are exchanging a whole bitcoin for this amount of dollars.
How to know if it will go up or down? This is the billion-dollar question! From the Bitcoin exchange charts we can make a rough guess. That's how many traders make good profits. You should be aware that it's also a complex thing, which requires good knowledge, information, and sometimes some luck!
Here is the current Bitcoin chart with some traces... You can see a few lines that have hit some bottom value multiple times, and the same for the higher prices. These are support (the bottom lines) and resistance (the top lines). From the chart we can guess what the value could go to in the next weeks/months... Something like between $8,200 and $6,200. Another good thing to check is the trend: is the value going up or down? Again, from the chart image, it seems the Bitcoin value is going down. We can also look at the volume, which is a good sign of whether the market is exchanging a lot of coins or not. At the moment it's pretty low compared to the history.
Now we can combine the data from all of our tools: the charts with their indicators (support and resistance, RSI, volume, Bollinger bands, Fibonacci... all can be good for getting some trading tips), the recent news about Bitcoin, and, for some people the worst thing to follow: your feelings.
A good way to make some profit by trading is to define your buy value, sell value and stop value (the value - in loss - at which you will sell if your trade is not going as you expected), then keep trading small percentages of your trading balance. So from the information we have at the moment, I think Bitcoin will reach 6,200 USD... It's my guess; I cannot be sure about this, but I cannot see $100,000 any time soon... Sorry, dreamers!
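The buy/stop/percentage discipline above can be made concrete with a little arithmetic. A minimal Python sketch (the 2% risk figure and all prices are illustrative assumptions, not advice): size the position so that if the stop is hit, you lose only a fixed fraction of your balance.

```python
def plan_trade(balance, risk_fraction, buy_price, stop_price):
    # Size the position so that being stopped out loses only
    # risk_fraction of the balance (illustrative, not advice).
    risk_per_unit = buy_price - stop_price
    if risk_per_unit <= 0:
        raise ValueError("stop must be below the buy price")
    return (balance * risk_fraction) / risk_per_unit

# Risk 2% of a $1,000 balance, buying at $7,200 with a stop at $6,200
units = plan_trade(1000, 0.02, 7200, 6200)
```

With these numbers the worst case is a $20 loss ($1,000 difference per unit times 0.02 units), no matter how far the price falls before you can react.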
Does Ruby on Rails Have Too Much Magic?
We've all seen Mickey in Fantasia: Micky The Sorcerer's Apprentice
## Does the "magic" in Rails that helps developers carry water from the well compromise security?
Suppose we have a signup page, written using Ruby on Rails. Rails has some "magic" (a programming term for features a framework does for you behind the scenes) called mass-assignment, which relieves developers of the drudgery of having to pull data out of the HTTP request sent from the client, assign it to temporary variables, and then pass those variables further into the application. Mass-assignment saves you work, because you don't have to set each value individually. Simply pass a hash to the new method, or assign a hash to assign_attributes, to set the model's attributes to the values in the hash. One has to wonder how useful this feature is, compared to how dangerous it could be in the wrong hands. Rails' own security documentation goes to great lengths to warn about the dangers:
If you're using Rails and you want to be secure, you should be protecting against mass assignment. Basically, without declaring attr_protected, malicious users can set any column value in your database, including foreign keys and secure data.
What is to stop someone from filling params[:user] with malicious data in the URL, which will then end up in the resulting hash?
One intrepid, young GitHub user by the name of Egor Homakov decided to raise concerns about how powerful this feature could be in the hands of junior developers who are attracted to Ruby on Rails, and who may not fully understand that you should Never. Ever. Trust User Input.
Let's view at typical situation - middle level rails developer builds website for customer, w/o any special protections in model (Yeah! they don't write it! I have asked few my friends - they dont!)
Next, people use this website but if any of them has an idea that developer didnt specify attr_accesible - hacker can just add an http field in params … After execution of that POST the hacker owns the target
The Rails Development Team Refuses to Respond
Shockingly, a member of the Rails development team acknowledged that this is indeed a problem that has been discussed, then dismissed the whole problem with the wave of a hand:
You are not discovering anything unknown, we already know this stuff and we like attr protection to work the way it is.
We Like It The Way It Is
Mr. Homakov clearly felt that this was a serious issue, and since the Rails developers believed that everyone who was using Ruby on Rails was aware of the "magic" and its implications, the only alternative was to demonstrate how serious an issue this was. A high-profile Rails application that was vulnerable would need to be found. Perhaps one that would snap the Rails developers back to their senses. Why not GitHub? Indeed. Why not? Especially when the Rails codebase is hosted on GitHub! So Mr. Homakov decided to play a bit of a practical joke on the Rails team by actively exploiting the security hole in GitHub's Rails app, and added a commit to the Rails project. GitHub quickly recognized the problem, which is commendable, but larger questions remain.
#### If It Happened To GitHub, It Can Happen To Anybody
Again, why was this feature added to Ruby on Rails? Does convenience outweigh security? Why did the Rails developers dismiss this issue, claiming that they already know about it and it's nothing new? For now, we'll have to wait and see.
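The whitelist idea behind attr_accessible can be illustrated outside Rails too. Here is a minimal Python sketch (the User class and its attribute names are hypothetical, and this is a conceptual model, not Rails' actual mechanism): attributes are copied from the request hash only when they appear on an explicit whitelist, so an injected admin field is silently dropped.

```python
class User:
    # Whitelist of mass-assignable attributes, in the spirit of
    # Rails' attr_accessible (class and field names are made up).
    ACCESSIBLE = {"name", "email"}

    def __init__(self, params):
        for key, value in params.items():
            if key in self.ACCESSIBLE:   # drop anything not whitelisted
                setattr(self, key, value)

# A hostile request adds an extra field, hoping for privilege escalation
params = {"name": "mallory", "email": "m@example.com", "admin": True}
user = User(params)
```

Without the `if key in self.ACCESSIBLE` check, this constructor would be exactly the vulnerable pattern Homakov exploited: every key in the request, including `admin`, would become an attribute.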
Understanding and predicting how human activities impact biodiversity is challenging given the often large number of species in any given location. Importantly, despite the increasing amount of data generated by the remote-sensing revolution currently underway in Ecology (e.g., photos from satellites and camera traps), the use and integration of this information with field data on biodiversity is limited by the absence of modeling methods that can integrate multiple data streams and that can properly account for the characteristics of the data generated by these sensors. The long-term goal of this project is to advance cyberinfrastructure by creating broadly applicable methods for biodiversity datasets and by training the next generation of quantitative environmental scientists. This project will focus on substantially improving Mixed Membership (MM) models. These models were originally developed for text-mining purposes but have been widely used for biodiversity research in a wide range of ecosystems. Unfortunately, the current formulation of these models still has important limitations. This project will develop improved MM models that can account for the characteristics of the data generated by these sensors, can integrate multiple sources of data, and enable biodiversity predictions to be made. Ultimately, these improved MM models will be critical to enhance our ability to quantify and predict impacts on biodiversity. This project will also increase the awareness of the impact of climate change on biodiversity among high-school teachers and students. Evaluating and forecasting how species composition has been and will be altered by anthropogenic stressors is key to sustaining biodiversity and ecosystem functioning, but existing methods to quantify biodiversity change have important limitations. 
Biodiversity data are highly multivariate (e.g., an assemblage can contain hundreds of species in tropical forests) but many of the dimension-reduction methods typically used to interpret these data often generate results that are not easily interpretable (e.g., nonmetric multidimensional scaling axis scores), rely on unrealistic assumptions (e.g., hard clustering of sites), and are ill suited for wildlife studies because they do not account for imperfect detection. Critically, many of these methods do not allow for formal inference and/or predictions to be made and these methods do not leverage multiple data streams. To circumvent these limitations, this project will develop methods to generate new insights on the drivers of spatial and temporal variation of biodiversity. The overall objective of this project is to significantly improve MM models for biodiversity research. The specific objectives of this project consist of a) creating MM models that can generate reliable inference and predictions, integrate disparate data streams, and account for detection issues; and b) disseminate and train scientists on the developed models; and increase awareness of the impact of climate change on biodiversity among high-school students while addressing important science, math, and statistics standards. The results of this project will be stored in the stable URL https://denisvalle.weebly.com/mm-models.html This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
Manager - Updated Dashboard
Description
We are re-building the manager dashboard (the "My Job Posters Page"). Jerbo's wireframe: https://xd.adobe.com/view/e79e0e9e-d1ae-4b0c-749d-9436424f1ae7-91af/
The first image below shows the wireframe we want to implement with a single Draft job poster.
Acceptance criteria:
Full page
- [x] New Draft button that links to the "Create Job Poster" page
- [x] Empty state for when there are no job posters
- [x] We don't have a way to archive, so no need for 2 separate sections (current & archive); just leave as one section for now. @joshdrink, you decide if it should be titled at all.
Draft State (see first image below)
All new job posters start in the Draft state.
- [x] Edit icon on the left
- [x] Job Title is a link to the Job Poster Preview
- [x] Below Job Title: Created On (date)
- [x] Edit Job Poster button - links to the filled-in "Create Job Poster" page
- [x] Edit Screening Plan - links to the screening plan builder for this job poster. Just use "Screening Plan" as text, do not add the word "edit"
- [x] Do not create the "Assessment Tools" button, we don't have anything to link to yet
- [x] Future Feature (no need to create it yet): Delete button for drafts
- [x] Send to review button - opens up a dialogue with the options: Cancel and Send to Talent Cloud.
- [x] Send to Talent Cloud option in the dialogue triggers an email to Talent Cloud with body text "Manager Name has submitted Job Title job poster for review." This action also changes the state of the job poster to "submitted"
Submitted State (see second image below)
The Submitted state starts as soon as the "send to talent cloud" dialogue has been confirmed. It ends only if the job poster is flagged as "published" and the "Open" date is reached.
- [x] Waiting icon on the left
- [x] Job Title is a link to the Job Poster Preview
- [x] Below Job Title: Created On (date)
- [x] Time since sent for review in middle (don't say to HR)
- [x] Edit Job Poster button - links to the filled-in "Create Job Poster" page (we might need to remove this when we transition from concierge service to managers)
- [x] Screening Plan button - links to the screening plan builder for this job poster
- [x] Do not create the "Assessment Tools" button, we don't have anything to link to yet
- [x] Send to review button - greyed out
Posted State (see third image below)
The Posted state starts when the poster is flagged as "published" and the "Open" date is reached. It ends when the "Closed" date is reached.
- [x] Waiting icon on the left
- [x] Job Title is a link to the live Job Poster
- [x] Below Job Title: Posted On (date)
- [x] Time until close in middle
- [x] Screening Plan button - links to the screening plan builder for this job poster
- [x] Do not create the "Assessment Tools" button, we don't have anything to link to yet
- [x] Review Applicants button - disabled
Closed State (see last image below)
The Closed state starts when the poster is flagged as "published" and the "Closed" date is reached. For now it does not end.
- [x] Action Required icon on the left
- [x] Job Title is a link to the (archived) Job Poster
- [x] Below Job Title: Closed On (date)
- [x] Number of applicants in middle
- [ ] Clock counting up from Closed Date in middle
- [x] Screening Plan button - links to the screening plan builder for this job poster
- [x] Do not create the "Assessment Tools" button, we don't have anything to link to yet
- [ ] Review Applicants button - links to the new Review Applicants page #687
Images
Draft State (full page)
Submitted State
Posted State
Closed State
I would prefer if we keep the "Archived" section in the front-end, even if it's not currently plugged into anything in the backend.
Required Backend Work:
- [ ] Route and controller action for the "send to reviewer" POST request; updates the Job and sends the email.
- [ ] Methods to calculate the state of the job post from other attributes (published_flag, opened_at, closed_at, review_requested_at)
- [ ] Migrate review_requested_at timestamp to the database
- [ ] Add .env variable for the review email. Notify deployers about the .env change.
Testing
- [ ] Add factories for new Job attributes
- [ ] Unit tests for the state calculation method
- [ ] Test controller action including sending email
Cleaning up:
- [ ] Add localization strings
Resolved by #707
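The backend task "calculate the state of the job post from other attributes" could be sketched like this. This is a Python sketch rather than the project's actual stack; the attribute names follow the issue, but the function name and exact precedence rules are my reading of the states described above:

```python
from datetime import datetime

def job_state(published_flag, opened_at, closed_at, review_requested_at, now=None):
    """Derive the dashboard state of a job poster from its attributes.

    Sketch only: draft -> submitted -> posted -> closed, per the issue text.
    """
    now = now or datetime.now()
    # Closed: published and the "Closed" date has been reached
    if published_flag and closed_at and now >= closed_at:
        return "closed"
    # Posted: published and the "Open" date has been reached
    if published_flag and opened_at and now >= opened_at:
        return "posted"
    # Submitted: sent to Talent Cloud but not yet live
    if review_requested_at:
        return "submitted"
    return "draft"
```

Keeping the state purely derived (rather than stored) avoids a state column that can drift out of sync with the timestamps.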
[Android] Build failed undeclared identifier 'aligned_alloc' I'm trying to build codon from source on Termux, using clang. ~/.../codon-0.15.5/build $ clang --version clang version 15.0.7 Target: aarch64-unknown-linux-android24 Thread model: posix InstalledDir: /data/data/com.termux/files/usr/bin (Skipping...) [ 50%] Linking C static library libgc.a [ 50%] Built target gc [ 50%] Generating kmp_i18n_id.inc [ 50%] Generating kmp_i18n_default.inc [ 50%] Built target libomp-needed-headers [ 50%] Building CXX object _deps/openmp-build/runtime/src/CMakeFiles/omp.dir/kmp_alloc.cpp.o In file included from /data/data/com.termux/files/home/downloads/codon-0.15.5/build/_deps/openmp-src/runtime/src/kmp_alloc.cpp:13: In file included from /data/data/com.termux/files/home/downloads/codon-0.15.5/build/_deps/openmp-src/runtime/src/kmp.h:118: /data/data/com.termux/files/home/downloads/codon-0.15.5/build/_deps/openmp-src/runtime/src/kmp_barrier.h:111:51: error: use of undeclared identifier 'aligned_alloc' distributedBarrier *d = (distributedBarrier *)KMP_ALIGNED_ALLOCATE( ^ /data/data/com.termux/files/home/downloads/codon-0.15.5/build/_deps/openmp-src/runtime/src/kmp_barrier.h:24:47: note: expanded from macro 'KMP_ALIGNED_ALLOCATE' #define KMP_ALIGNED_ALLOCATE(size, alignment) aligned_alloc(alignment, size) ^ 1 error generated. make[2]: *** [_deps/openmp-build/runtime/src/CMakeFiles/omp.dir/build.make:76: _deps/openmp-build/runtime/src/CMakeFiles/omp.dir/kmp_alloc.cpp.o] Error 1 make[1]: *** [CMakeFiles/Makefile2:2050: _deps/openmp-build/runtime/src/CMakeFiles/omp.dir/all] Error 2 make: *** [Makefile:146: all] Error 2 It's native Termux I'm using, not proot-distro linux. 
~/.../codon-0.15.5/build $ neofetch --off
u0_a202@localhost
-----------------
OS: Android 11 aarch64
Host: Redmi 21091116AC
Kernel: 4.14.186-perf-gd65dbc980d48
Uptime: 4 days, 10 hours, 49 mins
Packages: 309 (dpkg), 1 (pkg)
Shell: bash 5.2.15
CPU: MT6833P (8) @ 2.000GHz
Memory: 3362MiB / 5635MiB
Looks like a known libmusl issue, as aligned_alloc is not implemented.
This is an issue with LLVM's OpenMP. If it can be compiled on Termux, then this should work; you will probably need a different set of flags. I will close this issue for now as it is an upstream issue; if anybody has a correct set of OpenMP compilation flags for MUSL distros, please let us know and we will integrate it into our build script.
It is possible to view all the callbacks Paysera sent to your system and the response your system provided. Here you can see the URL that was used for a callback and your system's response. You can use this URL to trigger the callback manually.
Unique project number. Only activated projects can accept payments.
Order number from your system.
It is possible to indicate the user language (ISO 639-2/B: LIT, RUS, ENG, etc.). If Paysera does not support the selected language, the system will automatically choose a language according to the IP address, or the ENG language by default.
Amount in cents the client has to pay.
Payment currency (i.e. USD, EUR, etc.) you want the client to pay in. If the selected currency cannot be accepted by a specific payment method, the system will convert it automatically to an acceptable currency, according to the currency rate of the day. The payamount and paycurrency answers will be sent to your website.
Payment type. If provided, the payment will be made by the specified method (for example by using the specified bank). If not specified, the payer will be immediately provided with the payment types to choose from. You can get payment types in real time by using the WebToPay library.
Payer's country (LT, EE, LV, GB, PL, DE). All possible types of payment in that country are immediately shown to the payer after selecting a country.
Payment purpose visible when making the payment.
Payer's name received from the payment system. Sent only if the payment system provides it.
Payer's surname received from the payment system. Sent only if the payment system provides it.
Payment status:
0 - Payment has not been executed
1 - Payment successful
2 - Payment order accepted, but not yet executed
3 - Additional payment information
4 - Payment was executed, but confirmation about received funds in the bank won't be sent.
The parameter which allows you to test the connection. The payment is not executed, but the result is returned immediately, as if the payment had been made.
Country of the payment method. If the payment method is available in more than one country (international), the parameter is not sent. The country is provided in the two-character (ISO 3166-1 alpha-2) format, e.g.: LT, PL, RU, EE.
Country of the payer established by the IP address of the payer. The country is provided in the two-character (ISO 3166-1 alpha-2) format, e.g.: LT, PL, RU, EE.
Country of the payer established by the country of the payment method, and if the payment method is international, by the IP address of the payer. The country is provided in the two-character (ISO 3166-1 alpha-2) format, e.g.: LT, PL, RU, EE.
Payer's email address is necessary. If the email address is not received, the client will be requested to enter it. The Paysera system will inform the payer about the payment status at this address.
Amount of the transfer in cents. It can differ if it was converted to another currency.
The transferred payment currency (i.e. USD, EUR, etc.). It can differ from the one you requested if the currency could not be accepted by the selected payment method.
A version number of the Paysera system specification (API).
It is a request number, which we receive when the user clicks on the logo of the bank. We transfer this request number to the link provided in the "callbackurl" field.
Parameter which checks whether you get the answer from our server. This is the most reliable way to check it.
Parameter which checks whether you get the answer from our server. This is not as reliable as _ss2 to check it.
You can download a script example from
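As a small illustration of two details above, amounts arriving as integers in cents and the language fallback to ENG, here is a hedged Python sketch. The helper names are mine and not part of Paysera's API or the WebToPay library:

```python
def format_amount(amount_cents, currency):
    # Callback amounts are integers in cents; convert for display.
    return f"{amount_cents / 100:.2f} {currency}"

def pick_language(requested, supported=("LIT", "RUS", "ENG")):
    # Fall back to ENG when the requested ISO 639-2/B code is not
    # supported (the real system may also consult the IP address).
    return requested if requested in supported else "ENG"
```

Keeping amounts in cents end to end, and converting only at the display boundary, avoids the floating-point rounding errors that plague money handling.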
#ifndef TPFINALAYDAI_MESSAGE_HPP
#define TPFINALAYDAI_MESSAGE_HPP

#include <string>
#include <optional>

#include <algor/List.hpp>
#include <logger/Logger_impl/LogTypes.hpp>

namespace algor {
    class GeometricObject;
}

namespace logger {
    class Logger;
}

namespace logger::__detail_Logger {
    class Message {
        // Non-owning pointer to the message text; OwnerMessage frees it.
        const std::string *message = nullptr;
        std::optional<algor::List<algor::GeometricObject *>> attached_objects;
        FlagType flags;

    protected:
        Message() = default;

        // Initializers listed in declaration order (message, attached_objects,
        // flags) to match the actual member-initialization order.
        explicit Message(FlagType flags, const std::string *message)
                : message(message), flags(flags) {}

        Message(FlagType flags, const std::string *message, algor::GeometricObject *object)
                : message(message), attached_objects({object}), flags(flags) {}

        Message(FlagType flags, const std::string *message, algor::List<algor::GeometricObject *> objects)
                : message(message), attached_objects(std::move(objects)), flags(flags) {}

        friend class logger::Logger;

        void setFlags(FlagType flags) { this->flags = flags; }
        void setMessage(const std::string *message) { this->message = message; }
        void setAttachedObjects(algor::List<algor::GeometricObject *> attachedObjects) {
            this->attached_objects = std::move(attachedObjects);
        }

        void free_memory();

    public:
        virtual ~Message() = default;

        bool hasAttachments() const { return this->attached_objects.has_value(); }
        const auto &getAttachments() const { return *(this->attached_objects); }
        const auto &getMessage() const { return *(this->message); }
        FlagType getFlags() const { return this->flags; }
    };

    // Owning variant: releases the message and attachments on destruction.
    class OwnerMessage : public Message {
        explicit OwnerMessage(Message const &message);

        friend class logger::Logger;

    public:
        ~OwnerMessage() override { this->free_memory(); }
    };
}

#endif //TPFINALAYDAI_MESSAGE_HPP
Do the doors in this picture equal 48 houses?
So even after reading through the wiki and looking on websites, I'm having a rough time understanding what villagers consider to be a house. I made a villager breeder and got one baby villager, and now I'm not getting anything. I did some quick arithmetic (9 doors * 0.35 = 3.15; I had one villager up by the doors and three in the breeder) and it seemed I had reached the max villager population, so I added roughly 35 more doors. After doing the math with the new doors added (48 * 0.35 = 16.8), I wondered if the setup I had for the doors made them houses, and as mentioned before, I can't figure out how to know if they're houses or not. Do I have this set up wrong, or will the max villager population expand to 16 villagers?
Link to image is dead. ... and because the link to the image is dead, this cannot be answered, so it should be closed as unclear. That may be true, but without a clear question, the answer has much less meaning to anyone other than the OP.
Glad you made it work, but to answer your original question - in the picture I can count 16 valid doors - those are the middle doors on each side that are aligned with the block above the center villager - 4+4+5+3=16. To check if a door is valid, imagine a line perpendicular to the door that extends 5 blocks on each side of the door. The game counts how many blocks see the sky on each side of the door and compares the two numbers. In your setup, only the doors aligned with the block above the center villager are valid, because that block obstructs the sky. So on one side of each door (towards the villager) the game counts 4 blocks that can see the sky, and on the other side (away from the center) there are 5. To make all doors in your setup valid, you can add 4 more solid blocks to the block above the villager (making a plus sign).
Is this what you meant? http://imgur.com/aZK4dFZ
@ExoMute yep :)
Nine villagers in the breeder; they were breeding so quickly and now nothing. I changed the door design, is it looking good? http://imgur.com/cFjNZ6f I know breeding time is random, but dang, I'm one away from having the amount I want :/
I feel like the cobblestone and andesite 'ceiling' should be one block lower.
@ExoMute it seems your problem is no longer with the doors, but a separate issue - you may want to check other parts of the design of the breeding cell as well. Also keep in mind that sometimes it takes a lot of time for villagers to breed even if all conditions are met.
I think I've already said this before and then went back on it, but I gave it time and now I have 10 villagers in the breeder and 6 in trading slots. I think it's all good now. And seeing as the doors have to be in sunlight, will they not breed at night?
@ExoMute The game only checks if a block can see the sky, not whether the sun is out. It should work through the night as well.
Hey, your setup should work with time. The villager 'mating' rate is a lot different from other animals in the game. A villager has a random chance of creating 'babies', so it might take you longer/shorter than others. The doors you have set up should work just fine, as it's not a house villagers have to be in but a certain radius from a door. The more doors the better, so if you leave your villagers in that machine they should reproduce just fine. :)
When I threw some carrots in the breeder the hearts started flying, I now have 2-3 babies, so maybe only one villager was willing... ¯\_(ツ)_/¯
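The door-validity rule from the accepted answer (compare sky-visible block counts within 5 blocks on either side) and the rough 0.35 population cap can be sketched as a quick check. The function names are mine, and the mechanics are as described in this thread rather than taken from the game code:

```python
def door_is_valid(sky_blocks_side_a, sky_blocks_side_b):
    # A door counts as a "house" when the number of sky-visible blocks
    # (within 5 blocks, perpendicular to the door) differs between sides.
    return sky_blocks_side_a != sky_blocks_side_b

def max_villagers(valid_doors):
    # Population cap is roughly 35% of valid doors, rounded down.
    return int(valid_doors * 0.35)

# The obstructed doors in the thread: 4 sky blocks on the villager side
# versus 5 on the far side, so they are valid houses.
ok = door_is_valid(4, 5)
cap = max_villagers(48)
```

This matches the arithmetic in the question: 9 valid doors cap the population at 3, while all 48 doors valid would cap it at 16.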
Get an On-Demand License for MATLAB Parallel Server

If you have short-term needs for scaling, you may benefit from MATLAB Parallel Server™ on-demand licensing. Using on-demand licensing, you can scale on clusters and clouds and pay only for what you use. When using MATLAB Parallel Server on-demand, MathWorks® will charge your credit card monthly, in arrears, for the accrued worker-hours of your compute jobs. You are solely responsible for any additional billing for infrastructure and resources that you may incur. To determine if you are eligible to use on-demand licensing, refer to the requirements and availability section below.

Alternatives to On-Demand Licensing

Note that depending on your situation, an alternative might be better suited for your needs. For students and staff at an educational institution with a Campus-Wide License, MATLAB Parallel Server for Campus-Wide Licenses may already cover your needs and an on-demand license is not necessary. Check if your school has access.

Annual and Perpetual

If you do not have access to a Campus-Wide License and are not eligible for on-demand licensing, or have long-term needs for scaling, an annual or perpetual MATLAB Parallel Server license configured for online licensing may better fit your needs. Contact Sales for more information.

Get an On-Demand License Immediately Through MATLAB and MathWorks Cloud Center

If you are in the US or Canada, you can immediately get an on-demand license for MATLAB Parallel Server through the Cloud Center UI in MATLAB. MathWorks Cloud Center enables you to start MATLAB Parallel Server clusters on Amazon® Web Services (AWS®) using your AWS account and a MATLAB Parallel Server license set up for online licensing. You can access Cloud Center via the web app or using the Cloud Center UI in MATLAB. The clusters you start with Cloud Center can be accessed from your MATLAB session like any other cluster.
The Cloud Center UI in MATLAB can automatically create a new MATLAB Parallel Server on-demand license for you, if you do not already have access to an online license. If you already have access to a MATLAB Parallel Server online license, the Cloud Center UI will automatically use that online license, instead of creating a new MATLAB Parallel Server on-demand license. To access the Cloud Center UI in MATLAB: - Click the Parallel menu on the MATLAB toolstrip, then click Create and Manage Clusters. - In the Cluster Profile Manager, select Create Cloud Cluster and complete the required steps. Alternative Method to Request an On-Demand License If you are unable to get an on-demand license through MATLAB and MathWorks Cloud Center and you meet the requirements below, then you can request an on-demand license for MATLAB Parallel Server by starting with the question below. This process will take multiple business days. Do you already have Parallel Computing Toolbox?
Find and Hire Freelance C++ Developers with O4W

When Bjarne Stroustrup began developing C++ in 1979, he had the goal of creating an efficient, low-level programming language as well as a programming language with a high level of abstraction. The first version of C++ built the core of what is still a very powerful programming language today. After just a few years and versions it was in use all over the world. It was ISO standardized in 1998 and has been in continuous development ever since. C++ is used especially in systems programming.

C++ developers: experts for complex tasks

C++ has a number of advantages; the language is very well standardized. Well-written programs are run-time efficient and there are compilers for all major computer platforms. However, there are drawbacks as well. C++ is considered to be very complex, error-prone, and demanding on a developer. Errors are avoidable, yet can be serious when they occur. The quality of the developer is therefore crucial. A good C++ developer can write professional and error-free code in a short amount of time, whereas a less competent C++ developer will take much longer and produce lower-quality code. Clients should therefore pay careful attention when selecting a knowledgeable C++ developer.

Good advice is not expensive

Outsourcing4work is committed to the objective of supporting clients with proven experts, especially when it comes to C++ developers. Thanks to our connection to the world's largest outsourcing market in India, as well as our stable relationships with reputable local partners, we have an almost inexhaustible supply of skilled employees. We, and our partners, select the most capable candidates and compare their profiles with your requirements. We recommend the best candidates to our clients for selection. Thus we ensure that we provide exclusively suitable, experienced and motivated C++ developers for the project. And this advice is not expensive; in fact, it's free.
With Outsourcing4work you only pay when the C++ developers actually work for you.

Profit without risk

In addition to the low costs, Outsourcing4work offers additional customer benefits and protects you as a client from various risks.
- With our precise monitoring you are always informed of the current status of the C++ developer's work. You also only pay for productive work time. Breaks or downtime are not included in the invoice.
- As a German company, our contracts with you are in German or English. Legal transparency can therefore be guaranteed.
- Our project managers speak German and English (and some Hindi) and ensure smooth cooperation between the client, Outsourcing4work and the C++ developers.
- We take over all administrative obligations, as well as the support of the C++ developers hired through us. Our clients receive maximum flexibility without extra effort.

We call this concept 'Outsourcing made in Germany' and believe that you will not find a comparably comprehensive and cost-effective offer in Europe. We look forward to getting to know you! As you can see, a C++ developer from Outsourcing4work is an attractive option. If you send us a brief description of your planned project or task, we can check whether our developers are suitable for your needs. Free, with no obligation, within 24 hours. We promise.
I'm sure you can see how that would be helpful, so let's get started. To see them, you must log on to the Windows 8 Desktop. Add Task Scheduler to an MMC console, and now you can manage the scheduled tasks on the machine. Hey guys, can anyone tell me how I can stop Windows from reloading what I had open before I shut down the laptop? Press the Windows key and R key together, type shell:startup and click OK. I tried your advice one last time before I was going to throw the machine out. Follow the steps to update the hardware driver: a. To make this work, you must be signed in as an administrator. On these versions of Windows, you can simply open your Start menu, locate a shortcut to an application you want to start automatically, right-click it, and select Copy. Or another method: Stop Auto Reopen of Programs after Restart in Windows 10. Walter Glenn is the Editorial Director for How-To Geek and its sister sites. This was quite simple to do on previous versions of Windows, but on Windows 8 it appears to be an issue. Have you tried any troubleshooting steps to fix this issue? Please reply with the status of the issue so that we can confirm that it is fixed. This can make starting the program quicker, but will often slow down your startup process and may result in a lot of open windows. Right-click and select Copy (step 9). Use a reputable antivirus program like Microsoft Security Essentials or Kaspersky, along with a malware removal program like Malwarebytes Anti-Malware. Hope you find this helpful! I've been playing with computers since I took a required programming class in 1976. Perform virus and malware scans if programs continue to load at boot. Summary Technical Level: Basic Applies to: Windows 8 The Startup folder in Windows makes it convenient to have programs start automatically when you start Windows. Everything else works fine as far as I know.
One approach is to right-click in the right-hand pane, beneath any pre-existing shortcuts, and click New, then Shortcut. That last metric is just a measurement of how long it takes the app to start. Right-click File Explorer and you will see the shortcut to the Startup folder. To do so, right-click on the Name column and then click Command line to add a Command line column at the far right. Next, locate the Startup folder under All Apps in the Start menu, right-click it, and select Paste to paste a copy of that shortcut. Hi, before we start with the troubleshooting steps, I would like to know some information about this issue. Do you get any error code or error message during startup? So I like to keep an eye on what is starting up every time. This brief tutorial is going to show you how to auto-start programs in Windows 8 every time you sign in to your desktop. He's written hundreds of articles for How-To Geek and edited thousands. You can now add any shortcuts you want to start when you start your computer. Please let us know the results and if you need further assistance. Finally, if you just can't find what you're looking for, ask! You can also just use this tool to run a command at login. This is a useful feature to avoid disabling crucial programs. In the above instructions, we show the steps to add an app, but you can also add files, folders, and shortcuts to websites. Of course I strongly recommend you browse around -- there's a ton of information just waiting for you. When I open the Start screen and scroll down to see the list of all my programs, as soon as I move my cursor, a program will open automatically from the list, and it seems to be completely random. In earlier versions of Windows, you had to dig into tools like Msconfig -- which is powerful, if a little clunky to use. To start enabling or disabling startup programs, open your Task Manager first.
Using this setting you can change what programs automatically run when Windows 8 starts. You can either press Ctrl + Shift + Esc, right-click the Taskbar and click Task Manager, or type Task Manager on the Start screen and press Enter to open it. He's also written hundreds of white papers, articles, user manuals, and courseware over the years. Again, it's completely random which programs decide to open as soon as I move my cursor in the Apps view on the Start screen. This may or may not cause your computer's startup to become slow. Right-click the application you would like to have start automatically. I would suggest you try the following methods and check if they work for you. Find the location of startup programs in Windows 10. Tip: You can access the Startup folder by typing shell:startup in the Run command box. Anyway, it didn't help, and programs keep opening at random. I want comments to be valuable for everyone, including those who come later and take the time to read. Then select Paste Shortcut (step 11). At least one special case: Task Manager. One program some people like to run automatically is Task Manager. Sometimes, though, you might want to configure a program to start automatically at a later point.
Hi there!!! 👋 It's the 17th day of the #100dayschallenge, and today I will discuss performance optimization in SRE. Performance optimization is improving the speed and efficiency of a system or application by identifying and addressing performance bottlenecks. It involves analyzing the system or application to identify areas slowing down performance and implementing changes to improve it. So, I have planned the contents for the next 100 days, and I will be posting one blog post each and every day under the hashtag #100dayschallenge. I hope you tag along and share valuable feedback as I grow my knowledge and share my findings. 🙌

Strategies for Performance Optimization

Here are a few strategies SREs can utilize in their organizations:
- Scaling Up (Vertical Scaling): Scaling up involves adding more resources to a single server, such as upgrading the CPU or adding more memory or storage. This strategy is useful when a system is limited by its hardware resources.
- Scaling Out (Horizontal Scaling): Scaling out involves adding more servers to the system and distributing the workload across them. This strategy is useful when a system is limited by its processing capacity.
- Caching: Caching involves storing frequently accessed data in memory or on disk to reduce the number of requests to the database or file system. This strategy can significantly improve the performance of read-heavy applications.
- Load Balancing: Load balancing involves distributing the workload across multiple servers to improve performance and availability. This strategy can be used together with scaling out to further improve performance.

Performance Tuning Techniques

- Code Optimization: Optimizing the code is one of the most effective ways to improve performance. This includes removing redundant code, using efficient algorithms and data structures, and minimizing the number of database queries and file I/O operations.
- Memory Management: Proper memory management is crucial for optimizing performance.
This includes optimizing memory usage, reducing memory leaks, and using efficient memory allocation techniques.
- Database Tuning: Database performance can be improved by optimizing queries, indexing tables, and minimizing data redundancy. A cache layer like Redis or Memcached can also help improve database performance.
- Network Optimization: Network performance can be improved by optimizing network protocols, minimizing network congestion, and using techniques like data compression and pipelining.
- System Tuning: System performance can be improved by fine-tuning various system parameters such as kernel parameters, file system settings, and system resources like CPU and memory.
- Load Testing: Load testing is a technique that simulates real-world usage and measures system performance under heavy loads. This can help identify performance bottlenecks and optimize the system accordingly.
- Profiling: Profiling is the process of analyzing the performance of an application to identify performance issues. This includes identifying areas of the code that are taking too long to execute and optimizing those sections.
- Parallelization: Parallelization is dividing a task into smaller, parallel tasks that can be executed simultaneously. This can improve performance by taking advantage of multiple CPU cores and reducing processing time.
- Monitor Performance Metrics: This involves tracking performance metrics such as response time, CPU usage, memory usage, disk usage, and network latency. Monitoring these metrics allows SRE teams to identify performance issues early on and take corrective action before they become significant problems.
- Optimize Code: Writing efficient and optimized code can significantly improve application performance. SRE teams should ensure that code is optimized for performance by reducing redundant code, minimizing the number of database queries, and optimizing loops and conditional statements.
Code optimization should also include proper error handling and memory management.
- Use Caching: By caching frequently accessed data in memory, SRE teams can significantly reduce the number of database queries and improve application response time. Caching can also help reduce network latency by reducing the number of requests that must be sent to the database.
- Use Load Balancers: Load balancing is essential for distributing traffic across multiple servers, improving application performance, and reducing downtime. SRE teams should use load balancers to distribute traffic evenly across multiple servers, ensuring no single server is overloaded.
- Optimize Database Performance: SRE teams should ensure that databases are optimized for performance by indexing, avoiding table scans, and optimizing SQL queries. Proper database schema design is also essential for performance optimization.
- Use Content Delivery Networks (CDNs): CDNs cache content at various points around the globe, reducing the time it takes for content to reach users. SRE teams should use CDNs to cache static content such as images, videos, and documents.
- Optimize Network Performance: SRE teams should optimize network performance by minimizing latency, reducing network hops, and optimizing bandwidth. Network optimization techniques include using content compression, reducing the number of requests, and minimizing the size of requests and responses.
- Test and Update Systems: SRE teams should regularly test and update systems to ensure they are optimized for performance. This includes performing load testing, stress testing, and other types of testing to identify performance issues. SRE teams should also update systems regularly with the latest software updates, security patches, and bug fixes.
- Use different performance testing tools: Various performance testing tools are available, each with its own strengths and weaknesses.
By using various tools, SRE teams can get a more complete picture of the performance of their systems and applications.
- Work with the development team: The development team is responsible for designing and building the systems and applications. By working with the development team, SRE teams can ensure that performance is considered from the beginning of the development process.
- Use automation: Automation can help save time and effort when performing performance testing and tuning. Various automation tools are available, each with its own strengths and weaknesses. SRE teams can automate their performance testing and tuning process by choosing the right tool.
- Flame Graphs: Flame graphs are a visualization tool for analyzing and optimizing performance. They are used to graphically represent the performance of an application, showing the distribution of resources used by different parts of the application. Flame graphs are created by sampling a running application and generating a graphical representation of the call stack, where each call stack frame is represented as a colored box.
- Valgrind: Valgrind is a popular profiling tool used in software development. It is a memory profiling and leak detection tool designed to identify memory leaks and other memory-related issues in applications. It is a powerful tool that can be used to analyze and optimize the performance of complex applications.
- Gprof: Gprof is a profiling tool that is used to measure the performance of applications. It is a popular tool among developers because of its ease of use and versatility. Gprof works by instrumenting an application's code to measure the time spent in each function and then generates a report showing the time spent in each function.
- Perf: Perf is a profiling tool built into the Linux operating system. It is a powerful and flexible tool that can be used to analyze and optimize the performance of applications.
Perf works by measuring the system's performance at different points in time and then generates reports showing how different parts of the system perform.
- DTrace: DTrace is a dynamic tracing tool used to analyze and optimize the performance of applications. It is a powerful tool that can be used to trace system calls, kernel functions, and other system events. DTrace is especially handy for identifying performance bottlenecks in complex applications.
- Strace: Strace is a system call tracer that is used to monitor and debug applications. It can be used to trace the system calls made by an application and identify any issues that may impact the application's performance.
- Apache JMeter: A Java-based load testing tool that can simulate heavy loads on a server, website, or network to measure performance and identify issues.
- Geekbench: A cross-platform benchmarking tool that can measure the performance of a computer or mobile device's CPU and GPU.
- UnixBench: A benchmarking tool that can be used to measure the performance of a Unix-based system. It includes a test suite that measures CPU, file system, and memory performance.
- FIO: A flexible I/O tester and benchmark tool that can measure the performance of disk I/O and file systems.
- Sysbench: A benchmarking tool that can be used to measure CPU, memory, file system, and database performance. It supports a range of database engines, including MySQL, PostgreSQL, and SQLite.
- Phoronix Test Suite: A cross-platform benchmarking tool that can measure the performance of hardware and software components. It includes a test suite that measures CPU, GPU, memory, and file system performance.

It is important to continuously monitor the system and make adjustments as needed. Performance optimization is an ongoing process, and it requires constant attention to ensure that systems are running at peak performance.
This can be achieved through automated monitoring tools that provide real-time insights into system performance. Performance tuning is essential in Site Reliability Engineering (SRE) because it helps achieve optimal application performance, stability, and availability. SREs use performance tools to understand application behavior and identify areas for improvement. Performance tuning is especially significant in production environments, where SREs primarily focus on ensuring reliable services.

Resources:
- Automating Performance Tuning with Machine Learning
- Performance Improvements - Google SRE
- OpenAI ChatGPT
- Google Bard
- Perplexity AI

Thank you for reading my blog post! 🙏 If you enjoyed it and would like to stay updated on my latest content and plans for next week, be sure to subscribe to my newsletter on Substack. 👇 Once a week, I'll be sharing the latest weekly updates on my published articles, along with other news, content and resources. Enter your email below to subscribe and join the conversation for free! ✍️ I am also writing on Medium. You can follow me here.
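As a small footnote to the caching strategy discussed above, here's a minimal Python sketch of application-level caching. The "database query" is a hypothetical stand-in; the idea is simply that repeated reads of the same key never hit the backend twice:

```python
from functools import lru_cache

# Counter so we can observe how many "queries" actually ran.
calls = {"count": 0}


@lru_cache(maxsize=256)
def fetch_user(user_id):
    """Stand-in for an expensive database lookup (hypothetical)."""
    calls["count"] += 1
    return {"id": user_id, "name": f"user-{user_id}"}


fetch_user(42)
fetch_user(42)  # second call is served from the in-memory cache
print(calls["count"])  # only one real "query" was executed
```

Redis or Memcached play the same role across processes and hosts, which is why they show up in the database-tuning bullet above.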
I (foolishly) bought two of these drives for a Duo two weeks ago without checking the compatibility list, to upgrade from 2x500GB WD drives. Exact drive = WD1002FAEX-00Z3A0. I have tried hot-swapping each drive in turn according to the instructions and neither works so far. Errors are the same for both: 'The disk attached to channel 1 could not be used. The most common reasons are RAID resync in progress, faulty drives, and disks that are too small to be added to the array.' 'A SATA reset has been performed on one or more of your disks that may have affected the RAID parity integrity. It is recommended that you perform a RAID volume resync from the RAID Settings tab ( accessible in the Volumes page => Volume tab in FrontView ). The resync process will run in the background, and you can continue to use the ReadyNAS in the meantime.' The blinking light error has been constant for each drive. There has been no sync progress (only 114GB stored on the 500GB drive in the Duo, and I left the first disk in situ for 48 hours). Is there anyone who knows what I can do to attempt to use these disks? Would a jumper make any difference? Is there any disk firmware upgrade that could make a difference? I'm interested in knowing the progress of this model too. Being new to ReadyNAS, I bought disks at the same time as my ReadyNAS unit (Ultra 6 Plus). I bought 4 of these WD1002FAEX 1TB drives, and now I find they are not usable. This really surprised me, since I've used these as internal drives before and just thought they were standard, good-quality SATA drives. With my Mac Pro I just plugged them in and they worked. Why is that not possible with RAID? Is there some low-level stuff going on that makes for more issues with particular drives? It would be nice to see a full explanation given in the FAQ - I couldn't see one there.
I can guess why there is a compatibility list (to protect users against data loss from untested models), but it would have been nice to have seen a large warning sign of some kind saying "Do NOT buy drives until you've bought the NAS!!". I only knew of the HCL's existence once I started reading the manual - and that was only once I had the product unpacked. This is a bit "cart before horse", and I think Netgear ought to consider making this issue more prominent on their web site, on the pages where potential customers are browsing. I'm pretty savvy about IT in general but new to RAID and NAS - and I missed it. Now I'm not sure whether to return the three drives I haven't opened for a refund or whether to wait a while in the hope that this model might be included soon... any advice on that, anyone? I would return the drives as soon as you can. There is no guarantee that the drives will pass when they are tested. One of the problems with drives in a RAID configuration is that they have to be synchronized for them to work efficiently. Some drives have features in their electronics that manage spin-down and power-saving behavior, especially the green drives. This in effect fights the RAID controller. While the drives in question are fairly new 6Gb/s drives and are not green drives (7200 RPM), they may pass when tested. They are a little higher on the cost curve than other 1TB drives, even other WD drives. It also does not help that WD strongly discourages the use of consumer-grade drives in a RAID environment, only supporting the more expensive enterprise drives, which have greater vibration stability. Hmm.. still have problems, even with a compatible drive (WD RE4 WD1003FBYX). It takes a couple of minutes showing the logo, then boots and says "No disks detected". ?? So I can't even get to square one here. I'll try another slot...
Last year I bought two SAMSUNG HD103SJ drives, but two weeks ago I suddenly had a hard disk crash and one of them was dead. I sent the disk in for service and, due to the shortage of hard drives, they gave me a Western Digital Black WD1002FAEX. Nice drive, but I was really doubtful because of the lack of disk compatibility. I told the service guy that and got the right to return it as long as I didn't break the ESD bag. When I check the official HCL for the ReadyNAS Duo, I can read that the 2TB version is supported. Does this mean that the 1TB version is also supported? Should I return the disk based on this thread? Did anyone have problems with mixing two different brands, Samsung and Western Digital?
If you do not provide a fully qualified type name (the full namespace name along with the type name), C# generates compiler error CS0246. You can also reference type members with a using static directive, including Visual Basic modules and F# top-level functions. Mar 16, 2012. How do I resolve the error "Compile Error: Can't find project or library"? This can occur when using a Microsoft Access or Excel document that integrates buttons or functions that need Visual Basic for Applications (VBA) or macros to perform. The current fix is available from Microsoft Product Support Services, at 1-800-936-4900. This story, "Microsoft offers fix for Visual Basic 2005 compiler" was originally published by InfoWorld. How to fix an overflow error in VBA: in software, a stack overflow occurs if the call stack pointer exceeds the stack bound. The call stack may consist of a limited amount of address space. Windows operating system misconfiguration is a common cause of "Runtime error 6: Overflow" VBA error reports. Complete List of Modules and Classes in Total Visual SourceBook for Microsoft Access, Office/VBA, and VB6. A Visual Basic runtime or compile error indicates the error lies in a global template (*.dot) or add-in (*.wll) located in one of the Startup folders. Compile error in hidden module: This workbook. – Microsoft. – I've got the following error after installing Office 2010.
It's only shown once I want to open/close Excel: Microsoft Visual Basic for Applications: Compile error. Hey, a Windows update the other day seems to be causing this error in AutoCAD. I think it is only a problem with the 2007 version of Office. "Compile error: The code in this project must be updated for use on 64-bit systems." You write a Microsoft Visual Basic for Applications (VBA) macro code that uses. Since their introduction in 2002, Microsoft's .NET languages have gained better error messages, better compiler error messages and support within Visual Studio, all round making F# fit into the .NET landscape as well as C# and Visual Basic already do. This soon-to-be seismic shift in Microsoft development brings IntelliSense and XML documentation; Visual Basic .NET has all the power of C#, plus some additional features like an always-on background compile that provides full real-time error checking. Visual Basic compiler errors occur when the compiler encounters problems in the code. The code causing the errors is marked with a wavy line underneath it in the code editor. Jun 30, 2008 · I am trying to run/step through my VB module and I am getting a compile error – "Can't find project or library" – on the code below: StringNum =. MSDN Magazine Issues and Downloads: read the magazine online, download a formatted digital version of each issue, or grab sample code and apps. Feb 15, 2006.
The Visual Basic compiler is unable to recover from the following error: System Error &H8013141e&. Save your work and restart Visual Studio. To define the "TRACE" conditional compilation symbol in C#, add the /d:TRACE option to the compiler command line when you compile your code using a command line. The coming release will include Microsoft's "Roslyn" .NET compiler platform, ASP.NET vNext (codenamed Project K) and support for Apache Cordova tooling. With that release, the C# and Visual Basic compilers and the integrated development environment are rebuilt on Roslyn. Visual Basic compiler errors occur when the compiler encounters problems in the code. Error Messages: How to Get Information about Visual Basic Compiler Errors. "[It] is about opening the compiler and making all that information available so [the developer] can harness all of this knowledge," he said. Roslyn is a compiler for C# and Visual Basic with a set of APIs.
Virtual desktops don't work. Hi guys, my virtual desktops are not working - clicking on them does nothing. Any idea? vlw fwi, Holmes :) Virtual desktops work fine for me, what specifically is not working for you? I have two virtual desktops (1 and 2); I work in the first one, and when I click on the second, nothing changes. vlw fwi, Holmes :) I assume you mean the JWM pager? This works for me. What version of JWM? Does it fail even with the default configuration and no other program that might interfere running? I need more information to be able to reproduce this. Version JWM 2.3.6, my configuration is in https://github.com/kibojoe/desktop-settings-kibojoe/tree/master/shared/skel_new/.jwm - I do not know what could be happening! Is this something that you just noticed broke recently or? Does it work with the default configuration? I compared my configuration with the default and found nothing wrong. Strange things are happening: when I open the terminal, the virtual desktops work - see images http://www.auplod.com/u/pluoda9a434.png http://www.auplod.com/u/lapudo9a435.png I did a test with the file manager: it is on virtual desktop 1, and when I click on 2 (which is empty) nothing happens - the virtual desktop image is carried over to 2, see images http://www.auplod.com/u/puloda9a436.png http://www.auplod.com/u/pudloa9a438.png Very strange... vlw fwi, Holmes :) Are you using a file manager that slaps icons onto the desktop? The desktop has no icons; this also occurs with the browser (Pale Moon). vlw fwi, Holmes :) I'm going to review some things here - I think I know what happened! vlw fwi, Holmes :) It really is in trouble here. One thing I noticed is that sometimes it works and sometimes it does not; for example, I just restarted the PC and the virtual desktops are not working. vlw fwi, Holmes :) Just curious, do the keybindings Ctrl+Alt+Right and Ctrl+Alt+Left still work? @yetanothergeek yes, they work!
But the windows that are on 1 are also on 2. vlw fwi, Holmes :) Sounds likely that some application is telling JWM that it wants to be on all desktops. Not enough information here to say for sure. Strange... because sometimes the virtual desktops work and sometimes they don't? Do you see any problems with my settings? vlw fwi, Holmes :) Hi all, in previous versions of Manjaro JWM the virtual desktops worked; I do not know why they are not working now. I will have to remove them from the new version of Kibojoe Linux 17.09. vlw fwi, Holmes :) Does this happen with the default configuration included with JWM? Does it happen with an application I'm likely to have (xterm, perhaps)? What does xprop say when you run it and click a window that is showing up on multiple desktops? Does this happen with the default configuration included with JWM? — the same thing. Does it happen with an application I'm likely to have (xterm, perhaps)? — xterm appears on multiple desktops. What does xprop say when you run it and click a window that is showing up on multiple desktops? — the result: https://gist.github.com/kibojoe/9afca47ce769d7b2649dac4f0568a5bb. I do not know what is happening! In previous versions of Manjaro JWM it worked well. vlw fwi, Holmes :) This doesn't happen for me with xterm. _NET_WM_STATE_STICKY is set on that window, so JWM thinks the window should be on all desktops. Unfortunately, I don't know how that property got set. JWM is capable of setting it, as are other applications; xterm itself almost certainly wouldn't set it. If you run nothing but JWM and xterm, does it still happen? @joewing I found the error — the sticky state was enabled. I disabled it and now it works. That's good to hear, thanks for the update!
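For reference, the sticky state discussed above can be inspected and cleared from a terminal with standard X11 tools (a sketch; the window name "xterm" is just an example, and wmctrl must be installed separately):

```sh
# Show the window's EWMH state; a sticky window lists _NET_WM_STATE_STICKY
xprop -name "xterm" _NET_WM_STATE

# Ask the window manager to clear the sticky flag on that window
wmctrl -r "xterm" -b remove,sticky
```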
Lists are one of the core data structures in Elm. Elm supports lists on the syntactical level and the List core module has the usual basic utilities you would expect from a functional language. In this post, we take a look at lists in general and some of the useful functions from that module. About This Series This is the seventh post in a series of short and sweet blog posts about Elm. The stated goal of this series is to take you from “completely clueless about Elm” to “chief Elm guru”, step by step. If you have missed the previous episodes, you might want to check out the table of contents. A great way to follow along and immediately try the code of this episode (instead of just reading it, which would probably be quite boring) is Elm’s REPL (read-eval-print-loop). With Elm installed, just start elm-repl in the command line. You should see something like this: >elm-repl ---- elm repl 0.16.0 ----------------------------------------------------------- :help for help, :exit to exit, more at -------------------------------------------------------------------------------- Now any Elm expression that you type will be evaluated immediately and the result is printed back to you. You probably wouldn’t want to develop anything complicated in the REPL but it’s great for playing around with some basic code snippets. So each time you see some code in this episode, try it out in the REPL and tinker with it. Have fun! There are a number of ways to create lists in Elm. The most straightforward thing is to simply write down the elements, comma separated, between square brackets: aList = [1, 2, 3, 4] Result: [1,2,3,4] : List number This looks a lot like arrays in C-style programming languages, but lists in Elm do not support positional access (you cannot simply read/set the element at index n). Elm also has an Array module that offers positional access. But only lists are directly supported by syntactical elements, so most of the time you’ll be working with lists.
As mentioned in the last episode, the actual type of a list always contains the type of its elements, that’s why the REPL infers the type List number here (you could also annotate this as List Int – number is a supertype of Int). Of course you can build lists from any type of values, not just primitives like Int, as long as all elements have the same type. Here’s a list of tuples for you: [(1, 2, "three"), (4, 5, "six"), (7, 8, "nine")] However, the following would raise a type error, because the second tuple has a different type than the first. [(1, 2, "three"), (4, "five")] -- TYPE MISMATCH --------------------------------------------- repl-temp-000.elm The 1st and 2nd elements are different types of values. 3│ [(1, 2, "three"), (4, "five")] ^^^^^^^^^^^ The 1st element has this type: ( number, number', String ) But the 2nd is: ( number, String ) Hint: All elements should be the same type of value so that we can iterate through the list without running into unexpected values. When lists get longer you can and should split their definition over multiple lines. The style most people are used to (and which works fine in Elm) would probably look similar to this: aList = [(1, 2, "three"), (4, 5, "six"), (7, 8, "nine")] However, a lot of Elm code (including the code in Elm core and several community packages) uses a different style where the comma is at the start of the line: aList : List (number, number, String) aList = [ (1, 2, "three") , (4, 5, "six") , (7, 8, "nine") ] Of course, this is simply a matter of taste but it probably helps to have seen this style once so you know what’s up here. (A remark for those of you who are following along with the REPL: Multiline expressions are possible in the REPL, though a bit of a hassle. End each line with a \ and start all lines except the first with a space. Or just skip the REPL tinkering for the multiline code snippets.) Another way to create lists is the dot notation. The following snippet creates a list of Ints from 1 to 10: [1..10]
[1,2,3,4,5,6,7,8,9,10] : List number Last but not least, in addition to the syntactical constructs to create lists you can also use functions from the List module. List.repeat takes an integer n and one arbitrary value and returns a list with n copies of this value. List.repeat 4 "Elm" ["Elm","Elm","Elm","Elm"] : List String While this section is called “List Manipulation” you cannot actually manipulate an existing list. In Elm, everything is immutable. The functions to manipulate a list all create a new list and leave the original list unchanged. The prepend operator :: prepends an item to the start of the list: 1 :: [2, 3, 4] [1,2,3,4] : List number You can prepend multiple times in a row. Theoretically you could always start with an empty list and build your lists only by prepending elements: 1 :: 2 :: 3 :: 4 :: [] [1,2,3,4] : List number There is no operator to add a single element to the end of a list. You can, however, append a list to another list: List.append [1, 2, 3] [4, 5] [1,2,3,4,5] : List number There is an infix operator ++ that is an alias for append: [1, 2, 3] ++ [4, 5] [1,2,3,4,5] : List number So to add a single element to the end of a list you usually just wrap it in a list and use `append`/`++`, like this: [1, 2, 3] ++ [4] [1,2,3,4] : List number Another way to build up a single list from smaller lists is the concat function which takes a list of lists and concatenates all of the individual lists into one large list: List.concat [ ["one", "two", "three"], ["four", "five"], ["six"], ["seven", "eight", "nine"] ] ["one","two","three","four","five","six","seven","eight","nine"] : List String Classics of Functional Programming Now that we know a few different ways to build and manipulate lists, let’s have a look at some of the classical list functions that go beyond that. Where would functional programming be without a map function? Of course the List module has one.
List.map takes a function and a list and applies the function to all elements in the list. The result is a new list in which each element is the result of the function, applied to the respective element in the original list. Here is an example (please execute import String in the REPL before trying the example): List.map (\ word -> String.length word) ["a", "ab", "abc"] [1,2,3] : List Int In this example, we applied the length function to all elements in the list of strings. List.filter is a similar evergreen. It takes a function and a list and removes all elements from the list for which the function returns False. The following snippet removes all negative numbers from the list. List.filter (\ n -> (n > 0)) [-1, 3, -2, 7] [3,7] : List number There are a lot more useful functions in the List module. Just to name a few: List.head retrieves the first element of a list. List.tail returns a new list where the first element has been dropped. List.foldr reduces a list to a single value by successively combining its elements with a given function, starting from an initial accumulator value. Best check the List module’s API docs to see what else it has to offer. Finally, if the List module from core does not have what you need, check out the community package List.Extra for even more functional list goodness. This concludes the seventh episode of this blog post series on Elm. Continue with the next episode, on import statements in Elm.
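Since List.foldr is only described in prose above, here is a quick REPL example of it (just a sketch to try out; the folding function simply adds each element to the accumulator):

```elm
-- Sum a list: combine each element with the accumulator,
-- starting from the initial value 0 at the right end of the list.
List.foldr (\element acc -> element + acc) 0 [1, 2, 3, 4]
```

Result: 10 : number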
Kollaborate 3.0 is out now on the cloud. It is a major update to our workflow platform that both delivers new features and sets a foundation for future features to build upon. We're using machine learning technology to create transcripts of the spoken audio in your videos. Kollaborate will automatically highlight the current sentence as the video plays and you can click on sentences to jump to that point in the video. Kollaborate uses the transcript to create automatic captions below the video and you can even export them as a separate file in common formats like SRT or VTT. Best of all, this technology is completely self-contained on our servers. Privacy is extremely important to us, so we wanted a solution that protected our users' data and did not share it with third parties. We needed a solution that would also work for our self-hosted customers, some of whom host in environments with no external internet connection. While it would have been trivial from an engineering perspective to integrate with something like Google's or Amazon's speech-to-text technology like our competitors do, those services do not meet these criteria. After a lot of investigation and some code contributions, we finally settled on Mozilla's DeepSpeech. This is a more complex solution to the problem but it gives us maximum flexibility and the ability to finely tailor the technology to fit our customers' specific use-cases. Our competitors charge extra for transcription and limit the number of hours per month you can transcribe. Using DeepSpeech allows us to make our transcription service free and unlimited. Transcribe as many files as you like for no extra cost; the only limitation is how quickly our servers can process the queue.
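As a rough illustration of what the SRT export mentioned above involves (a hypothetical sketch, not Kollaborate's actual code), turning transcript cues into SRT text looks roughly like this:

```python
def srt_timestamp(seconds: float) -> str:
    """Format a time in seconds as an SRT timestamp (HH:MM:SS,mmm)."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"


def to_srt(cues):
    """Render (start_sec, end_sec, text) cues as an SRT document."""
    blocks = []
    for i, (start, end, text) in enumerate(cues, start=1):
        blocks.append(
            f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n"
        )
    return "\n".join(blocks)
```

VTT output is nearly identical except for a `WEBVTT` header and dots instead of commas in the timestamps.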
Because our speech models are still being refined, we're calling this a beta so that customer expectations are aligned correctly. You can read more details about the situations the model performs best in here, but the short explanation is that content like podcasts and voiceovers will currently return the best results. That's not to say that the transcription feature can't be used in other situations, but the technology is likely to make more mistakes. Over time we will expand the number of situations in which it performs well. We're building our models on both open source data and data that better reflects our customers' use-cases. You can help improve transcription by correcting any mistakes and then clicking the Learn button. We don't use your data without permission, so it is only used when the Learn button is clicked, and you can specify whether we can use the entire file or just the sentences you corrected. The audio of the file then gets cut up into pieces, given random filenames and paired with the transcript you provided for training. We don't share your original audio with anyone, and once it's part of the model the audio can't be extracted back in its original form. Another way you can help is by contributing your voice to Mozilla's Common Voice project. This is a public domain speech dataset used by Kollaborate, Firefox and a variety of other projects to create open and privacy-conscious speech technology. All voice data is useful, but especially when it comes from women and non-Americans, who are currently underrepresented in Common Voice's dataset. We've made many changes to improve the user interface, the most significant of which is the vertical navigation bar, which is designed to balance out the interface and place your content closer to the center of the screen. If you find yourself needing more horizontal space, hover over the Kollaborate logo at the top left and click the arrow that appears to collapse the navigation bar.
The navigation bar will automatically collapse by itself on small displays or when you resize your browser window. While video is the most popular format used on the site, our customers also upload many other file types such as MS Word and PowerPoint documents. The only way to view these files in the past was to download them. Now Kollaborate has a specific Document file type that supports file extensions like PDF, DOC, DOCX, XLS, XLSX, PPT, PPTX, ODT, ODP. These files will now show thumbnails and be viewable directly in the browser once converted by our servers. Leaving a comment on a document now tags that comment with the current page number and clicking on a comment will immediately take you to that page. Columns in List view on the Files page can now be resized or dragged to change their order. You can also right-click to hide them or show additional columns. You can now, for example, show the number of comments a file has next to its name and sort the list by this field. Advanced Search has been completely overhauled to be more powerful. Use criteria like file size, type or width to narrow down your search. You can even locate files with specific words in their transcript. Images can now be zoomed and navigated with a lot more control than before. Annotations can be drawn over the image at any zoom level. Versions can be given custom names like "Rough Cut" or "Fine Cut". To do this, click the purple number next to the filename to view all of the versions, then right-click a version and select Rename Version. Even though you can leave comments at specific timecode positions and draw over a video, sometimes that isn't enough to get your point across, so you can now attach files to comments. So you can say "I want the color to look like this" and attach a photo, rather than trying to describe it. 
Kollaborate is an essential cloud workflow platform that allows you to share files with clients and team members while integrating with Digital Rebellion apps and services. To find out more, see the overview or register for the free trial.
Building a consumer healthcare app that connects patients, doctors, and health & wellness companies Design and build a successful MVP leveraging healthcare data to benefit consumers, doctors, and DTC home health companies MVP, React, DevOps, Web UI Design, Ruby/Rails Development, Python, and PostgreSQL MVP launched to users on time and on budget Frontrow Health’s mission is to revolutionize the consumer health industry The Frontrow Health platform sets out to allow consumers to monetize their own healthcare data through shopping for personalized home health products vetted by their doctor. Frontrow Health’s solution will enable the first doctor-assisted healthcare shopping experience for consumers. It also gives doctors a way to earn additional revenue, as they onboard their own patients and review direct-to-consumer health products and services. The platform’s goal was to create a three-way connection between consumers, physicians, and healthcare companies. Frontrow Health began looking for a partner to help design and build the most impactful experience for their target users and selected thoughtbot because of the proven experience in uncovering a sound product strategy, defining user needs, building the most impactful MVP, and working collaboratively to maximize Frontrow Health’s chances of both product and business success. Frontrow Health had a long list of desired features and understood that priorities would need to be defined to reach their launch date. For this reason, we kicked off with a multi-week Product Design Sprint, allowing us to conduct research to help shape the right feature set and the corresponding user experience, and foundational technology choices for Beta launch. Our research included conducting user interviews that helped to uncover audience needs, motivations, and day-to-day workflows. 
At the close of the Sprint, the most crucial pieces of the Beta application were confirmed, as well as the relationships between the marketplace participants. Our roadmap and corresponding feature set were prioritized into three categories: Beta, MVP and Post-Launch. Moving forward with the design and development of the Beta product, thoughtbot provided additional support on the DevOps and business front, including helping them hire their own internal product team and onboard them successfully. The thoughtbot DevOps team focused on integrating the main application with the product recommendation algorithm in the necessary environments; from a business perspective, thoughtbot’s Product Strategists worked closely with Frontrow Health to finalize the brands and respective data that would be fully integrated and managed via a supporting admin tool. As our product took shape, the Frontrow Health team also grew. thoughtbot supported the hiring, onboarding and training of Frontrow Health’s newest (& first!) in-house designer and developers. Beta launch and beyond After a successful Beta launch, thoughtbot and Frontrow Health will continue the same agile process in tackling fast-follow features and revisiting MVP & Post-Launch goals, using customer research and industry analysis to make strategic decisions. As Frontrow Health grows, thoughtbot looks forward to watching their product evolve and helping them grow a stellar product team in parallel.
“if you notace i don't capitalise anything... its just the way i type.. i mean im not typeing a paper for college.. so i reely don't care about proper typeing im a programmer, not a typist... the only time i care about capitalise or spelling is if it affects my varable names or something like that... besides that i don't care... “ I started the following on a VB Programmer Site. The first 7 are mine. The rest came from others. Real programmers don't know how to spell words which are not keywords in their favorite language. A smart human can decrypt misspelled words. Real programmers do not document their code. If it was tuff for them to write, why should it be easy for anybody else to understand? Real programmers don’t use 4 digit years. They use 5 digits, and are prepared for the year 10,000 & beyond. Real Programmers never work 9 to 5. If any Real Programmers are around at 9:00 am, its because they were up all night. Real programmers cannot do arithmetic. That is what computers are used for. Real programmers use the most obscure esoteric techniques possible. It is more important to impress other programmers than to get the application working sooner. Real programmers do not care about the real world. Programming is a goal in itself, not a means to an end. Real VB programmers never say "It's Impossible", they just say that they haven't done it yet! Real Programmers never make mistakes, they just point out that it hasn't been debugged yet! Real Programmers have no use for managers. Managers are a necessary evil. They exist only to deal with personnel bozos, bean counters, senior planners, and other mental defectives. Real Programmers don't believe in schedules. Planners make up schedules. Managers "firm up" schedules. Frightened coders strive to meet schedules. Real Programmers ignore schedules. Real programmers don't test their code, that's what users are for. 
Real game programmers write 1000 more lines of code if it makes the game 0.5 FPS faster on a computer that nobody uses any more. Reel programmers dew knot say it can knot bee dun. Buy the weigh, I no that this is awl write because my spell checker says sew. A programmer would miss his brother’s wedding to finish a program. A real programmer would miss his own wedding to finish a program. Real programmers use 10 lines of code to do something that can be done in one, just because it looks more impressive. Either that or use 1 line of code to do something they ought to use 10 for, just because it looks more complicated. Real programmers use binary machine code. If it's good enough for the computer, it's good enough for them. Real programmers don't make minor errors: They just say they are quirks of the system. Real programmers don't distribute beta versions, they do not want to be confused with Microsoft. Real programmers think Windows is lame. Real programmers think all the natural food groups are covered by pizza and coke. Real programmers don’t come to Internet Forums and discuss what real programmers do and don’t do. Real Programmers don't read manuals. Reliance on a reference manual is the hallmark of the novice and the coward. Real Programmers never write memos on paper. They send memos via computer mail networks. Real programmers don't have time for "normal" friends If I related to about 80% of those, does that mean I can call myself a real programmer yet? God, help me.....
How can I display a HeaderText in a GridView horizontally? I know that normally a horizontal HeaderText doesn't exist in .NET, but maybe there is a workaround with HTML or something. I need it horizontal because vertical takes a lot of space in my GridView and I don't have much. Thanks for replying; my GridView is in a div with a vertical scrollbar, but I think it looks better when the user doesn't need to scroll for a long time to see the whole GridView. My table has a lot of columns, and the best way (if there is a way like this) is to put the headers horizontally so the user doesn't need to scroll a lot. OK, thank you, and I will be happy if you find something! On the other hand, I am wondering if we can put HTML code inside the HeaderText. Example: <asp:BoundField DataField="Name" HeaderText= <HTML Code <i>NAME</i> HTML> and inside this HTML something like: make this text horizontal. The problem is I am not very good at HTML and I am not sure if something like that (making text horizontal) exists; I only know we can change the text to bold with (<b></b>) or italic with (<i></i>)... I'm still fighting with getting multiple lines in a dynamic GridView. Searching around led me to think that if I can change the data during the RowCreated handler this just might work. gv2.RowCreated += new GridViewRowEventHandler(gv2_RowCreated); protected void gv2_RowCreated(object sender, GridViewRowEventArgs e) for (int i = 0; i < 5; i++) e.Row.Cells[i].Text.Replace(Environment.NewLine, "<br />"); Whenever I run the debugger, the new event handler line adds the handler, but it never runs through the block of code. I'd imagine that during the DataBind the rows are being created, so why isn't the system seeing this event?
"You're damned if you do, and you're damned if you dont" - Bart Simpson Here's the code: public partial class Subpgs_Warehouse_ShippingCalendar : System.Web.UI.Page public Dictionary<string, string> content1 = new Dictionary<string, string>(); date = -6; date = -7; date = -8; date = -9; date = -10; date = -11; date = -12; int datelist = date; DataTable Table1 = new DataTable("last_week"); DataTable Table2 = new DataTable("this_week"); DataTable Table3 = new DataTable("next_week"); DataRow dr1 = Table1.NewRow(); for (int i = 0; i < 5; i++) Table1.Columns.Add(DateTime.Today.AddDays(date).DayOfWeek.ToString() + " " + DateTime.Today.AddDays(date).ToShortDateString(), typeof(string)); dr1[i] = content1[DateTime.Today.AddDays(date).ToShortDateString()]; dr1[i] = "No Data supplied"; Hey Guys, I got it working. It was actually a combination of both your responses. I did have to move everything to the Page_PreLoad() and also do a RowDataBound event handler instead of RowCreated. I also had to change the event handler to this after it dawned on me that I'm replacing but not specifying that's what I want to be in the text area! I developed a project Webapplication1 with one asp page Webform1.aspx. As you know when webapplication projects compiled it makes a dll for whole application. Compiled dll resides in Bin folder of webapplication1 project. I also developed a Web site project by the name of WebSite1. Include webapplication1's compiled dll reference in website1 project. Now I want to show webform1.aspx from website1, while webform1.aspx exist in webapplication1 project. I have created the instance of webapplication1.webform1 in website1 project but unable to load or show the form.
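For completeness, the working version described in the thread — moving to RowDataBound and assigning the result of Replace back (strings are immutable, so Replace alone does not change Cells[i].Text) — might look roughly like this sketch; the names come from the posts, and the RowType guard is an assumption:

```csharp
gv2.RowDataBound += new GridViewRowEventHandler(gv2_RowDataBound);

protected void gv2_RowDataBound(object sender, GridViewRowEventArgs e)
{
    // Cells have their data-bound text by the time RowDataBound fires.
    if (e.Row.RowType == DataControlRowType.DataRow)
    {
        for (int i = 0; i < e.Row.Cells.Count; i++)
        {
            // Assign the result back; Replace returns a new string.
            e.Row.Cells[i].Text =
                e.Row.Cells[i].Text.Replace(Environment.NewLine, "<br />");
        }
    }
}
```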
<?php // Get CPT info - render into JSON ?> <div class="wrap"> <h2>Map Locations</h2> <p>This button should be run if a new store location is added, removed or changes location info</p> <form method="post" action="<?php echo admin_url( '/options-general.php?page=locations.php'); ?>"> <?php wp_nonce_field('updatemap_action','updatemap_json'); ?> <p class="submit"> <input type="submit" class="button-primary" value="<?php _e('Update Listings') ?>" name="submitmap" /> </p> </form> </div> <?php if(isset($_POST['submitmap'])) { if ( !empty($_POST['submitmap']) && check_admin_referer( 'updatemap_action', 'updatemap_json' ) && current_user_can('activate_plugins')) { wsl_run_json_request(); } else { wp_die('Security check fail'); } } // WP_Query arguments for locations // Writes to JSON file function wsl_run_json_request () { $args = array( 'posts_per_page' => -1, 'post_type' => 'centers', 'order' => 'ASC', // 'orderby' expects a field name; 'ASC'/'DESC' belongs in 'order' ); $jsonquery = new WP_Query( $args ); $loc_array = array(); if ( $jsonquery->have_posts() ) { while ( $jsonquery->have_posts() ) { $jsonquery->the_post(); $wsl_id = $jsonquery->current_post + 1; $custom_fields = get_post_custom(); $wsl_shop = $custom_fields[ '_cmb_shopname' ][ 0 ]; $wsl_adr = $custom_fields[ '_cmb_address' ][ 0 ]; $wsl_city = $custom_fields[ '_cmb_city' ][ 0 ]; $wsl_tel = $custom_fields[ '_cmb_telephone' ][ 0 ]; $wsl_lat = $custom_fields[ '_cmb_lat' ][ 0 ]; $wsl_long = $custom_fields[ '_cmb_long' ][ 0 ]; $wsl_url = get_permalink(); $loc_array[ ] = array( 'id' => $wsl_id, 'name' => $wsl_shop, 'lat' => $wsl_lat, 'lng' => $wsl_long, 'address' => $wsl_adr, 'city' => $wsl_city, 'phone' => $wsl_tel, 'web' => $wsl_url ); //output on page for visual + check echo $wsl_shop . " &#10004;"; echo '<br>'; } //write to file used by jquery location finder $json_path = plugin_dir_path( __FILE__ ) .
'views/results.json'; file_put_contents( $json_path, json_encode( $loc_array ) ); } else { echo 'Sorry there was an error'; } // Restore original Post Data wp_reset_postdata(); }
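Given the array built in wsl_run_json_request(), the generated views/results.json would contain entries shaped like this (the keys come from the code above; the values are purely illustrative):

```json
[
  {
    "id": 1,
    "name": "Example Shop",
    "lat": "51.5074",
    "lng": "-0.1278",
    "address": "1 High Street",
    "city": "London",
    "phone": "020 7946 0000",
    "web": "https://example.com/centers/example-shop/"
  }
]
```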
Try using combinations of different expressions and see what you get. Contain dates that fall during the current week: DatePart("ww", [SalesDate]) = DatePart("ww", Date()) And Year([SalesDate]) = Year(Date()) returns records of transactions that took place during the current week. Date is a function that returns the current date, and [BirthDate] refers to the BirthDate field in the underlying table. Enter your selection criteria on the Criteria line and the Or line, as needed. Click to choose the next table or query on which you want to base your query. It's like me saying "I've had Iif(it's Friday, (5-4), (6/2)) cups of coffee today". Read it again then try it out - it does make sense eventually! Each row represents an independent set of criteria. This example will find all records for contacts in towns other than London. Like [Prompt] & "*" returns all records that begin with the value you enter. Behind the scenes: the query that populates the reports. Like most good Access developers, I use queries to pull the data for my reports. This topic only covers select queries. On the next page of the wizard, select First row contains column headings, and then click Next. For more information, see the articles listed in the See also section. Note: To specify more than two alternate criteria sets, use the rows below the Or row. Sample data: CustomerID Company Address City StateOrProvince PostalCode CountryOrRegion Phone Contact BirthDate 1 Baldwin Museum of Science 1 Main St. London NS1 EW2 UK (171) 555-0125 Zoltan Harmuth 16-Jun-67 5 Fourth Coffee London W1J 8QB UK (171) 555-0165 Julian Price 09-Aug-71 6 Consolidated Messenger 3123 75th St.
Each field in a table has a specific data type, such as Number, Text, or Date/Time. Retrieve multiple columns: you can use an Access query to retrieve multiple columns of data. Access retrieves the columns you chose and displays the rows in the order you specified. This example will display all the records with entries for the current year in the Invoice Date field. Instead of choosing the tablename.* option on the Field line in Query Design view, choose the name of the field you want to retrieve. The sequence Like "?en" finds all three-character field entries where the second and third characters are en. Enter your selection criteria, if necessary (not applicable in this example). Example: If you enter S, Access returns all records that begin with S. > [Prompt] Note: You can also use < (less than), <= (less than or equal to). Type No to include records where the check box is not selected. Click the Run button. Copy the sample table provided in the previous section and paste it into the first cell of the first worksheet. If you don't get the result you were expecting, read the grid a line at a time (which is what Access does) and see if it makes sense. Like "*Text" — to match text ending with a particular letter or string, type an asterisk followed by the letter or string of text. Repeat this step until you enter all field names. Open tables or queries in Query Design view: a query can be based on tables or on other queries.
Only records that meet both criteria will be included in the result. If today's date is 2/2/2006, you see records for the year 2006. You can often get the same results by using mathematical operators such as greater than (>) and less than (<). If you supply the criteria >5 And <3, any record where there is at least one value greater than 5 and one value less than 3 will match. And Not "Text": the Not expression can be used in combination with other expressions, when it becomes And Not followed by the text you want to exclude from your search. Under Tables/Queries, click the table that has the data that you want to use. To create a parameter query: Open a table or query in Query Design view. Enter the criteria on which you want to base your new table. For example, you want all records where the State is equal to "DE" or the Last Name is equal to Smith. Contain a date that belongs to next year: Year([SalesDate]) = Year(Date()) + 1 returns records of transactions with next year's date. This topic explains how to create a simple select query that searches the data in a single table. Switch to Datasheet view to see the results. If today's date is 2/2/2006, you see records for the period from Jan 2, 2006. The Make Table dialog box appears. This means that you only have to define criteria for those fields you are interested in. Say, for example, I wanted to find staff with military experience; I want to be able to create a query that goes through all employees and only shows the matching entries. To save a query: Click the Save button on the Quick Access toolbar. I'm using the following type of syntax for the criteria.
This example will display all the records that contain UK or USA or France in the Country field. Type the name you want to give the query and then click OK. By default, Access applies the name of the worksheet to your new table. Note that when switching between different comparison expressions in SQL you can use Iif (strictly, Iif is Access VBA, not SQL, but Access allows you to use VBA to an extent in queries). Open the query in Design view. You can also use the Between operator to filter for a range of values, including the end points. Because the Age field evaluates to a number, it supports the Sum, Average, Count, Maximum, Minimum, Standard Deviation and Variance functions.
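The IN-list and Between criteria shown here translate directly into SQL; the following sqlite3 sketch (hypothetical table and data) illustrates both. Note that SQLite's LIKE wildcards are % and _, whereas Access uses * and ?:

```python
import sqlite3

# In-memory demo table standing in for an Access table (hypothetical data).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Orders (Country TEXT, Age INTEGER)")
conn.executemany("INSERT INTO Orders VALUES (?, ?)",
                 [("UK", 25), ("USA", 40), ("France", 33), ("Spain", 19)])

# Access: Country In ("UK","USA","France")  ->  SQL IN list
rows = conn.execute(
    "SELECT Country FROM Orders WHERE Country IN ('UK','USA','France')"
).fetchall()

# Access: Between 20 And 35  ->  SQL BETWEEN, end points included
in_range = conn.execute(
    "SELECT Country, Age FROM Orders WHERE Age BETWEEN 20 AND 35 ORDER BY Age"
).fetchall()
print(in_range)  # -> [('UK', 25), ('France', 33)]
```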
Odoo Mobile: the new Odoo Mobile app for iOS provides access to all Odoo applications directly from your mobile phone. Optimized for interfaces on any iOS device, Odoo Mobile provides the next level of flexibility in your business management software. Odoo is a suite of open source business apps that cover all your company needs: CRM, eCommerce, accounting, inventory, point of sale, project management, etc. Odoo's unique value proposition is to … The much-awaited Odoo 12 got released at the onset of October without disappointing any of its end users. One of the significant features of Odoo …

Docker installation: run the following command to pull Odoo and create an instance: sudo docker run -p 0.0.0.0:8069:8069 --name odoo --link db:db -t odoo. After this command, you can see that the Odoo server is running on 0.0.0.0:8069 (port 8069). To stop or start the Odoo service: sudo docker stop odoo / sudo docker start odoo.

Steps for Odoo 13 installation on Ubuntu: 1) Update the apt source lists: sudo apt update. 2) Add a new system user named "odoo" that will own and run the application: sudo adduser --system --home=/opt/odoo --group odoo. In short, odoo is the new user created in PostgreSQL; this user is the owner of all the tables. Make this new user a superuser. 3) Install … In the install script, OE_PORT is the port that Odoo should run on, for example 8069; OE_VERSION is the Odoo version to install, for example 13.0 for Odoo V13; and IS_ENTERPRISE will install the Enterprise version on top of 13.0 if you set it to True (set it to False if you want the Community version of Odoo 13).

To install a module, you need to: download the module and add it to your Odoo addons folder. Afterward, log on to your Odoo server and go to the Apps menu. I tried to check the Odoo 13 … I tried to install a module that has version 12 on my Odoo 13 and did some removals, like api.multi and other stuff, but the problem is this: "account.invoice_form does not exist". I don't understand it either.

Follow the steps below to remove Odoo from Ubuntu: stop the Odoo server: sudo service odoo-server stop. Remove all Odoo files: sudo rm -R /opt/odoo. Now we can remove the configuration files: sudo rm -f /etc/odoo-server.conf and sudo rm -f /etc/odoo.conf. Then run update-rc.d -f odoo …

Bitnami Odoo Stack Virtual Machines: Bitnami Virtual Machines contain a minimal Linux operating system with Odoo installed and configured; using the Bitnami Virtual Machine image requires a hypervisor. OpenERP All-In-One Installation: each time a new release of OpenERP is made, OpenERP supplies a complete Windows auto-installer for it, which contains all of the components you need. I just went through the setup on two systems: one is Mac OS X El Capitan 10.11.2 and the other is my primary OS, Ubuntu 15.04 (where things went much easier, but maybe that is just because I use Ubuntu). Select Install macOS (or Install OS X) from the Utilities window, then click Continue and follow the onscreen instructions; for more information about the createinstallmedia …
Community documentation on setting up a local OSF instance. I think it would be very useful to have a document with the best practices of people setting up a local OSF instance at their home institute. In the issue list I noticed several people have worked on this in the past (https://github.com/CenterForOpenScience/osf.io/issues/6248, https://github.com/CenterForOpenScience/osf.io/issues/7219, https://github.com/CenterForOpenScience/osf.io/issues/6255, https://github.com/CenterForOpenScience/osf.io/issues/7347, https://github.com/CenterForOpenScience/osf.io/issues/7805, https://github.com/CenterForOpenScience/osf.io/issues/8493). The Docker Compose document is a good starting point for a local installation. However, my experience is that it does not cover specific application configurations, like when to create a local.py and what values to change. For me it is also not clear how to set up working with files on a local installation. I guess this goes via WaterButler, but any input or best practices on this topic would be highly appreciated. Also, which storage option is used as the default (Amazon S3, or another object storage system like Swift, Ceph or Minio)? @umardraz, @mfraezz, @yacchin1205, @sloria, @mattvw, @antonwats, @jpkeith4, @HiroyukiNaito, I would appreciate it if you could share your experiences with setting up a local OSF installation. Did you succeed in setting up an instance? What was the most difficult part? Is it still in use?

Unfortunately, we don't have comprehensive documentation for deploying the OSF. We would like to, but don't have the resources right now to support users' deployments. That said, if your institute has requirements that data be stored in their region, your best bet is to use one of our hosted storage regions, or connect your Amazon S3 or ownCloud to your OSF projects.

I think that the OSF repository on GitHub is especially set up for developing in a local environment.
Fortunately, I could build it on a single server with a different domain name for each container service, and I am still investigating how to build a Kubernetes environment. However, I still have some difficulties, because our infrastructure environment is behind a proxy, which often obstructs access among the containers. In addition, Ember.js services such as Preprints, Registries and also My Quick Files still do not function in our environment. To fix this, I need to do a lot of tough investigation myself, into both the tech side and the OSF specifications, and enormous help from OSF experts might be needed. Through this experience, I would like to contribute some corrections of code for building the environment someday as pull requests on GitHub.

@BRosenblatt and @HiroyukiNaito, thanks for your answers. I understand that you need people to maintain documentation. It would be very helpful if this would happen in the future. The OSF is a great open-source platform, and it would be great if it could be deployed across the globe; currently the good documentation needed for that is missing. I got the main interface running, but did not look into other services like Preprints and Registries. Too bad you haven't got those up and running.

I am facing difficulties configuring the default OSFStorage. I managed to connect an ownCloud/Nextcloud addon and upload/download files. On the osf.io website the default storage provider is Amazon, I think. For local use I am not sure what values to change; I guess it is something in addons/osfstorage/settings/local.py. For testing purposes I would like to store the files on the local filesystem, but it would also be good to know what config values to set to connect it to Amazon. Also, I had issues with network connectivity: I set my own local domain name in /etc/hosts and had to change WATERBUTLER_URL= in .docker-compose.env to this domain in order to get the addons working. If we get things working, we can maybe give the documentation a start with our experiences.
That is my intention for opening this bug report. I would also like to use OSF on premise. We are not allowed to share files with the public, and not allowed to store files anywhere other than on our own servers. If we, as a community, could develop documentation on how to set up OSF on premise, that would be awesome. Are there any shared resources yet?

I would just like to leave a +1 on this. Hosting OSF locally, and maybe even federating it with other installations, would be a total game changer, as many institutions would like to manage their projects but are not willing or allowed to store anything (including metadata of projects that are not visible to the public yet) outside their premises.

I have a half-baked dream of spinning up software related to Dataverse (you can deposit data from OSF into Dataverse) using the Kubernetes config in https://github.com/IQSS/dataverse-kubernetes. I'm coming at this from the Dataverse developer perspective of wanting to ensure that integrations are tested regularly. Right now I think we rely on users to tell us if we broke something. 😄 I got this idea of spinning up related software in our ecosystem in Kubernetes from @craig-willis, who created https://www.workbench.nationaldataservice.org (described at http://www.nationaldataservice.org/platform/workbench.html), which has "specs" for various software (including Dataverse, CKAN, Globus, Jupyter, etc.) at https://github.com/nds-org/ndslabs-specs. I have no idea if anyone has any time to work on any of this, though. It's just a thought.

I agree that docs are crucial. Maybe someone from the OSF community could apply for https://developers.google.com/season-of-docs/

Hi, @RightInTwo !!! 👋 I haven't tried it, but would using the OSF helm-charts (https://github.com/CenterForOpenScience/helm-charts) make a custom installation easier?

I would also be interested in this feature. Let's at least collect our success stories of local installations somewhere. Any news?
My organisation would be interested in some pointers.

@CaptainSifff if your organization is interested in running a personal OSF instance, I suggest they look at the implementation for RDM-osf.io. That's essentially the only large group that has forked our project that is being actively used and is open source. If you'd like to use just a portion of our functionality, I'd recommend integrating with our services as is, such as WaterButler and modular-file-renderer, or forking individual services. I'd also strongly recommend using our new OAuth2 capabilities to utilize our file storage and REST API. We are somewhat cagey about writing precise instructions for setting up osf.io because the data model and external integrations we are writing change frequently, and we simply don't have the resources to walk people through the process individually. But if you are representing an institution and want more details, please contact us at<EMAIL_ADDRESS>and we can talk about what is best for your use case. Mention that you had a conversation with John Tordoff and name the organization you represent, and we will answer your questions to the best of our ability.
As you might know, the autosave indicator in Business Central is shown on the right side of the card on screen and changes values when the computer communicates with the server and saves the data. The indicator can display Saving or Saved depending on the current state. In case a data validation error appears, it would also display Not saved.

Dynamics 365 Business Central: save the report dataset to Excel from the request page (Report Inspector). As you might know, in the Business Central and NAV Windows clients we can view and save the report dataset from the preview page, similar to the page inspector. For the development of complex reports, it is very important to be able to analyze the data before printing. This would be very useful for creating and debugging RDLC reports.

Dynamics 365 Business Central SaaS: save a file to an SFTP server (the Logic App way). Yesterday I provided a solution for saving a file generated directly from a Dynamics 365 Business Central SaaS tenant to an SFTP server by using Azure Functions. I have to admit that this is my preferred way, because it gives me more freedom, scalability and adaptability. But obviously, that's not the only possible way to do so.

How To: use the Advanced option to save report settings. While I was working with one of our customers, there was a requirement that they needed to save settings for a specific filter selection. I went through Dynamics NAV to figure out how to build this solution, going to Report Settings from the search.

Procedure to save a record as a report PDF document in a particular path in Dynamics NAV. In this blog, I will be giving the procedure to save a selected Posted Sales Invoice as a PDF document in a particular path. Any record which has a related report can be saved as a PDF document.

How to save commonly used filters in Dynamics NAV: often when we work in Dynamics NAV, we use the same filters and settings over and over again.
You might have a customer list that you pull to review every month, or a set of invoices you need to print to PDF weekly, or a set of GL accounts in your Chart of Accounts that you frequently use, and for all of them you have to enter the filters each time.

NAV 2017: save report settings to make running a report with different filters fast. Microsoft Dynamics NAV 2017 introduces another way to get your work done faster, with the ability to save settings for reports. This means that if you run a report with different filter settings, you can save those settings and retrieve them when you print the report. No more having to fill in the settings you want on the report every time you run it.
Error and warning when plotting graph

Using Julia 0.3 (1 day old), I got the following message when running plot(g): WARNING: writesto(cmd,args...) is deprecated, use open(cmd,"w",args...) instead. ERROR: could not spawn `neato -Tx11`: no such file or directory (ENOENT)

I'm having the following error when trying to run the first example code: julia> using Graphs INFO: Recompiling stale cache file /home/kaslu/.julia/lib/v0.4/DataStructures.ji for module DataStructures. julia> g = simple_graph(3) Directed Graph (3 vertices, 0 edges) julia> add_edge!(g, 1, 2) edge [1]: 1 -- 2 julia> add_edge!(g, 3, 2) edge [2]: 3 -- 2 julia> add_edge!(g, 3, 1) edge [3]: 3 -- 1 julia> plot(g) ERROR: could not spawn `neato -Tx11`: no such file or directory (ENOENT) Is it related to this issue? My Julia version is julia> versioninfo() Julia Version 0.4.5 Commit 2ac304d (2016-03-18 00:58 UTC) Platform Info: System: Linux (x86_64-unknown-linux-gnu) CPU: Intel(R) Core(TM) i3-3110M CPU @ 2.40GHz WORD_SIZE: 64 BLAS: libopenblas (DYNAMIC_ARCH NO_AFFINITY Sandybridge) LAPACK: libopenblas LIBM: libm LLVM: libLLVM-3.3

I'm having exactly the same error as @kaslusimoes using the example code linked in the post above. Here is my version info: julia> versioninfo() Julia Version 0.4.5 Commit 2ac304d (2016-03-18 00:58 UTC) Platform Info: System: Darwin (x86_64-apple-darwin13.4.0) CPU: Intel(R) Core(TM) i7-4870HQ CPU @ 2.50GHz WORD_SIZE: 64 BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Haswell) LAPACK: libopenblas64_ LIBM: libopenlibm LLVM: libLLVM-3.3 Does anyone have a fix?

@paulstey it seems to me that installing Graphviz, as @pozorvlak said, solves the problem. I guess we could try to make a pull request changing the default error behaviour and explaining this situation whenever one encounters such an issue. I'll try it myself later.

@kaslusimoes yes, that did seem to be the issue. But now I'm getting a different error message concerning x11. julia> plot(g) Format: "x11" not recognized.
Use one of: bmp canon cgimage cmap cmapx cmapx_np dot eps exr fig gif gv icns ico imap imap_np ismap jp2 jpe jpeg jpg pct pdf pic pict plain plain-ext png pov ps ps2 psd sgi svg svgz tga tif tiff tk vml vmlz xdot xdot1.2 xdot1.4 ERROR: write: broken pipe (EPIPE) in yieldto at /Applications/Julia-0.4.5.app/Contents/Resources/julia/lib/julia/sys.dylib in wait at /Applications/Julia-0.4.5.app/Contents/Resources/julia/lib/julia/sys.dylib in stream_wait at /Applications/Julia-0.4.5.app/Contents/Resources/julia/lib/julia/sys.dylib in uv_write at stream.jl:962 in buffer_or_write at stream.jl:972 in write at stream.jl:1011 in write at ascii.jl:99 in to_dot at /Users/pstey/.julia/v0.4/Graphs/src/dot.jl:26 in plot at /Users/pstey/.julia/v0.4/Graphs/src/dot.jl:92 Did you have this issue? Sorry, no idea =/ See #172 Should this issue be resolved now? If I try to plot a graph, I get ERROR: could not spawn neato -Tx11: no such file or directory (ENOENT) Installing GraphViz fails for me but that is a different issue. We could make the warning clearer, I guess, but that's already pretty penetrable (and Googleable). I'd be in favour of closing this, but I'm no longer a maintainer - thoughts, everyone?
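Two distinct failures appear in this thread: neato missing from the PATH (the ENOENT error) and a Graphviz build without the x11 renderer. A small Python sketch of the defensive checks a wrapper could make; the helper and its fallback are hypothetical, and the supported-format set is abbreviated from the error message above:

```python
import shutil

# The second error above means this Graphviz build has no "x11" renderer.
# A wrapper can fall back to a file format that is actually supported
# (abbreviated set, taken from the "Use one of: ..." message).
SUPPORTED = {"png", "pdf", "svg", "dot", "ps", "gif", "jpg"}

def pick_format(requested, supported=SUPPORTED, fallback="png"):
    """Return `requested` if the renderer supports it, else a fallback."""
    return requested if requested in supported else fallback

# `neato` missing entirely gives the ENOENT error; check the PATH first.
neato_path = shutil.which("neato")  # None if Graphviz is not installed

print(pick_format("x11"))  # -> png (x11 is not in the supported set)
print(pick_format("svg"))  # -> svg
```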
Version control systems are an essential tool in software development, allowing developers to collaborate on projects, track changes in code, and even roll back changes if needed. Git is one of the most popular version control systems in use today and is widely used in software development projects. Git is a distributed version control system, which means that it can be used in both centralized and decentralized ways. A centralized version control system stores all changes in a centralized location, while a distributed version control system stores changes in a distributed manner, with each copy of the code being tracked independently. This makes it easier for developers to collaborate on projects, as each user can have a local copy of the code, and any changes they make can be shared with the rest of the team. Git also allows developers to easily track changes in their code and to quickly roll back any changes that may cause problems. It also helps ensure that any changes made to the code are up-to-date and that all developers are working from the same version. Git is also a popular choice for open-source projects, as it is easy to set up and use, and is widely supported. This makes it a great choice for developers who are looking to collaborate on open-source projects, as they can easily share their code with the rest of the team. Overall, Git is a powerful and widely used version control system that can be used for a variety of purposes. If you're looking to get started with version control, then Git is a great option to consider.

What is version control? The value of version control in the success of high-performing development and DevOps teams. The process of monitoring and controlling modifications to computer programs is known as "version control" or "source control." A version control system is a software tool used by development teams to record and track all of the different versions of their code.
Due to the increased speed of modern development environments, software development teams greatly benefit from using version control systems to streamline their processes and save time. DevOps teams can benefit greatly from them since they shorten the time it takes to create new features and improve the rate at which those features are successfully deployed. All code changes are tracked in a unique database by the version control program. In the event of a blunder, the development team can roll back time and examine the code’s evolution to determine where the problem originated and how to solve it with as little fuss as possible. The source code is the project’s crown jewel, or most valuable asset, and must be safeguarded. The source code is a treasure trove of information that most software teams have spent countless hours learning and perfecting about the issue area. The use of version control safeguards code from both catastrophic failure and the gradual deterioration caused by human mistakes and unintended consequences. As a team, software engineers are constantly adding new features and improving old ones in the form of source code. A file tree is a hierarchical folder structure used to store and manage a software project’s, app’s, or component’s source code. It’s not uncommon for many team members to make modifications to the same file tree at the same time, with one developer working on a new feature and another fixing an issue completely unrelated to the first. By keeping tabs on every change made by every member of the team, version control aids in finding solutions to these sorts of issues and reducing the likelihood of conflicts among members’ ongoing efforts. It’s possible for concurrently working developers to make changes to different parts of the software that are incompatible with one another. To avoid slowing down the rest of the team, this issue needs to be identified and fixed in a methodical fashion. 
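The "unique database" of code changes and the roll-back idea described above can be illustrated with a toy snapshot store. This is only a conceptual sketch, not how Git actually stores history (Git uses content-addressed blob, tree and commit objects):

```python
import hashlib

# Toy snapshot store illustrating a change log you can roll back through.
class TinyVCS:
    def __init__(self):
        self.history = []  # list of (commit_id, author, message, content)

    def commit(self, author, message, content):
        # Derive a short id from the position and content of the change.
        commit_id = hashlib.sha1(
            (str(len(self.history)) + content).encode()
        ).hexdigest()[:8]
        self.history.append((commit_id, author, message, content))
        return commit_id

    def rollback(self):
        """Drop the latest change and return the previous content."""
        self.history.pop()
        return self.history[-1][3] if self.history else None

vcs = TinyVCS()
vcs.commit("alice", "initial version", "v1")
vcs.commit("bob", "broken change", "v2-broken")
restored = vcs.rollback()   # examine history, back out the bad change
print(restored)  # -> v1
```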
In addition, new software should never be relied upon before it has been thoroughly tested, as every change to the code can result in unexpected errors. So, until a new version is complete, testing and development continue simultaneously. In a perfect world, a version control system would not dictate a specific development process, but rather would accommodate different methods. It would be ideal if it could be used regardless of the developer's preferred OS or tool set. Instead of the annoying and cumbersome process of file locking (giving the green light to one developer at the risk of halting the development of others), great version control systems promote a seamless and continuous flow of changes to the code. Without version control, software teams frequently run into issues, such as not knowing which changes have been made available to users, or accidentally making changes that are incompatible between two unrelated pieces of work and then having to spend time untangling and reworking them. If you're a developer who hasn't used version control before, you could have a story about adding a "final" or "latest" version to a file and then having to deal with a new final version. Maybe you've used comments to hide chunks of code because you're afraid of deleting them and then realizing you need them later. The use of version control can help with these issues. Today's successful software development teams cannot function without using version control tools. Those software professionals who are used to using a powerful version control system on team projects are likely to see the immense benefit version control provides even on smaller projects they work on alone. After becoming acclimated to the many advantages of version control systems, many programmers refuse to consider completing any project, software or otherwise, without one.
The advantages of using a version control system. Effective software and DevOps teams always use version control software. As the size of a software development team increases, version control ensures that the team's productivity and responsiveness remain high. There has been a lot of development in the field of version control systems (VCS) over the past few decades, and while some are superior to others, the standard has risen overall. Version control systems are also known as source code management (SCM) or revision control systems (RCS). Git is one of the most well-known version control systems currently available. More on the DVCS classification and how Git fits into it will follow. Git, like many of today's other popular VCS systems, is available to the public at no cost. The following are the main advantages of using version control, and they apply regardless of the system chosen.

1. An exhaustive log of all of the files' long-term modifications. This includes every adjustment that has been made by a large number of people over time. Any and all alterations, such as adding, removing, or altering files, are considered to be changes. Various version control systems have varying degrees of efficiency when it comes to relocating and renaming files. The author, date, and notes on why each change was made should also be included in this log. If you need to address an issue in an older version of the software, having access to the full history is essential for determining its cause and whether a certain change introduced the problem. Anything that isn't the most recent version of the software is called an "older version" if the software is still actively being developed.

2.
Splitting and joining into new entities. It goes without saying that team members should work in parallel, but even solo developers can reap the benefits of working on separate streams of updates. Using VCS tools, developers can create a "branch", a new independent stream of work, and later merge it back together while ensuring that changes made on different branches do not clash. Many software development groups use branching to organize their work. When deciding how to take advantage of VCS branching and merging features, teams have a wide range of workflows from which to select.

3. Traceability. The ability to log each modification to the program, link it to bug-tracking and project-management tools like Jira, and annotate each modification with a note explaining the rationale behind it can aid not just in forensics but also in determining the underlying cause of a problem. With the annotated history of the code at their disposal, developers can read the code, understand what it is doing and why it was designed that way, and make modifications that are accurate and in harmony with the intended long-term design of the system. This is especially critical when attempting to accurately predict future development work, and is often essential when dealing with legacy code. Though it is technically feasible to avoid version control altogether, any serious software development team would be foolish to do so. So the question is not whether to use version control, but which version control system to use.
Frequency result of FFT for data that does not start at t=0

I know there are already a lot of questions about frequency bins in FFT, but I have one that doesn't really fit the ones I read. I have time-dependent data where the time does not start at t=0 but later. The question is how to define the frequency bins of the result in this case. The common convention, as I understand it, is to assign a value of k/tMax to the k-th bin, which makes sense if t0=0. But if I use this approach on my data, the result of an FFT followed by an inverse FFT is phase-shifted relative to the original data. The same problem then arises, of course, in the iFFT part. Is it required to pad the data to the left with zeros in order to fill it up to t=0, or is there another way? Also, I compared my results to results I got with the Origin software for the same data, and the binning is slightly different: my maximum time in the data is 1.9792, and I calculate for the first bin a frequency of 0.50525, but Origin gets a value of 0.49736. Isn't it just 1/1.9792 = 0.50525?

If you do an FFT on some data segment, then you consider this data segment as part of a periodic function. Assume you have a segment of $N$ samples in an index segment $[a:b-1]$ starting at time index $a=jN+k$ and ending at $b-1$, where $b=a+N=(j+1)N+k$. (Added: this means $j=a \;div\; N$ and $k=a\; mod\;N=a-jN$.) Then the amplitudes of the spectrum with the correct phase are obtained by considering the segment $[0:N-1]$ of the periodically continued segment. In practical terms this means that you split the segment before index $(j+1)N$ and join the parts in the reverse order: $[(j+1)N:(j+1)N+k-1]=[(j+1)N:b-1]$ mapped to $[0:k-1]$ first, and $[jN+k:(j+1)N-1]=[a:(j+1)N-1]$ mapped to $[k:N-1]$ after that.

Always considering the sample to be a periodic function is a good point. That explains the result being a phase shift. I got a little confused after that: could you explain what $j$ and $k$ are? Thank you.
$j$ and $k$ are quotient and remainder of the integer division of the starting index $a$ by the period $N$.
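The split-and-rejoin described in the answer is simply a circular shift by $k = a \bmod N$. A small NumPy sketch (the test signal, $N$, and $a$ are hypothetical choices for illustration) shows that re-phasing the segment this way reproduces the spectrum of a segment starting at t=0:

```python
import numpy as np

# A segment of a periodic signal starting at sample index a = j*N + k
# can be re-phased by a circular shift of k = a mod N, after which the
# FFT phases match those of a segment starting at index 0.
N = 16
a = 5                  # segment starts at sample index 5, so k = 5
n = np.arange(N)
signal = lambda m: np.cos(2 * np.pi * 3 * m / N)  # period divides N

segment = signal(a + n)        # samples taken from t = a onward
reference = signal(n)          # the same signal sampled from t = 0

k = a % N
rephased = np.roll(segment, k)  # move the "wrapped" tail to the front

assert np.allclose(rephased, reference)
assert np.allclose(np.fft.fft(rephased), np.fft.fft(reference))
```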
Unicode & IETF
ken.whistler at sap.com Tue Aug 12 00:48:12 CEST 2014

> The other irony about this is that, if you want consistent and
> easily predictable behavior, you should be asking for exactly
> what we thought we were promised -- no new precomposed
> characters under normal circumstances unless there were
> characters missing from the combining sequences that could
> otherwise be used to form them and, if any exception was made to
> add one anyway, it should decompose back to the relevant
> combining sequence.

And the *other* other irony about this is that that is exactly what you have gotten! The relevant data file to watch for this carries all the information about the "funny cases". The last time an exception of the type you are talking about was made -- a new precomposed character being added, for which a canonical combining decomposition had to be added simultaneously to "undo" the encoding as a single code point, so that the NFC form was decomposed rather than composed -- was for U+2ADC FORKING. That went into Unicode 3.2, in March *2002*, 12+ years ago.

The claim in the Unicode Standard for cases like U+08A1 beh-with-hamza and U+A794 c-with-palatal-hook is that these are *ATOMIC* characters, and not actually precomposed characters. Therefore they do not fall afoul of the stability issue for the formal definition of Unicode normalization. They are encoded because they are *not* equivalent to what might naturally be taken as a pre-existing combining sequence.

This is an old, old, old, old issue for the Unicode Standard. The line has to be drawn somewhere, and it is not at all self-evident, contrary to what some people seem to think. Cases like adding acute accents to letters clearly fall on one side of the line. Cases like making new letters by erasing parts of existing letters or turning pieces around clearly fall on the other side of the line. (Meditate, perhaps, on U+1D09 LATIN SMALL LETTER TURNED I.)
Cases which make new letters by adding ostensibly systematic diacritics, which then fuse unpredictably into the forms of the letters, are the middle ground, where the UTC had to place a stake in the ground and decide whether to treat them all as atomic units, despite their historic derivative complexity, or to try to deal with them as ligated display units based on encoded sequences.

And just to make things more complicated, the encoding for Arabic was/is systematically distinct from that for Latin. For Latin, the *default* is that if the diacritic is visually separated from the base letter, then a precomposition is presumed. However, for Arabic, because of the early history of how Arabic was encoded on computers, the line is drawn differently: any skeleton + ijam (diacritic) combination is encoded atomically, and is not treated as a sequence. The known exceptions to that principle are well documented and historically motivated.

This is noticeably messy, of course, both because writing systems are messy and because the history of computer character encoding itself is messy and full of hacks. However, two points should stand out for the takeaway here:

1. It is not possible to have a completely internally consistent diacritic encoding story for Latin (and scripts like it). Quit trying to have that pony!

2. It is not possible to have a completely consistent model for how diacritics are handled between Latin and Arabic. Quit trying to have that pony, too!

What we have in Unicode instead is a perfectly serviceable donkey that can move your cart to market.
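Both behaviours described in this mail can be checked directly with Python's unicodedata module; the character choices below follow the examples in the text:

```python
import unicodedata as ud

# U+2ADC FORKING is the post-stability exception: it carries a canonical
# decomposition (and is composition-excluded), so its NFC form is the
# *decomposed* sequence U+2ADD U+0338.
forking = "\u2adc"
assert ud.normalize("NFC", forking) == "\u2add\u0338"

# U+08A1 BEH WITH HAMZA ABOVE is atomic: it has no decomposition, so it
# is NOT canonically equivalent to the sequence beh + hamza above.
beh_hamza_atomic = "\u08a1"
beh_hamza_seq = "\u0628\u0654"   # ARABIC LETTER BEH + ARABIC HAMZA ABOVE
assert ud.normalize("NFD", beh_hamza_atomic) == beh_hamza_atomic
assert ud.normalize("NFC", beh_hamza_seq) != beh_hamza_atomic
```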
Is it good practice to generally make heavyweight classes non-copyable?

I have a Shape class containing potentially many vertices, and I was contemplating making the copy-constructor/copy-assignment private to prevent accidental needless copying of my heavyweight class (for example, passing by value instead of by reference). To make a copy of Shape, one would have to deliberately call a "clone" or "duplicate" method. Is this good practice? I wonder why STL containers don't use this approach, as I rarely want to pass them by value.

Restricting your users isn't always a good idea. Just documenting that copying may be expensive is enough. If a user really wants to copy, then using the native syntax of C++ by providing a copy constructor is a much cleaner approach. Therefore, I think the real answer depends on the context. Perhaps the real class you're writing (not the imaginary Shape) shouldn't be copied, perhaps it should. But as a general approach, I certainly can't say that one should discourage users from copying large objects by forcing them to use explicit method calls.

IMHO, whether to provide a copy constructor and assignment operator depends more on what your class models than on the cost of copying. If your class represents values, that is, if passing an object or a copy of the object makes no difference, then provide them (and provide the equality operator also). If it doesn't, that is, if you think objects of the class have an identity and a state (one also speaks of entities), don't. If a copy makes sense, provide it with a clone or copy member.

There are sometimes classes you can't easily classify. Containers are in that position. It is meaningful to consider them as entities, pass them only by reference, and have special operations to make a copy when needed. You can also consider them simply as aggregations of values, in which case copying makes sense. The STL was designed around value types.
And as everything is a value, it makes sense for containers to be values too. That allows things like map<int, list<int> >, which are useful. (Remember, you can't put noncopyable classes in an STL container.)

Generally, you do not make classes non-copyable just because they are heavy (you have shown a good example yourself: the STL). You make them non-copyable when they are connected to some non-copyable resource like a socket, file, or lock, or when they are not designed to be copied at all (for example, when they have internal structures that can hardly be deep-copied). However, in your case your object is copyable, so leave it as it is. A small note about clone() -- it is used as a polymorphic copy constructor; it has a different meaning and is used differently.

Most programmers are already aware of the cost of copying various objects, and know how to avoid copies, using techniques such as pass by reference. Note that the STL's vector, string, map, list etc. could all be variously considered 'heavyweight' objects (especially something like a vector with 10,000 elements!). Those classes all still provide copy constructors and assignment operators, so if you know what you're doing (such as making a std::list of vectors), you can copy them when necessary. So if it's useful, provide them anyway, but be sure to document that they are expensive operations.

I like the examples of STL containers. Depending on your needs... If you want to ensure that a copy won't happen by mistake, and making a copy would cause a severe bottleneck or simply doesn't make sense, then this is good practice. Compile errors are better than performance investigations. If you are not sure how your class will be used, and are unsure whether it's a good idea or not, then it is not good practice. Most of the time you would not limit your class in this way.
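For completeness, here is a minimal sketch of the pattern the question describes (the `Shape` name and its vertex representation are hypothetical, not from any real codebase). In modern C++ you would delete the copy operations rather than make them private, and keep moves available so the type still works in containers:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

class Shape {
public:
    explicit Shape(std::vector<double> vertices)
        : vertices_(std::move(vertices)) {}

    // Accidental copies (e.g. pass-by-value) now fail to compile.
    Shape(const Shape&) = delete;
    Shape& operator=(const Shape&) = delete;

    // Moves stay cheap and allowed, so std::vector<Shape> still works.
    Shape(Shape&&) = default;
    Shape& operator=(Shape&&) = default;

    // Deliberate, visible deep copy.
    Shape clone() const { return Shape(vertices_); }

    std::size_t vertex_count() const { return vertices_.size(); }

private:
    std::vector<double> vertices_;
};
```

With this in place, a function taking `Shape` by value is a compile error at the call site, while `f(shape.clone())` makes the expensive copy explicit.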
// XXX: Use a ref-counted gene Pool.
// A population is then only a collection of pool item ids.
// This prevents duplicating potentially large genes.

#![feature(num_bits_bytes)]

extern crate bit_vec;
extern crate rand;
extern crate simple_parallel;

use std::cmp::{PartialOrd, Ordering};
use simple_parallel::Pool;

pub use prob::{Probability, ProbabilityValue};

pub mod bit_string;
pub mod mo;
pub mod nsga2;
pub mod crossover;
pub mod selection;
pub mod prob;

/// Maximizes the fitness value as objective.
#[derive(Copy, Clone, Debug, Default, PartialEq)]
pub struct MaxFitness<T: PartialOrd + Clone + Send + Default>(pub T);

impl<T: PartialOrd + Send + Clone + Default> PartialOrd for MaxFitness<T> {
    #[inline]
    fn partial_cmp(&self, other: &MaxFitness<T>) -> Option<Ordering> {
        self.0.partial_cmp(&other.0).map(|i| i.reverse())
    }
}

/// Minimizes the fitness value as objective.
#[derive(Copy, Clone, Debug, Default, PartialEq, PartialOrd)]
pub struct MinFitness<T: PartialOrd + Clone + Send + Default>(pub T);

/// Represents an individual in a Population.
pub trait Individual: Clone + Send {}

/// Manages a population of (unrated) individuals.
#[derive(Clone, Debug)]
pub struct UnratedPopulation<I: Individual> {
    population: Vec<I>,
}

/// Manages a population of rated individuals.
#[derive(Clone, Debug)]
pub struct RatedPopulation<I: Individual, F: PartialOrd> {
    rated_population: Vec<(I, F)>,
}

impl<I: Individual> UnratedPopulation<I> {
    pub fn new() -> UnratedPopulation<I> {
        UnratedPopulation { population: Vec::new() }
    }

    pub fn with_capacity(capa: usize) -> UnratedPopulation<I> {
        UnratedPopulation { population: Vec::with_capacity(capa) }
    }

    #[inline(always)]
    pub fn len(&self) -> usize {
        self.population.len()
    }

    #[inline]
    pub fn get(&self, idx: usize) -> &I {
        &self.population[idx]
    }

    pub fn add(&mut self, ind: I) {
        self.population.push(ind);
    }

    /// Evaluates the whole population, i.e. determines the fitness of
    /// each `individual` (unless already calculated).
    /// Returns the rated population.
    pub fn rate<E, F>(self, evaluator: &E) -> RatedPopulation<I, F>
        where E: Evaluator<I, F>,
              F: PartialOrd
    {
        let len = self.population.len();
        let rated_population: Vec<(I, F)> = self.population
            .into_iter()
            .map(|ind| {
                let fitness = evaluator.fitness(&ind);
                (ind, fitness)
            })
            .collect();
        debug_assert!(rated_population.len() == len);
        RatedPopulation { rated_population: rated_population }
    }

    /// Evaluate the population in parallel using the threadpool `pool`.
    pub fn rate_in_parallel<E, F>(self,
                                  evaluator: &E,
                                  pool: &mut Pool,
                                  chunk_size: usize)
                                  -> RatedPopulation<I, F>
        where E: Evaluator<I, F>,
              F: PartialOrd + Send + Default
    {
        let len = self.population.len();
        let mut rated_population: Vec<(I, F)> = self.population
            .into_iter()
            .map(|ind| {
                let fitness = F::default();
                (ind, fitness)
            })
            .collect();
        pool.for_(rated_population.chunks_mut(chunk_size), |chunk| {
            for &mut (ref ind, ref mut fitness) in chunk.iter_mut() {
                *fitness = evaluator.fitness(ind);
            }
        });
        debug_assert!(rated_population.len() == len);
        RatedPopulation { rated_population: rated_population }
    }
}

impl<I: Individual, F: PartialOrd> RatedPopulation<I, F> {
    pub fn new() -> RatedPopulation<I, F> {
        RatedPopulation { rated_population: Vec::new() }
    }

    pub fn with_capacity(capa: usize) -> RatedPopulation<I, F> {
        RatedPopulation { rated_population: Vec::with_capacity(capa) }
    }

    #[inline(always)]
    pub fn len(&self) -> usize {
        self.rated_population.len()
    }

    pub fn add(&mut self, ind: I, fitness: F) {
        self.rated_population.push((ind, fitness));
    }

    #[inline]
    pub fn get(&self, idx: usize) -> &I {
        &self.rated_population[idx].0
    }

    #[inline]
    pub fn get_individual(&self, idx: usize) -> &I {
        self.get(idx)
    }

    #[inline]
    pub fn get_fitness(&self, idx: usize) -> &F {
        &self.rated_population[idx].1
    }

    fn extend_with(&mut self, p: RatedPopulation<I, F>) {
        self.rated_population.extend(p.rated_population);
    }

    // Note: "less than" means "fitter" here, because MaxFitness reverses
    // the ordering in its PartialOrd impl.
    #[inline]
    pub fn fitter_than(&self, i1: usize, i2: usize) -> bool {
        self.get_fitness(i1) < self.get_fitness(i2)
    }

    /// Return index of individual with best fitness.
    pub fn fittest(&self) -> usize {
        assert!(self.len() > 0);
        let mut fittest = 0;
        for i in 1..self.rated_population.len() {
            if self.rated_population[i].1 < self.rated_population[fittest].1 {
                fittest = i;
            }
        }
        return fittest;
    }
}

/// Evaluates the fitness of an Individual.
pub trait Evaluator<I: Individual, F: PartialOrd>: Sync {
    fn fitness(&self, individual: &I) -> F;
}

pub enum VariationMethod {
    Crossover,
    Mutation,
    Reproduction,
}

/// Mates two individuals, producing one child.
pub trait OpCrossover1<I: Individual> {
    fn crossover1(&mut self, parent1: &I, parent2: &I) -> I;
}

/// Mates two individuals, producing two children.
pub trait OpCrossover<I: Individual> {
    fn crossover(&mut self, parent1: &I, parent2: &I) -> (I, I);
}

/// Mutates an individual.
pub trait OpMutate<I: Individual> {
    fn mutate(&mut self, ind: &I) -> I;
}

/// Selects a variation method to use.
pub trait OpVariation {
    fn variation(&mut self) -> VariationMethod;
}

/// Selects a random individual from the population.
pub trait OpSelectRandomIndividual<I: Individual, F: PartialOrd> {
    fn select_random_individual<'a>(&mut self,
                                    population: &'a RatedPopulation<I, F>)
                                    -> usize; // IndividualIndex
}

/// Produce a new generation through selection of \mu individuals from population.
pub trait OpSelect<I: Individual, F: PartialOrd> {
    fn select(&mut self, population: &RatedPopulation<I, F>, mu: usize) -> RatedPopulation<I, F>;
}

pub fn variation_or<I, F, T>(toolbox: &mut T,
                             population: &RatedPopulation<I, F>,
                             lambda: usize)
                             -> (UnratedPopulation<I>, RatedPopulation<I, F>)
    where I: Individual,
          F: PartialOrd + Clone,
          T: OpCrossover1<I> + OpMutate<I> + OpVariation + OpSelectRandomIndividual<I, F>
{
    // We assume that most offspring are unrated, with only a small amount
    // of already-rated offspring.
    let mut unrated_offspring = UnratedPopulation::with_capacity(lambda);
    let mut rated_offspring = RatedPopulation::new();

    // Each step produces exactly one child.
    for _ in 0..lambda {
        let method = toolbox.variation();
        match method {
            VariationMethod::Crossover => {
                // Select two individuals and mate them.
                // Only the first offspring is used, the second is thrown away.
                let parent1_idx = toolbox.select_random_individual(population);
                let parent2_idx = toolbox.select_random_individual(population);
                let child1 = toolbox.crossover1(population.get(parent1_idx),
                                                population.get(parent2_idx));
                unrated_offspring.add(child1);
            }
            VariationMethod::Mutation => {
                // Select a single individual and mutate it.
                let ind_idx = toolbox.select_random_individual(population);
                let child = toolbox.mutate(population.get(ind_idx));
                unrated_offspring.add(child);
            }
            VariationMethod::Reproduction => {
                let ind_idx = toolbox.select_random_individual(population);
                rated_offspring.add(population.get_individual(ind_idx).clone(),
                                    population.get_fitness(ind_idx).clone());
            }
        }
    }
    return (unrated_offspring, rated_offspring);
}

// The (\mu + \lambda) algorithm.
// From `population`, \lambda offspring are produced, through either
// mutation, crossover or random reproduction.
// For the next generation, \mu individuals are selected from the \mu + \lambda
// (parents and offspring).
#[inline]
pub fn ea_mu_plus_lambda<I, F, T, E, S>(toolbox: &mut T,
                                        evaluator: &E,
                                        mut population: RatedPopulation<I, F>,
                                        mu: usize,
                                        lambda: usize,
                                        num_generations: usize,
                                        stat: S,
                                        numthreads: usize,
                                        chunksize: usize)
                                        -> RatedPopulation<I, F>
    where I: Individual,
          F: PartialOrd + Clone + Send + Default,
          T: OpCrossover1<I> + OpMutate<I> + OpVariation + OpSelectRandomIndividual<I, F> + OpSelect<I, F>,
          E: Evaluator<I, F> + Sync,
          S: Fn(usize, usize, &RatedPopulation<I, F>)
{
    let mut pool = simple_parallel::Pool::new(numthreads);

    // let nevals = population.rate_in_parallel(evaluator, &mut pool, chunksize);
    // stat(0, nevals, &population);

    for gen in 0..num_generations {
        // Evaluate the population. Make sure that every individual has been rated.
        let (unrated_offspring, rated_offspring) = variation_or(toolbox, &population, lambda);
        let nevals = unrated_offspring.len();
        population.extend_with(rated_offspring);
        population.extend_with(unrated_offspring.rate_in_parallel(evaluator, &mut pool, chunksize));
        // Select from the offspring the `best` individuals.
        let p = toolbox.select(&population, mu);
        stat(gen + 1, nevals, &p);
        population = p;
    }
    return population;
}
Traditionally, Beaker's design processes have been fairly closed. If you weren't part of the Beaker team, participating directly in conversations on IRC or following the right bugs in Bugzilla or code reviews on Gerrit, major changes in Beaker's capabilities could come as a complete surprise. When release announcements were made (and they generally weren't announced on this list), they consisted of a series of bug numbers and titles, without a clear explanation of what the new release might mean for *users* of an upgraded Beaker installation.

We don't think this is a healthy situation for the project to be in, and so we're taking a number of steps to help change that in the run-up to Beaker 1.0 (currently expected in mid-to-late March).

1. We have now published a technical roadmap at

We aim for this to provide a high-level view of what the Beaker team see as Beaker's major current shortcomings, as well as some insight into what we're actively working on. We value external contributions to any part of the system, especially those that align with this roadmap.

2. Proposals for major new features and architectural changes in Beaker will now be published at http://beaker-project.org/dev/

I've been a CPython core developer for several years, and one of the things I think we get fairly right is the Python Enhancement Proposal process for discussing and reviewing significant changes to the language and standard library. By introducing a similar process for Beaker, we hope to stimulate early feedback from our users and other developers, resulting in more appropriate final designs. If you have an idea for a design proposal (especially if it's related to a topic already on the technical roadmap) please send it to this list (beaker-devel(a)lists.fedorahosted.org) for review and discussion.

3.
Future Beaker releases will be accompanied by user- and admin-oriented release notes (not just a list of references to developer-oriented bug reports).

Again influenced by CPython's development processes (in this case, the "What's New in Python X.Y?" guide that accompanies each major release), we hope that these enhanced release notes will help Beaker users become aware of the interesting new capabilities provided in each release, as well as which significant bug fixes allow things that previously did not work.

Another change that was made some time ago, but perhaps not widely advertised, is the provision of a public "Developer Guide" at , as well as the web service API documentation at http://beaker-project.org/server-api/

Note, all of the links above are accessible from the "For Developers" section on the main Beaker help page at http://beaker-project.org/help.html

Red Hat Infrastructure Engineering & Development, Brisbane
Python Applications Team Lead
Beaker Development Lead (http://beaker-project.org/)
PulpDist Development Lead (http://pulpdist.readthedocs.org)
Data Quality & Information Governance Conference
Event - San Diego, US | 11-15 June, 2018

The Data Governance and Information Quality Conference (DGIQ) is the world’s largest event dedicated entirely to data governance and information quality. Semarchy was proud to be a Platinum Sponsor of this event featuring six tracks and more than 35 case studies. We led the conversation around how an Intelligent Data Hub addresses all of these capabilities in one single platform. Delegates had the opportunity to see that, among a plethora of technology solutions, Semarchy stands alone in its ability to solve for most of the use cases discussed at the Summit with our xDM platform.

Missed the Speaking Session?

Best Practices Session: PIMCO CSI: Data Management – Or, what we learned on the way to CRM with Data Governance

Sam Zamora, Master Data Architect at PIMCO, joined DGIQ to share his success story. Sam talked about the system he was able to build at PIMCO to master their contact, organization, and client office detail along with product, location, and reference data, wrapped up with a bit of intelligent algorithms and batch processes to build an elegant solution. In this best practices session, Sam detailed his experiences working with the xDM Intelligent Data Hub from Semarchy to solve for governance, quality, enrichment, and MDM in parallel.

- How to get business execs on board for seemingly “boring” data projects
- How to make MDM & Data Governance invisible, and at the same time ever-present
- How to get your data situated in such a way as to take advantage of recent advances in data science, machine learning, etc.

Perspectives Session: Pragmatic GDPR Single View of Person

Working with clients across Europe, the Semarchy Team has learned a great deal about implementing a fast-path construct for addressing the data requirements related to GDPR.
While no single piece of software, nor consulting engagement, is a “silver bullet,” a smart blend of people, processes, and technology can solve for the well-understood use cases around portability, access, rectification, and the ever-popular erasure, or “right to be forgotten.” This session, led by Semarchy CMO Michael Hiskey and Senior Customer Success Consultant Robin Peel, focused on practical experience from GDPR implementations, and explored some solutions to help DPOs, data controllers, and processors address data subject access requests and associated compliance requirements. Lasting change is possible and affords a myriad of unintended benefits from increased intimacy with the people data in your organization.

- Why GDPR applies even to organizations that generally operate outside of the EU
- Best practices from European organizations that have been out in front of the deadline
- How to get executives and peer department heads on board for smooth compliance planning
Imagery captured with the Parrot Sequoia camera is automatically recognized by the software, since the camera is in the Pix4Dmapper camera database.

Processing Sequoia multispectral imagery

1. Create a new project: Step 2. Creating a Project.
2. Import all discrete band imagery (green, red, red edge, and NIR imagery) from the flight. This includes the images of the calibration target.
3. Choose the Ag Multispectral processing template: Processing options.
4. The radiometric calibration target images can be automatically detected and used by the software in certain conditions. For more information: Radiometric calibration target.
5. If the radiometric calibration target data is not detected automatically, this process can still be done manually. For more information: Radiometric calibration target.
6. Select the radiometric correction options corresponding to the device configuration you used to fly. For more information: Radiometric corrections.
7. Start processing.

Processing Sequoia RGB images

1. Create a new project: Step 2. Creating a Project.
2. Import all the RGB images in the same project.
3. If the flight plan is linear, ensure the Linear Rolling Shutter model is selected: How to correct for the Rolling Shutter Effect.
4. Choose the Ag RGB processing template: Processing Options Default Templates.
5. Start processing.

Processing Sequoia+ with its targetless radiometric calibration

The usage of a radiometric calibration target is not necessary with the targetless radiometric calibration of Parrot's Sequoia+. However, in order to use the targetless radiometric calibration of Sequoia+, the file sequoia_therm.dat is required. This file is generated automatically for each flight, is unique to each Sequoia+, and can be found for each set of images in the folder where the images are stored during the acquisition mission.

To process a project with Sequoia+:

1. Create a new Pix4Dmapper project: Step 2. Creating a Project.
2. Import your multispectral images, i.e.
Green, Red, Red Edge, and NIR.
3. Click Next.
4. Select the Ag Multispectral processing template: Processing Options Default Templates.
5. Select the radiometric correction according to the lighting conditions when the images were captured: Menu Process > Processing Options... > 3. DSM, Orthomosaic and Index > Index Calculator.
5.1. Camera, Sun Irradiance and Sun angle for clear skies
5.2. Camera and Sun Irradiance for overcast skies.
6. Start processing.

Article feedback (for troubleshooting, post here instead)

I have the following situation: my project is about riparian forest research along a river using the Sequoia multispectral camera + sunshine sensor (on an eBee Classic). I divided the study area into blocks, and within each block I had several flight missions, mainly as far as the drone's battery endured. So, the steps were: radiometric calibration > starting flight no. 1 > auto landing once the battery is low > changing the battery > radiometric calibration again > launching flight no. 2 (resuming the previous mission), and so on. As a result, I have blocks with 3, 4, and even 5 flights (each with radiometric calibration imagery). Processing the project with Pix4D, I have the following questions:

For the multispectral processing project:

For RGB processing:

3. Can I process all flights of the same day together (as no multispectral calibration is happening here)? Or is it better to process flights that are close to each other (in terms of time)?

Awaiting your reply. Thank you for your support.

1. That would be the correct way of processing. However, if you want very accurate radiometric correction, you can also process each flight dataset separately with the target for that flight and then merge in QGIS.

2. It all depends on the sun and weather conditions. The sunlight might vary even over a small amount of time. However, the DLS sensor is recording that already, so if you see there is no drastic change of sunlight, you can use your workflow.

3.
You can process the RGB images together, but the images from each flight might be detected as a separate camera. Sometimes the camera serial number written in the image EXIF tags changes, and the software will create several camera models according to that, or when exterior conditions (weather, exposure) vary greatly during the flight. In any case, we do not expect this to negatively influence the reconstruction of your project. However, for large multi-flight projects, we recommend processing them separately and then merging them. For merging projects, please follow the instructions here: https://support.pix4d.com/hc/en-us/articles/202558529#gsc.tab=0

Please make sure when designing the different image acquisition plans, that:

Thank you for your reply. I processed flights of the same block (and close to each other) together. But, as I want more precision, I am going to process each flight separately. However, as the report (after processing step 1) shows, the edges of the processed area have less image overlap, and such areas are shown in red in the report. I suppose this means the edge area is less reliable and invalid for further calculation (and for that reason I plan flights so that they cover a somewhat larger area than I actually need). Processing flights separately results in a separate area for each flight surrounded by the red (less image overlap) area. Comparing it with the area resulting from processing all flights together, I suppose the red (i.e. invalid data) area is larger. I am attaching an illustration to make clear what I mentioned above. The first image shows the situation when 3 flights were processed together, and the remaining images are the separately processed flights (no. 1, 2, 3). When all flights were processed together, the areas where 1 borders 2 and 2 borders 3 have enough image overlap (i.e. good quality data). However, the same areas are shown in red when these flights are processed separately (where they are "edges").
Is this aspect worth any concern? Does this mean that processing flights separately results in less valid data for further processing?

Actually, they are not invalid data; they will have a value in each pixel. It just means there was less overlap in those areas, so each matching keypoint has been found in only a few images. The reconstruction would not be as accurate as in the middle, but it would still be usable. If there are NaN values, you will just see them as holes in the map and also on this overlap graph. Next time, if you want more radiometric accuracy and want to process the flights separately, I would recommend keeping extra flight lines in all directions. Also, if the light conditions did not change much, you can use only one panel (the middle one) for the three flights together, as the DLS sensor is already measuring the small changes accurately. You will have to check for yourself which method gives more accurate data in your case.
Novel–The Cursed Prince – Chapter 693 – Full-filling

He loved every second he spent with her, and he was grateful for the life they were building together. Mars had always regarded Emmelyn as his compensation for living a cursed existence for 27 years.

"Well, for one, I feel full every time you are inside me, because you're filling me up when we have sex," Emmelyn said bluntly. "So... that's full and filling. Full-filling. Right?"

Emmelyn cast her glance around them and realized her husband was right. Not a single soul could be seen now, which was strange, because usually someone would always hover around her husband: a knight, a minister, staff, or the royal butler.

"You think so?" Clara looked at Edgar with her big round eyes, filled with anxiety. "What if she doesn't?"

"I don't know," Emmelyn admitted. "What do you want?"

Emmelyn laughed too. She just spoke whatever came to her mind and was glad her husband found it hilarious. The queen was in a really good mood today, after she had met Clara and found out she would marry Edgar very soon.

They were resting in bed, cuddling, enjoying the comfort of each other's arms. Mars felt all his tiredness and stress from running a kingdom disappear once he could wrap his arms around Emmelyn's waist and feel her soft body, her gentle breathing, and her pleasant scent around him.

"Well, can you see anyone around here?" Mars asked her.

"Where is everyone?" Emmelyn asked, wondering. "Did you tell them to go? When?"

Edgar let Clara's hand go and nodded at his footman, who quickly opened the carriage door for his fiancee. He looked at her with a sweet smile. "Get in. We will see my parents soon."

Mars grinned and pecked her pouting lips. "No, we're not."

"Well... my people are clever," Mars explained. "When they saw you enter, they immediately left to give us privacy. They will not come back unless you leave or I summon them."

"I know, right?" Emmelyn traced his chest and spoke sweetly. "I love being married to you. I love our life together, now that everything is behind us. Every day with you and our children beats the ones before."

"I am just nervous," Clara whispered. "Would your mother even like me...?"

Emmelyn automatically wrapped her arms around his neck and scolded him, "Gosh... don't surprise me!"

"Okay... continue your story about how you told Kira that our sex life is fulfilling and awesome," Mars said with a grin. "What's so fulfilling?"

"Absolutely!" Edgar said firmly. "She will adore you."

"Because you are beautiful and I can never have enough of you," Mars said seriously. "I want to see you everywhere."
Panel not expanding while user is touching the screen.

I want to be 100% sure that the panel slides to EXPANDED state no matter what. However, when the user is in the middle of a listview/viewpager scroll, and possibly other touch interactions, the panel does not appear. This is basically what happens:

onPanelStateChanged() called with: previousState = [COLLAPSED], newState = [DRAGGING]
onPanelSlide() called with: slideOffset = [1.0]
onPanelStateChanged() called with: previousState = [DRAGGING], newState = [COLLAPSED]

It starts dragging and collapses instantly. I can work around this by disabling touch events: overriding dispatchTouchEvent(MotionEvent ev) to return true (without calling super) before setting setPanelState(PanelState.EXPANDED), and until onPanelStateChanged() reports newState EXPANDED. However, this seems super hacky and unreliable. Maybe you guys have more ideas how to tackle it?

I also got this error while scrolling a listview. I set a timer to auto-expand the panel after a fixed time. But when the timer ends while I'm scrolling the listview, the panel is not expanded and gets the same state as you showed.

I had the same issue because I was disabling the user from interacting with the panel and setting the state programmatically. If the user clicked anywhere on the root layout while the panel was moving, the panel would get stuck and stop expanding or collapsing. I was able to use @mtyy's suggestion and override dispatchTouchEvent(MotionEvent ev), consuming the event (returning true) while the panel state is DRAGGING and calling the super otherwise:

@Override
public boolean dispatchTouchEvent(MotionEvent ev) {
    return this.getPanelState() == PanelState.DRAGGING || super.dispatchTouchEvent(ev);
}

In my case, when the drawer is collapsing or expanding, a tap on the screen will stop the animation.
My workaround: put a view (visibility: gone) above the sliding layout and apply the following actions:

slidingLayout.addPanelSlideListener(new SlidingUpPanelLayout.PanelSlideListener() {
    @Override
    public void onPanelSlide(View panel, float slideOffset) {
    }

    @Override
    public void onPanelStateChanged(View panel, SlidingUpPanelLayout.PanelState prevState,
                                    SlidingUpPanelLayout.PanelState newState) {
        if ((prevState == SlidingUpPanelLayout.PanelState.COLLAPSED
                || prevState == SlidingUpPanelLayout.PanelState.EXPANDED)
                && newState == SlidingUpPanelLayout.PanelState.DRAGGING) {
            clickInterceptor.setVisibility(View.VISIBLE);
        } else if (prevState == SlidingUpPanelLayout.PanelState.DRAGGING
                && (newState == SlidingUpPanelLayout.PanelState.COLLAPSED
                || newState == SlidingUpPanelLayout.PanelState.EXPANDED)) {
            clickInterceptor.setVisibility(View.GONE);
        }
    }
});

Anyone with knowledge of a nicer solution - let me know.

We encountered this issue today. I was able to fix it by modifying SlidingUpPanelLayout#dispatchTouchEvent() to only call ViewDragHelper#abort() if the ViewDragHelper is in STATE_IDLE. I've created PR #917 to fix this issue.

I also ran into this issue. This happens when we tap outside the panel (on the non-panel area) while the panel is sliding. Here, the newState is set to ANCHORED, so we can force it to slide down to COLLAPSED. This would be natural for the user, as he had already tapped outside the panel, which means that he wanted to touch the area behind it.

@Override
public void onPanelStateChanged(View panel, SlidingUpPanelLayout.PanelState previousState,
                                SlidingUpPanelLayout.PanelState newState) {
    switch (newState) {
        case EXPANDED:
            // your logic
            break;
        case COLLAPSED:
            // your logic
            break;
        case ANCHORED:
            // This is when we tap outside the panel while it is sliding.
            // Here, we force it to slide down to COLLAPSED.
slidingUpPanelLayout.setPanelState(SlidingUpPanelLayout.PanelState.COLLAPSED); break; } } To escape the aftermath of these unintended touches on objects (buttons, etc) while you are not ready to act on those touches, use setFadeOnClickListener so all the touches are forwarded to our SlidingUpPanelLayout where we will not do anything about these touches.
Ever since the Windows 10 Anniversary Update, it has been possible to install a Linux system on Windows. Follow the instructions below to install Linux/bash on Windows 10. Note that if you only want to use the ready-made grammatical analysers (as explained on the Linguistic analysis page), you do not need this; this documentation is relevant when you want to participate in building and developing the grammatical tools yourself. Then return here.

To access Windows files from the Linux window, do `ls /mnt/` and navigate from there. A good idea would be to make an alias in the .profile file of your Linux home folder, e.g. something along the lines of:

```
alias lgtech="pushd /mnt/c/Users/YourUserName/Documents/lgtech"
```

… where YourUserName should be replaced with just that. (Note there must be no spaces around the `=` in a bash alias.) The path starts with /mnt/; you should check that the rest of the path is what you want. `lgtech` will then bring you directly to the relevant folder. You may then want to install all language technology files here. The good thing about installing them here and not under the home directory is that you can access the files with Windows programs as well (but remember to use UTF-8 encoding!)

Then follow the instructions for Linux to get the things you need for participating in the development of language technology tools. Remember that if you only want to use the tools, you may stop here and instead just download the analysers; see the page on linguistic analysis.

You need a number of tools for the build chain. We assume you have installed Ubuntu on your Windows machine.
If you installed some other Linux version, look at its documentation for how to install programs like the ones below:

```
sudo apt-get install autoconf automake libtool libsaxonb-java python3-pip \
python3-lxml python3-bs4 python3-html5lib libxml-twig-perl antiword xsltproc \
poppler-utils wget python3-svn wv python3-feedparser subversion openjdk-11-jdk cmake \
python3-tidylib python3-yaml libxml-libxml-perl libtext-brew-perl
```

You need tools to convert your linguistic source code (lexicons, morphology, phonology, syntax, etc.) into useful tools like analysers, generators, hyphenators and spellers. To get that, run these two commands in the terminal:

```
wget https://apertium.projectjj.com/apt/install-nightly.sh -O - | sudo bash
sudo apt-get -f install apertium-all-dev
```

The first command downloads a shell script and runs it; the shell script in turn will download and install prebuilt binaries for programs for morphology, syntax and machine translation. Rerun at regular intervals, e.g. once a year, to get the latest updates.

hfst is our default compiler, and it builds all our tools. It is open source, and it is needed for turning your morphology and lexicon into spellcheckers and other useful programs. The following two programs are not needed; we just refer to them since the source code is compatible with them. If you don't know whether you need them, just skip them.

In order to participate in the development work, you need an editor, a program for editing text files. Here are some candidates: Any other editor handling UTF-8 should be fine as well.
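As a side note, the `lgtech` alias suggested earlier can also be written as a small shell function with a path check, so a mistyped path fails loudly instead of silently. This is just a sketch; the path and name are the same examples used above:

```shell
# Function variant of the lgtech alias; adjust the path for your user name.
LGTECH_DIR="/mnt/c/Users/YourUserName/Documents/lgtech"
lgtech() {
  if [ -d "$LGTECH_DIR" ]; then
    pushd "$LGTECH_DIR" > /dev/null
  else
    echo "Path not found: $LGTECH_DIR"
  fi
}
```

Put it in .profile just like the alias, and it behaves identically when the directory exists.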
Recently I tried Vite and I was blown away by how fast it was. I re-checked whether I had started the app correctly, because I couldn't believe how quickly the dev server started. So, here's a short article on what Vite is and how we can create a new React project with it.

What is Vite?

Vite is a build tool created by Evan You (the creator of Vue). It provides a faster development experience with instant server start, super fast Hot Module Replacement (HMR), and out-of-the-box support for TypeScript as well.

Blazing fast TypeScript with Webpack and ESBuild: if you'd like to learn more about an esbuild setup with Webpack 5.

Create a new project

Let's create a new project with Vite:

```
yarn create vite
```

And we have our Vite project 🎉!

```
├── index.html
├── package.json
├── src
│   ├── App.css
│   ├── App.tsx
│   ├── favicon.svg
│   ├── index.css
│   ├── logo.svg
│   ├── main.tsx
│   └── vite-env.d.ts
├── tsconfig.json
└── vite.config.ts
```

Let's start our dev server:

```
cd vite-project
yarn install
yarn dev
```

Vite uses Rollup to build and optimize static assets. Let's build our project. We have our static assets ready to be served!

Top comments (17)

Speed is really the only truly measurable UX enhancement. At a certain threshold it's not necessarily the most important thing, but what is more important than time? Anything that can keep me in my flow by not blocking me to wait can be the difference between effective and ineffective capacity to accomplish the task. So I'm all about this, and it generally feels like it's coming from a good place of solving a distinct problem.

Is it possible to switch from CRA to Vite?

I have done it many times, it's a pretty easy migration. Though all the CRAs that I have transferred to Vite had no custom Webpack configs.

The only custom thing I did was add CRACO and Tailwind CSS.

I just started learning React; it's been a few weeks by now. I'm reading a bit here and a bit there in my spare time, writing a bit of code in my breaks... this kind of stuff.
Vite has caught my attention, but I don't want to add another layer of complexity on top of something that is already confusing.

I see that you are maybe using CRACO to enable Tailwind's watch mode etc. Vite already works out of the box with Tailwind, so I think Vite is better for you in that regard. CRA is pretty established in the React tooling ecosystem compared to Vite, so unless you are working on a critical kind of project, Vite is the way to go.

Nothing critical =) just my portfolio, which I'm using as an excuse to learn React. I'll give it a go then! Thanks for your input!

Hey Renan, yes, it's pretty easy to migrate to Vite with few changes, but if you're new to the React ecosystem CRA might be a better choice.

Hi, thanks for the reply! One question, though: why would CRA be better? Is there anything special about it I should learn before looking at Vite?

Hey, CRA is quite stable and most widely used, so if you face any issue there's a good chance that there is a GitHub issue on CRA's repo and it has either been solved or is being actively worked on. Vite is relatively new compared to CRA. You can definitely learn Vite, but in the end these tools are just wrappers around bundlers like Webpack, Rollup, and esbuild. You can use any of CRA, Vite, etc., whichever you feel confident with. Hope this helps!

Yes, it does! I currently have no issues with CRA, but everywhere I see people talking about Vite they praise its speed. I'm not exactly in need of any speed improvement as I'm just writing a simple portfolio with maybe 4 pages; I don't see why I would need faster compile times when my compile times are already fast enough. But since I'm quite new to the confusing world of front-end development (and the even more confusing world of tooling!) I kinda want to learn everything I possibly can.

Makes sense. Good luck with your project!

Vite uses esbuild for development and Rollup for the production bundle.

Vite is my favourite with React and TypeScript!
I have a very old ThinkPad X220 with an HDD and everything starts up in around 1-2s. Just imagine how fast it is with a newer laptop & SSD :D

Yes, me too. Thank you, I fixed the issue.
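As a footnote to the article above: the generated project ships a minimal `vite.config.ts`, which looks roughly like this (a sketch; `@vitejs/plugin-react` is the official React plugin name at the time of writing, and the `server.port` shown is an example, so check the Vite docs for your version):

```typescript
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";

// Minimal Vite config for a React + TypeScript project.
export default defineConfig({
  plugins: [react()],
  server: {
    port: 3000, // example dev-server port; Vite picks its own default if omitted
  },
});
```

Everything else (entry point, TypeScript handling, HMR) works without further configuration.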
UI Canvas judder related to manual render path

Using a UI Canvas in world space and attached to a tracked camera results in very noticeable judder (or an effect that looks like judder). It seems to be a side-effect of the manual render path. A sample Unity binary has been included that shows the effect and can toggle between the current manual camera render and the normal camera render pipelines. Even when the normal rendering pipeline is used, there are jumps in the UI text. It looks like this could be related to time warp. Thoughts?

Sample binary: https://drive.google.com/file/d/0B5TmckqIm686VXNscnUyWkZJWjg/view?usp=sharing
Sample project: https://drive.google.com/file/d/0B5TmckqIm686S21aUUJ4Z2NkekU/view?usp=sharing

This project has only one change to the OSVR Unity SDK: a change to VRSurface.Render(). This change is not a proposed fix but is meant to demonstrate the issue.

```csharp
// Render the camera
public void Render()
{
    Camera.targetTexture = RenderToTexture;
    if (!Camera.enabled)
    {
        Camera.Render();
    }
}
```

Haven't dug into this yet, but are you seeing the same issue with the worldspace GUI in https://github.com/OSVR/Unity-VR-Samples ? Attaching a UI to a tracked camera is bad practice in VR; it's recommended to build the UI into the environment like in the Unity VR Sample. We do want to be able to overlay information HUD-style, but the approach should be similar to SteamVR's compositor, where RenderManager will have a separate layer for UI (I think this is on the roadmap, but not being worked on until some other pieces fall into place).

Attaching UI elements to a tracked camera can be bad practice, but this is not true 100% of the time. This bug needs to be fixed because the current rendering pipeline delivers unexpected results.

Interesting, I was expecting the artifacts to be due to timewarping. A compositor is needed so that the layer with the 3D scene is timewarped but the UI layer is not. However, turning off timewarp doesn't affect this.
It looks more like the GUI is drawn twice (https://github.com/OSVR/Unity-VR-Samples/issues/6). It even shows up in the preview window, not just the headset.

Chasing this a little further, it appears this is also an issue in SteamVR (http://steamcommunity.com/app/358720/discussions/0/451848855027493382/), and it looks like it's due to a bug in Unity, introduced in Unity 5.2.3. From the Unity 5.2.3 changelog: "(710195) - Fixed issue where UI would not get rendered for disabled cameras when manually calling .Render()." I tried the VRSamples project in Unity 5.2.2 and can confirm the GUI is not double-drawn. Re-opening the project in later versions of Unity has the double GUI issue. Apparently this has not been fixed as of the Unity 5.4 beta.

This is fixed in Unity 5.3.5. I've updated Unity-VR-Samples and no longer see the double GUI reticle that I see in Unity 5.3.4. I tested @chase-cobb's OSVR Judder project and no longer see a double-rendered GUI on the manual rendering path; now there is no apparent difference between manual and auto rendering in that example.

@DuFF14 can you tell me how to disable TimeWarp in Unity? I read a lot about it, but found no explanation. Maybe it will help me with another problem I'm experiencing right now.

@sroettgermann Timewarp is enabled/disabled in the "renderManagerConfig" section of an OSVR server configuration file, along with a bunch of other rendering settings that are useful for tweaking performance and testing:
https://github.com/sensics/OSVR-RenderManager/blob/master/doc/renderManagerConfig.md#timewarp
https://github.com/OSVR/OSVR-Core/blob/master/apps/sample-configs/osvr_server_config.renderManager.HDKv2.0.direct.json#L38

Thank you @DuFF14. I will go through the links and do some testing. Are there also any render settings regarding TimeWarp which can be set in Unity directly?
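For reference, the relevant fragment of such a server config looks roughly like this. This is a sketch based on the renderManagerConfig documentation linked above; surrounding keys are omitted and the exact field names should be verified against that document:

```json
{
  "renderManagerConfig": {
    "timeWarp": {
      "enabled": false
    }
  }
}
```

After editing the config, restart osvr_server so the setting takes effect.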
The Impact of Canonical and Non-canonical El Niño and the Atlantic Meridional Mode on Atlantic Tropical Cyclones

This project examines the roles of sea surface temperature (SST) variations in the tropical Atlantic and Pacific on Atlantic tropical cyclones (TCs, including hurricanes). The SST variations considered are those associated with two types of El Niño events: the traditional East Pacific (EP) events, in which the largest SST warming occurs in the equatorial EP, and the non-traditional Central Pacific (CP) El Niño events, in which the maximum warming occurs along the equator near the dateline. In addition, the work will consider the impact of SST anomalies associated with the Atlantic Meridional Mode (AMM), a meridional dipole pattern with warming north of the equator accompanied by cooling south of the equator, and vice versa. Previous research associates La Niña events with enhanced Atlantic TC activity, while El Niño events are associated with suppression of TCs in the Atlantic; in particular, the PIs estimate that the probability of one or more major hurricanes making landfall on the U.S. coast is 23% during El Niño compared to 63% during La Niña. However, the extent to which these relationships hold for both CP and EP El Niño events has not been determined, and the few studies that have been performed show conflicting results. The AMM is also believed to have an impact on Atlantic TCs, but the mechanism of this influence is not well understood, in part because the AMM modulates several environmental factors that cooperate in their influence on Atlantic TCs. The research tests three specific hypotheses, the first of which posits that the primary influence of the AMM on Atlantic TCs is through the cross-equatorial SST gradient associated with the SST anomalies, which modulates the circulation and thermodynamics of the overlying atmosphere.
The second hypothesis is that the geographic difference between CP and EP El Niño events is of less importance than the strength of the SST anomalies; thus CP El Niños are expected to have a smaller influence than EP events simply because CP events tend to be weaker. The third hypothesis is that there is constructive interference between La Niña events and positive AMM excursions, so that the combination of the two is extremely supportive of Atlantic TCs, while a similar destructive interference between El Niño and negative AMM results in near-average Atlantic TC activity. The bulk of the work for the project consists of numerical experiments using the Weather Research and Forecasting (WRF) model, configured as a Tropical Channel Model (TCM), meaning a re-entrant domain in the zonal dimension with meridional boundaries at 30°S and 50°N. The work has broader societal impacts due to the potential value of a better understanding of the relationships between Atlantic TCs and the SST variability associated with El Niño and the AMM, which could be used to anticipate how active the Atlantic hurricane season will be in a given year. In addition, the work supports an early-career scientist from a traditionally underrepresented group.
import io
from dataclasses import dataclass

from kaitaistruct import KaitaiStream
from OpenSSL import SSL

from mitmproxy import connection
from mitmproxy.contrib.kaitaistruct import dtls_client_hello
from mitmproxy.contrib.kaitaistruct import tls_client_hello
from mitmproxy.net import check
from mitmproxy.proxy import context


class ClientHello:
    """
    A TLS ClientHello is the first message sent by the client when initiating TLS.
    """

    _raw_bytes: bytes

    def __init__(self, raw_client_hello: bytes, dtls: bool = False):
        """Create a TLS ClientHello object from raw bytes."""
        self._raw_bytes = raw_client_hello
        if dtls:
            self._client_hello = dtls_client_hello.DtlsClientHello(
                KaitaiStream(io.BytesIO(raw_client_hello))
            )
        else:
            self._client_hello = tls_client_hello.TlsClientHello(
                KaitaiStream(io.BytesIO(raw_client_hello))
            )

    def raw_bytes(self, wrap_in_record: bool = True) -> bytes:
        """
        The raw ClientHello bytes as seen on the wire.

        If `wrap_in_record` is True, the ClientHello will be wrapped in a synthetic
        TLS record (`0x160303 + len(chm) + 0x01 + len(ch)`), which is the format
        expected by some tools.

        The synthetic record assumes TLS version (`0x0303`), which may be different
        from what has been sent over the wire. JA3 hashes are unaffected by this as
        they only use the TLS version from the ClientHello data structure.

        A future implementation may return not just the exact ClientHello, but also
        the exact record(s) as seen on the wire.
        """
        if isinstance(self._client_hello, dtls_client_hello.DtlsClientHello):
            raise NotImplementedError
        if wrap_in_record:
            return (
                # record layer
                b"\x16\x03\x03"
                + (len(self._raw_bytes) + 4).to_bytes(2, byteorder="big")
                # handshake header
                + b"\x01"
                + len(self._raw_bytes).to_bytes(3, byteorder="big")
                # ClientHello as defined in https://datatracker.ietf.org/doc/html/rfc8446#section-4.1.2.
                + self._raw_bytes
            )
        else:
            return self._raw_bytes

    @property
    def cipher_suites(self) -> list[int]:
        """The cipher suites offered by the client (as raw ints)."""
        return self._client_hello.cipher_suites.cipher_suites

    @property
    def sni(self) -> str | None:
        """
        The [Server Name Indication](https://en.wikipedia.org/wiki/Server_Name_Indication),
        which indicates which hostname the client wants to connect to.
        """
        if ext := getattr(self._client_hello, "extensions", None):
            for extension in ext.extensions:
                is_valid_sni_extension = (
                    extension.type == 0x00
                    and len(extension.body.server_names) == 1
                    and extension.body.server_names[0].name_type == 0
                    and check.is_valid_host(extension.body.server_names[0].host_name)
                )
                if is_valid_sni_extension:
                    return extension.body.server_names[0].host_name.decode("ascii")
        return None

    @property
    def alpn_protocols(self) -> list[bytes]:
        """
        The application layer protocols offered by the client as part of the
        [ALPN](https://en.wikipedia.org/wiki/Application-Layer_Protocol_Negotiation)
        TLS extension.
        """
        if ext := getattr(self._client_hello, "extensions", None):
            for extension in ext.extensions:
                if extension.type == 0x10:
                    return list(x.name for x in extension.body.alpn_protocols)
        return []

    @property
    def extensions(self) -> list[tuple[int, bytes]]:
        """The raw list of extensions in the form of `(extension_type, raw_bytes)` tuples."""
        ret = []
        if ext := getattr(self._client_hello, "extensions", None):
            for extension in ext.extensions:
                body = getattr(extension, "_raw_body", extension.body)
                ret.append((extension.type, body))
        return ret

    def __repr__(self):
        return f"ClientHello(sni: {self.sni}, alpn_protocols: {self.alpn_protocols})"


@dataclass
class ClientHelloData:
    """
    Event data for `tls_clienthello` event hooks.
    """

    context: context.Context
    """The context object for this connection."""
    client_hello: ClientHello
    """The entire parsed TLS ClientHello."""
    ignore_connection: bool = False
    """
    If set to `True`, do not intercept this connection and forward encrypted contents unmodified.
    """
    establish_server_tls_first: bool = False
    """
    If set to `True`, pause this handshake and establish TLS with an upstream server first.
    This makes it possible to process the server certificate when generating an
    interception certificate.
    """


@dataclass
class TlsData:
    """
    Event data for `tls_start_client`, `tls_start_server`, and `tls_handshake` event hooks.
    """

    conn: connection.Connection
    """The affected connection."""
    context: context.Context
    """The context object for this connection."""
    ssl_conn: SSL.Connection | None = None
    """
    The associated pyOpenSSL `SSL.Connection` object.
    This will be set by an addon in the `tls_start_*` event hooks.
    """
    is_dtls: bool = False
    """
    If set to `True`, indicates that it is a DTLS event.
    """
# while var = method()
#   code
# end
# ==>
# _while_var_X = method()
# var = _while_var_X
# while _while_var_X:
#   code
#   _while_var_X = method()
#   var = _while_var_X

require_relative 'def'

$while_var_id = 0

class WhileNode
  # Handles assignment inside condition
  def get_result
    statements.expect_class StatementListNode
    statements.get_result

    # if the whole condition is inside parens, remove them
    if condition.is_a? BeginNode
      condition.expect_len 1
      self.condition = condition.child
    end

    unless condition.is_a? ExpressionIsAlsoExpressionInPython
      # find current statement list
      statements_list, statement_list_child = current_statement_list

      # before while:
      # _while_var_X = method()
      # var = _while_var_X
      condition_expr = nil
      while_var_mask_result do
        condition_expr = condition.get_result
        statements_list.late_insert before: statement_list_child, node: condition_expr
      end

      # while _while_var_X:
      self.condition = while_var.deep_copy

      # add again at the end of the statement list:
      # _while_var_X = method()
      # var = _while_var_X
      while_var_mask_result do
        for node in condition_expr
          statements.add_child node.deep_copy
        end
      end
    end

    return self
  end

  def while_var
    @while_var ||= LocalVariableNode.new ruby_node, [SimpleName.new(while_var_name)]
  end

  def while_var_name
    @while_var_name ||= "_while_var_#{$while_var_id += 1}"
  end

  def while_var_mask_result
    backup = $result_name, $result_var
    $result_name, $result_var = while_var_name, while_var
    yield
    $result_name, $result_var = backup
  end
end
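The transform in the header comment mirrors Ruby's assignment-in-condition semantics, which can be checked standalone. This is an illustration of the runtime behaviour being translated, not part of the translator itself:

```ruby
# Ruby's `while item = queue.shift` keeps looping until shift returns nil.
# The translator above emulates this in Python by hoisting the assignment
# before the loop and repeating it at the end of the loop body.
queue = [1, 2, nil, 3]
results = []
while (item = queue.shift)
  results << item
end
# The loop stops at the nil, leaving the 3 unconsumed.
raise "unexpected results" unless results == [1, 2]
```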
This paper describes a set of techniques for improving access to Virtual Reality Modeling Language (VRML) environments for people with disabilities. These range from simple textual additions to the VRML file to scripts which aid in the creation of more accessible worlds. We also propose an initial set of guidelines authors can use to improve VRML accessibility.

VRML, virtual environments, navigational aids, accessibility, audio feedback, data access, speech input, user interfaces.

In the introduction to a special section on Computers and People with Disabilities in Communications of the ACM, the authors point out: "When one looks at the data, it is surprising to see how large a segment of the population of the United States has some form of disability. In 1990 the National Science Foundation formed a Task Force on Persons with Disabilities to determine how NSF could best promote programs in this area. As this report points out, over half a million Americans are legally blind - this means that their visual acuity, with correcting lenses, is no more than 20/200 in the better eye. Furthermore, of the approximately 5,000,000 scientists and engineers in the U.S., it is estimated that as many as 100,000 have some form of physical disability."

In general, a modest amount of additional work must be done in order to make a VRML world accessible. However, it is our experience that in making worlds more accessible for people with disabilities, the worlds become more usable by all. For example, in the course of conducting this research, we came upon the description field of the Anchor node. Adding the descriptions for our VRML Miter Saw solved a long-standing annoyance of not clearly seeing what object the cursor was on. The additional text, now part of the description field, is displayed by the browser. The improved accessibility of our VRML world improved the user interface for all people.

3. BACKGROUND AND RELATED WORK

There are many types of disabilities; indeed it has been said that "we are all disabled, it is just a matter of degree." Visual impairments, hearing loss or deafness, motor control impairments, speech impediments and cognitive disabilities are all problems for those afflicted. Different media types such as audio, graphics, and video can improve access to information depending on the particular disability. Graphical User Interfaces, the GUIs of the mid 1980s and 90s, have shifted the user's interaction with computers from a primarily text-oriented experience to a point-and-click experience. This shift, along with improved ease of use, has also erected new barriers between people with disabilities and the computer. A variety of devices and software techniques have been developed to improve access. The concept of auditory icons has been pursued for over ten years. Devices which are used to make PCs more accessible range from speech synthesizers and screen readers to magnification software and Braille output devices. A thorough collection of disabilities resources can be found at the WebABLE! web site.

There is some concern over access to VRML for people with disabilities; however, work in this area is quite scarce, with the most notable exception being the work of Treviranus and Serflek at the University of Toronto's Adaptive Technology Resource Centre and its web page "Accessibility and VRML". Many of the concepts, such as aural introductions and the use of embedded textual descriptions, were addressed by their work.

4. TAXONOMY OF ACCESS TECHNIQUES

We propose the following VRML mechanisms as a starting point for improving access. These mechanisms fall into three categories: textual descriptions, audio cues and spoken descriptions, and keyboard input facilitation. All of these mechanisms use the inherent capabilities of the VRML specification to make the VRML world more accessible.
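As a concrete illustration of the first category, a world can carry embedded text in standard VRML 2.0 nodes. The node and field names below are from the specification; the text values are merely examples in the spirit of the worlds described in this paper:

```vrml
#VRML V2.0 utf8
# Embedded text that browsers and assistive tools can display or speak.
WorldInfo {
  title "Audible Assembly Line"
  info [ "An assembly line world with spoken object descriptions." ]
}
Anchor {
  description "Miter saw: blade guard"
  children [ Shape { geometry Box { } } ]
}
```

The WorldInfo title/info strings describe the world as a whole, while each Anchor description labels the object it encloses.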
4.1 Textual Descriptions

[Figure: VRML world with WorldInfo description for an object (note: figure altered for printing purposes)]

4.2 Audio Cues and Spoken Descriptions

Audio provides a set of rich capabilities to improve access to a VRML world. The three types of audio we examine here are ambient background music, spoken descriptions, and speech synthesis. Ambient music can play and change as the user moves from one room to another, providing a subtle yet intuitive location cue. Spoken descriptions of objects can play as the viewer moves close to an object. Speech synthesizers can "read" embedded text. Given the availability of a speech synthesizer, text from Anchor node descriptions or WorldInfo nodes can be spoken. (We demonstrate this with our speakWorldInfo utility described in the section VRML Access Utilities.) Internet-accessible speech synthesizers such as the Bell Labs Text-to-Speech system provide easy access to speech synthesis. One under-utilized capability is the description field in the AudioClip node. AudioClips contain the pointer to the actual sound files, and in addition the node contains a description field which can be used as a textual description of the sound. Unfortunately, currently implemented VRML browsers do not take advantage of this information.

[Figure: An overhead view of the line with proximity sensors]
[Figure: Spatialized audio with proximity sensors]

4.3 Keyboard Input Facilitation

Keyboard mappings, the ability to perform application functions simply by using a keyboard rather than a mouse, are an important enabling technology. VRML browsers provide some aid in this domain, albeit minimal. A common keyboard equivalence is to map the PageUp and PageDown keys to allow users to step to the next or previous viewpoint. Viewpoints, however, must be defined as part of the world, an all too infrequent occurrence.
In addition to viewpoint selection, the arrow keys can be used to rotate the object when in examiner mode, and to travel when in walk mode. The specific examples cited above are for CosmoPlayer; each VRML browser behaves slightly differently. Consistent keyboard mappings, and their subsequent behavioral effects in the VRML world, can provide an important accessibility capability for a VRML browser.

5. AUTHORING GUIDELINES

As we have shown, there are several ways to make VRML worlds accessible to the visually and physically impaired. The addition of embedded text, sounds and assistive devices such as speech recognition systems all contribute to more accessible virtual worlds. Web designers wishing to make their VRML worlds more accessible should:

6. VRML ACCESS UTILITIES

We have developed several utilities to assist in the creation of accessible VRML worlds. (The source code for all of these is freely available to the public on our web site.) They are showVP, addSndToVrml, and speakWorldInfo. Following are descriptions of each utility:

SYNOPSIS: showVP input.wrl
SYNOPSIS: addSndToVrml mapFile input.wrl (steps for adding proximity-triggered sound files)
SYNOPSIS: speakWorldInfo input.wrl

We have created two examples of accessible VRML worlds. One, the Audible Assembly Line, is representative of an environment intended for "walk" mode. The other environment, the Talking Miter Saw, is intended for "examiner" mode. Both worlds are available at the OVRT web site. In the case of the miter saw, object descriptions appear in the browser's window because of the description field of the Anchor node. The name of the part being selected is spoken by passing the string to a speech synthesizer.

8. CONCLUSIONS AND FUTURE WORK

While we have discussed and illustrated how to create accessible worlds through the addition of audio content, VRML browsers should also be capable of accepting additional audio information.
For example, the Anchor node in the VRML 2.0 specification contains a "parameter" field, intended for use by the VRML or HTML browser. The parameter field, an MFString, contains keyword=value pairs. One could easily define a spokenText=text pair which would instruct the browser, upon selection of the Anchor node, to speak the text. A more thorough discussion of browser issues is in the Serflek and Treviranus paper cited previously, which points out issues such as keyboard equivalences and alternative input devices.

We would like to thank Mike Paciello of the Yuri Rubinsky Insight Foundation (www.yuri.org) for his encouragement and support for these concepts. Thanks to Sharon Laskowski for her ruthless editing, which improved this paper ten-fold. The authors would also like to acknowledge the continued support of the NIST Systems Integration for Manufacturing Applications (SIMA) Program for making this research possible.

Gilnert, E. and York, B. Introduction to the Special Section on Computers and People with Disabilities. Communications of the ACM, Vol. 35, No. 5, 1992.
VRML 2.0 Specification, ISO/IEC CD 14772, 1996.
Brown, C. Assistive Technology: Computers and Persons with Disabilities. Communications of the ACM, Vol. 35, No. 5, May 1992.
Ressler, S. Approaches using virtual environments with Mosaic. In The Second International WWW Conference '94: Mosaic and the Web, volume 2, pages 853-860, 1994.
Mars Pathfinder VRML models.
Gaver, W.W. (1986). Auditory Icons: Using sound in computer interfaces. Human-Computer Interaction, 2, 167-177.
Serflek, C. and Treviranus, J. "VRML: Shouldn't Virtual Ramps be Easier to Build."
Tanenblatt, Bell Labs Text-to-Speech System web site, URL: http://www.bell-labs.com/project/tts/.
Dragon Systems, Dragon Dictate Personal Edition, URL: http://www.dragonsys.com/.
Ressler, S. The Open Virtual Reality Testbed.
I was wondering if it is possible to extend existing nodes with new functionality, where forking a node and sending a pull request for added functionality is not always a preferred option, nor would forking and maintaining your own updated version of the node be. While this is quite generic and might sound vague, I have use cases where currently I'm just writing custom nodes for every situation, every specific website... and it's not very efficient, as I can do most of the interaction I need through existing nodes.

For example, when parsing HTML or XML, most of the time I only need a specific part of it. An often-given suggestion on the forums is to convert it to a JS object through the XML node (internally through xml2js), then call a JSONata expression on it to get the information you need out again. Coming from a (black box) test engineering background, I'm not exactly looking forward to that solution. In my automations, I often deal(t) with sites under third-party control where I don't have access to the source code, as half of it is generated, and with every new iteration attributes or the order of elements change around (including up and down the tree). For me, when dealing with these situations, XPath is my preferred solution. I've had to deal with web applications so unstable that, to create a stable test automation (one that will survive a couple of iterations of development and rollout to production without having to change the code over and over), a text search on the page had to be run to find connected elements, then go up and down through descendants/ascendants to find the correct elements/information. For one in particular, I wrote the automation 3 years ago, and with the client rolling out updates every 2 weeks, that particular test (a very detailed search form where running the test manually takes 2-3 hours) is still working without any changes to my code.

One of my use cases involves getting information from specific webpages of grocery stores in my area.
Each of these stores uses its own kind of website setup. Some use (internal) JSON-based APIs that populate the page, and thus getting info out is as easy as connecting to that JSON API instead. Others render everything on the server into the webpage, so parsing the entire page is needed. This use case is based on my worsening mobility, and the difficulty of getting my needed groceries every day. Since most of the stores in the area support delivery above minimum order amounts, it's a viable solution. Because of my worsening health in general, I won't be able to fiddle with flows on the bad days, and that's exactly when things always go wrong. So a solution that is as stable as possible is preferred. Instead of writing custom nodes for all these specific cases, there are a couple of options for me, including 1-liner function nodes connected to switch and change nodes, with the code from those function nodes calling libraries made available through settings.js, or executing local scripts that send the output back to Node-RED. If something needs to be changed in a flow like that, the result quickly becomes a maintainability hell. At this point the best solution feels like being able to add rule types/formatting to the core. Coming from an OOP background, the ability to have custom nodes be an extension of an existing node with specific functionality added/changed sounds, frankly, amazing. But looking at the core design of NR and nodes, I have not a clue if this can even be implemented at all, let alone without becoming a huge breaking change for existing nodes.
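As an illustration of the XPath-style extraction preferred above, here is a minimal sketch using Python's standard library (a stand-in only - Node-RED itself would use a JavaScript XPath library, and the HTML structure below is invented for the example):

```python
import xml.etree.ElementTree as ET

# Invented, well-formed markup standing in for a grocery store page.
page = """
<html>
  <body>
    <div class="product">
      <span class="name">Milk</span>
      <span class="price">1.09</span>
    </div>
    <div class="product">
      <span class="name">Bread</span>
      <span class="price">2.49</span>
    </div>
  </body>
</html>
"""

root = ET.fromstring(page)

# ElementTree supports a limited XPath subset: select every product div
# anywhere in the tree, then pick out child spans by attribute value.
products = []
for div in root.findall(".//div[@class='product']"):
    name = div.find("span[@class='name']").text
    price = float(div.find("span[@class='price']").text)
    products.append((name, price))

print(products)  # [('Milk', 1.09), ('Bread', 2.49)]
```

The point of the structural queries is exactly what the post argues: selecting by class/role survives cosmetic reshuffles of the page far better than positional parsing does.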
Can I just connect two D-Link routers together, or will this not work? After a bit of apprehension, I really like the D-Link router since it was so much easier to set up - but I will have to get another Linksys if there are no options to expand. Here is my understanding (if it's completely off base, please someone else help HyperGeek): You need a switch, which you would then plug into one of the ports on the router. You then plug any extra devices into the switch. If you have two routers, you should be able to connect them together and then turn off the routing functions on the second router. This will allow the second router to operate as a switch - I suspect this is what your Linksys was letting you do. (Edited to make it more clear.) (1) just connect them together; maybe the ports are auto-sensing. Most that I've seen are not. (2) connect an uplink port on one to a normal port on the other. (3) if there isn't an uplink port, then connect two normal ports together using a crossover cable. The DI-604 model actually has a 4-port switch built in. Routers that you get for home networks are generally a router and a switch in one box. Otherwise people would have to buy a switch as well. So does this mean I can just plug another one up to it using one of the four ports? Yes, and I believe it will still work even if both routers are routing. But the second one just needs to function as a switch, and you should make sure that's what it's doing. (Consult the manual, or Google, or D-Link I guess.) It sounds like (from another post) that you just have to plug it in. Hope we helped. I was actually just reading about this same situation recently. I'm definitely not an expert though. 2000 $ for 2 megs of RAM, 1800 $ for a 20-meg HD, 550 $ for a 128K floppy drive, 249 $ for a dual-button joystick and 10 $ for an 800K diskette? At that time, a loaf of sliced bread was 35 cents and I paid 125 $ a month for a decent 6-room apartment.
getting more grey hairs... no wonder I slide off topic... ;) My first PC was a 386, 20 MHz from Gateway, 8 megs RAM, 65-meg hard drive (anyone remember RLL drives?)... $2800 in 1990. 1988 - 286/12, 32Meg RAM, 40Meg HD, 9600 baud modem. The full-tilt boogie machine for all of your BBS and ANSI designing needs. I agree with Cap. The two routers will work, but there would be (2) networks if both are functioning as a router. I have an older Linksys but can turn off the "Gateway" aspect, which leaves the router function; that may work if it is the internal one and if you also turn off the DHCP (dumb it down). I don't have a D-Link, but I was able to configure a Belkin Wireless Router to act as an Access Point, which turns it into a hub/switch. With it acting as a router I can get outside access but not access to the local network without going to the outside and then back in. Might have gotten it to work but really didn't take the time. As said, some switches/routers have auto-sensing, which allows you to connect the two without a crossover cable, but crossover cables are available at most computer stores, or someone with the knowledge could make one. Extra techie info: Crossover cable = end1 (WhiteOrange, Orange, WhiteGreen, Blue, WhiteBlue, Green, WhiteBrown, Brown) -> end2 (WhiteGreen, Green, WhiteOrange, Blue, WhiteBlue, Orange, WhiteBrown, Brown) - in other words, swap Orange and Green. Hope you can get it working.
The UK Government’s National Technical Authority for Physical and Personnel Protective Security. CPNI’s role is to protect UK national security. We help to reduce the vulnerability of the UK to a variety of threats such as terrorism, espionage and sabotage. CPNI works with partners in government, police, industry and academia to reduce the vulnerability of the national infrastructure. Contact us for general, non-recruitment-related enquiries and to provide feedback about this website. An overview of how taking a security-minded approach will help your organisation. Implementing effective protective security measures will help to protect your organisation from threats. Our advice & guidance outlines the steps to take when implementing protective security measures. We provide information and resources to support the promotion of security awareness across your organisation. CPNI has developed a series of security awareness campaigns, designed to provide organisations with the complete range of materials they need. The CPNI blog provides thought leadership, latest news and updates on protective security. This digital learning will provide you with a solid foundation in this subject area and can act as a springboard to CPNI’s extensive guidance on helping your business effectively manage insider risk. 'Think Before You Link' will help you to protect yourself, your colleagues, and your organisation from the harmful impact of online malicious profiles. This chapter lists impact-rated vehicle security barriers (VSBs) that have been tested to publicly available vehicle impact test standards. Due care should be taken when selecting and specifying VSBs. Please also read the further HVM guidance.
Perimeter Barriers | Modular
Manufacturer: Block Axess (Group Klözmann)
Standard tested to: IWA 14-1:2013
Performance rating: V/3500[N1]/48/12.35

Street Furniture | Planters
Manufacturer: Quick Block Ltd
Performance rating: V/1500[M1]/48/90:0.9
Performance rating: V/1500[M1]/48/90:1.7
Performance rating: V/1500[M1]/48/90:4.2

Manufacturer: Logic Manufacture Bespoke
Performance rating: V/1500[M1]/48/90:0.4

Manufacturer: Bailey Streetscene Ltd
Performance rating: V/2500[N1G]/48/90:0.0

Gates | Swing
Manufacturer: Cova Security Gates Limited

Blockers | Retractable
Manufacturer: Eagle Automation Ltd
Performance rating: V/7200[N2A]/80/90:0.6

Street Furniture | Cycle rack

Street Furniture | Other
Performance rating: V/3500[N1]/48/90:2.8
Performance rating: V/2500[N1G]/64/90:5.6

Street Furniture | Railing
Manufacturer: Asset & Frontline Security Systems Ltd
Performance rating: V/1500[M1]/48/90:0.0

Street Furniture | Seating
Performance rating: V/1500[M1]/48/90:5.5
Manufacturer: Safetyflex Barriers
Performance rating: V/7200[N2A]/64/90:6.1
NASSCOM has identified that engineering students in India need to gain experience to improve their employability. AICTE, in its 2018 model curriculum, recommends that every student obtain 14-20 credits (equivalent to 320-480 productive hours) of internships over the duration of their 4-year engineering course. Students are encouraged to participate in internship programs in blocks of 6-8 weeks from the 1st year of their engineering course. However, students and colleges are constrained by capacity at organizations, varying academic schedules and learning skills. The Student Remote Internship Program (SRIP) by the Software Engineering Research Center (SERC), IIIT Hyderabad addresses the mentioned constraints by providing opportunities for students to work on live open source projects. SRIP is designed to bridge the gap existing between academic learning and the application of that learning in industry settings. Students will be mentored to use industry-relevant technologies while contributing to open source repositories. Apart from offering the flexibility to work any time and anywhere, SRIP is also expected to improve self-confidence and problem-solving skills in students, making them industry ready.

Software applications are evolving from proprietary, monolithic architectures to open-source, API-based architectures. Open source applications have contributions from developers across the world, round the clock, on GitHub, GitLab and other repositories. You may work on some of these during the internship:
- Programming Languages - Python, Java and others
- Database - MySQL, MariaDB, MongoDB, etc.
- Operating System - Ubuntu and other Linux platforms
- Methodologies - Product Quality Metrics, Waterfall, Agile and DevOps

SRIP students will collaborate with mentors and repository owners across the globe, which will help them learn working practices leading to a better understanding of the requirements of employment.
Some of the behavioral traits that are expected to improve with SRIP are:
- Communication Skills
- Working as a Team
- Time Management
- Problem Solving
- Self Confidence and Proactiveness

Batch I:
- May 11th, 2019 - May 12th, 2019 - Bootcamp at IIIT Hyderabad Campus
- May 13th, 2019 - July 28th, 2019 - Contribution to SRIP projects

Batch II:
- May 25th, 2019 - May 26th, 2019 - Bootcamp at IIIT Hyderabad Campus
- May 27th, 2019 - Aug 9th, 2019 - Contribution to SRIP projects

- Students will get guidance from 10:00 am to 10:00 pm, Mon-Fri, and 10:00 am to 2:00 pm, Sat, on the collaboration platform.
- Students will be evaluated on a weekly basis and will have access to view their scoreboard.
- Six weeks (240 hours) is the maximum effort a student is expected to expend during the period of internship.
- A certificate from SERC, IIITH will be awarded to students only on successful completion of the internship program.
- Online Registration (closed)
- Selection of Interns
- Payment of internship fee
- Students will be selected on a 'first come, first served' basis.

Last date for Application Registration / Last date for Payment of Internship Fees: May 6th, 2019 (Closed)

- Fee: INR 6,000 + GST as applicable. The online payment details will be shared in the acceptance (shortlisted) mail.
- Applications will be processed in the order received.
- Please note the fees are for incidental expenses and only cover the cost of running these exclusive programs.
- SRIP is a remote internship with in-person interaction only during the bootcamp. No accommodation, or any kind of support for accommodation, will be provided during the bootcamp or thereafter. Selected interns are expected to make their own arrangements for their stay during the bootcamp.
- Registration for the program does not by itself guarantee an internship. Internships will be allocated on a first come, first served basis and on successful payment.
- Students will not be allowed to carry the internship program beyond the specified last date of their respective batches.
- Six weeks (240 hours) is the maximum effort a student will expend during the period of internship. For example, for Batch I, 240 hours of total effort (coding and/or testing) needs to be expended by July 28th, 2019. For Batch II, 240 hours needs to be expended by August 9th, 2019.
- Once enrolled, no refund will be provided for any reason. No requests for concession will be entertained either.
- A certificate will be awarded to interns only on successful completion of the internship.
- Students will have the opportunity to contribute to various live open source projects which use diverse technologies.
- Online training material will be made available to students.
- Students are expected to become proficient in the technologies of the projects they are assigned.
- While applying, students are expected to mention their interest in various technologies or domains. The program will consider these preferences as much as possible while allocating the open source projects.
- Mentors will be available during the program to guide students. However, students are expected to strive to solve problems on their own as best they can before contacting mentors.
- SRIP will publish scoreboards regularly, which will help interns know their progress at various phases of the program.
- Students who are self-driven will gain the maximum out of the internship. Please focus on the internship with consistent interest and motivation.
- The program is responsible only for the conduct of remote internships. No laptops, desktops or other gadgets will be provided, and the program is not responsible for any gadget losses.

CSE, University College of Engineering, Osmania University: "It is a wonderful start for freshers like me and provides experience to excel in their core domain.
Aligning with the skills acquired through this, I developed a portal for students to submit their assignments and was successful in it. Glad that I was able to apply my knowledge practically." IT, Vasavi College of Engineering: "All-round development is the perfect description of this internship - starting from learning to be adaptive, managing time and teamwork to developing debugging skills and simulating CPU scheduling, everything gave a tremendous kick start to my career." "One amazing learning experience! Managing academics, the internship and personal life helped me become good at multi-tasking, and that's my biggest take-away."
Why is my "mood cue" not working? I am new to Arduino, and I am working through the "Arduino Projects Book" with the Arduino Uno. I am working on project 5, "the mood cue". It is basically a potentiometer-controlled servo. I am certain that I have the code for it correct, and certain that I have all the wiring correct. I have the potentiometer value and the angle at which the potentiometer is turned printed on the serial monitor. I upload the code, and the serial monitor begins printing correctly, changing when I twist the potentiometer. But the servo doesn't move. It lets off a faint buzzing sound, and that is it. Does anyone know what the problem is? I don't know what I have done wrong. Do you think my servo is broken? Here is my code:

```cpp
#include <Servo.h>

Servo myServo;

int const potPin = A0;
int potVal;
int angle;

void setup() {
  myServo.attach(9);
  Serial.begin(9600);
}

void loop() {
  potVal = analogRead(potPin);
  Serial.print("potVal: ");
  Serial.print(potVal);
  angle = map(potVal, 0, 1023, 0, 179);
  Serial.print(" angle: ");
  Serial.print(angle);
  myServo.write(angle);
  delay(15);
}
```

Here is my wiring: file:///C:/Users/matth_000/Downloads/Arduino%20PotServ.jpg

Maybe you have your servo wired wrong. Please post a photograph of your setup. Buzzing, non-moving motors usually means not enough voltage and/or current. file:///C:/Users/matth_000/Downloads/Arduino%20PotServ.jpg is not gonna help much... The classic way of wiring up motors, including servos, is to provide them with separate power. The image below should clarify this: The batteries (in this case) have the sole function of providing power to the servo motor. The "data" pin however goes to the Arduino. The Arduino itself is powered by the USB cable in this example. The important point is the blue wire - the shared ground wire. Both the Arduino and the servo (and batteries) must have a common ground.
If you're using the components that came in the starter kit, the servo is wired differently in the book than it is in the actual kit. Make sure the red wire on the servo is connected to power, the black to ground and the white to data. The specs are also written on the side of the servo motor. In the project book, the wires went into these rows on the breadboard: GND, 5V, Signal. Whereas for me it was: GND, Signal, 5V. Hope this helps! I'm new to Arduino and I had the same problem of the servo buzzing but not moving. I found the reason was that the servo wires came in a different order than what was written in the book. After correctly connecting the 5V, GND, and Signal by moving a few wires on the breadboard, things worked as planned. Hope this helps. At first, I had similar behavior with the servo motor in this project. The servo buzzed and either did not move or made very slight erratic movements. The problem in my case was that I had wrongly placed the wire that is supposed to connect the potentiometer to pin A0. Here's how my board looked after I corrected the problem:
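Separately from the wiring, the map() call in the question's code does an integer linear scaling from the potentiometer's 0-1023 ADC range to the servo's 0-179 degree range. A minimal Python sketch of the same arithmetic (mirroring Arduino's integer map() formula) makes the conversion concrete:

```python
def arduino_map(x, in_min, in_max, out_min, out_max):
    # Integer linear interpolation, matching Arduino's map() for
    # non-negative inputs (C truncating division == Python // here).
    return (x - in_min) * (out_max - out_min) // (in_max - in_min) + in_min

# A potentiometer reading of 0..1023 scales to a servo angle of 0..179.
print(arduino_map(0, 0, 1023, 0, 179))     # 0
print(arduino_map(512, 0, 1023, 0, 179))   # 89
print(arduino_map(1023, 0, 1023, 0, 179))  # 179
```

So a mid-turn pot reading of ~512 should print an angle near 89-90 on the serial monitor, which matches what the questioner observes even while the servo itself stays stalled - pointing at power/wiring rather than the code.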
It is not difficult to find questions about evaluating limits without the use of l'Hôpital's rule. As long as the function is differentiable, not directly reducible and tends towards an indeterminate form, why would anyone want to avoid such a useful tool? Personally, I'm against any calculus technique that can be applied without (much) thinking. These days calculus (in North America, at least) is taught in a way that people can get high grades without having the slightest idea of what a derivative or an integral is. In most classes I teach I ask what an integral is, and very rarely do I get satisfactory answers, even from good students. Part of the problem is the lack of basic skills: most students are hopeless when dealing with inequalities, which prevents you from both explaining the definition of limit and doing things like Taylor polynomials. The way I was taught calculus a million years ago was to use Taylor (as opposed to l'Hôpital) for limits. Using Taylor approximations to find the limit allows you to have some understanding of what is going on, in particular in the sense that you are not only finding the limit but also estimating the rate of convergence. This is essential if you are doing numerical analysis, and good knowledge in any case. This conveys more information, makes you think instead of blindly applying a formula, and avoids mistakes like the frequent one of applying l'Hôpital when it is not applicable. In some circumstances questions like "How do I do X without Y" are genuinely intellectual exercises in working without power tools, but in other circumstances they seem more like "I have an aversion to thinking about Y, so let's just do it another way." My impression is that the first group is by far the bigger group in general, but for l'Hôpital's rule specifically, it might be a mix.
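A concrete instance of the Taylor approach described above (a standard textbook example, not taken from the original discussion): to evaluate $\lim_{x\to 0} \frac{\sin x}{x}$, expand the numerator:

```latex
\frac{\sin x}{x}
  = \frac{x - \frac{x^3}{6} + O(x^5)}{x}
  = 1 - \frac{x^2}{6} + O(x^4)
  \longrightarrow 1 \quad (x \to 0)
```

The leading error term $-\frac{x^2}{6}$ also tells you how fast the quotient approaches 1, which is exactly the rate-of-convergence information that l'Hôpital's rule alone does not reveal.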
Anyhow, this idea of not relying on a single route to a solution can be viewed as a positive step in the student's development :) Many students, when finishing a problem through whatever means, would just conclude "Welp, good thing I never have to think about that ever again! No chance that any portion of that problem would ever help out in a future problem, because all math problems are totally disconnected and don't relate to each other or reality. It's not as if there are similar problems where the same approach won't work, requiring me to find an alternate path." OK, they wouldn't think all of this consciously, but really that's how it seems they think sometimes... Anyhow, the positive upshot is that a student who is used to (and recognizes the value of) finding alternate solutions will be more flexible in the long run. This is a bit of a meta answer, but let me explain a bit of why a course might not cover l'Hôpital's rule. When I teach calculus I've skipped l'Hôpital for two reasons. First, understanding when using it is and isn't circular is much more difficult than anything else covered in a Calc 1 class. Gerry gave a great example of a subtle circularity, but there are others. Since students won't be able to understand when they can and can't use it, they shouldn't use it at all. Second, in my experience, learning l'Hôpital's rule causes students to forget everything else they ever learned about limits. In particular, many students will apply it to limits which are not indeterminate! Thus teaching l'Hôpital's rule causes more unlearning than learning, so I'd prefer to spend that time teaching another topic instead. Now you might wonder why a class that never taught l'Hôpital's rule would have students who know l'Hôpital's rule. When I've taught calculus, usually a substantial portion of the class has taken a high school calculus class where they were taught l'Hôpital's rule but don't understand it.
So I then have to give a brief explanation of why I'm not teaching it and why they shouldn't use it on the problems in the class. Mainly because when you first study limits, you are not introduced to l'Hôpital's rule. Most limit questions come from beginners, who have not even studied derivatives. When the time comes to study l'Hôpital's, your interest in limits has generally boiled away. Hence, it is quite natural to see people asking limit questions without l'Hôpital's [perhaps because I have been through that stage]. And also, because many people see it as a challenge to find the limit without advanced techniques. I'm pretty sure I've asked such a question, actually. The simple reason is that I was trying to solve an exercise which was recommended in connection with a class that preceded the one where l'Hôpital's rule was introduced. So I concluded that while the exercise was probably solvable using l'Hôpital's rule (which I wasn't familiar with at that point), it was most likely intended to be solved by other means. And those 'other means' were what I was interested in, and not the limit itself.
Unexpected Token ...

Library Affected: workbox-sw, workbox-build, workbox-cli

Nothing works on Node 7:

```
/workbox-cli/node_modules/@hapi/hoek/lib/deep-equal.js:17
    options = { prototype: true, ...options };
                                 ^^^

SyntaxError: Unexpected token ...
    at Object.exports.runInThisContext (vm.js:78:16)
    at Module._compile (module.js:543:28)
    at Object.Module._extensions..js (module.js:580:10)
    at Module.load (module.js:488:32)
    at tryModuleLoad (module.js:447:12)
    at Function.Module._load (module.js:439:3)
    at Module.require (module.js:498:17)
    at require (internal/module.js:20:19)
    at Object.<anonymous> (/usr/local/lib/node_modules/workbox-cli/node_modules/@hapi/hoek/lib/index.js:9:19)
    at Module._compile (module.js:571:32)
```

This sounds similar to https://github.com/GoogleChrome/workbox/issues/2061 The underlying code comes from the dependency on https://github.com/hapijs/hoek, which doesn't list a specific minimum required version of node. That being said, as per https://node.green, that syntax should be supported in node 7, so I'm not sure why you're running into issues—can you ensure that you're on the latest node 7.x minor release, or alternatively, try upgrading to a more recent version of node?

Don't think node has full support for the spread operator until v8. Specifically, node 7 doesn't handle spreading properties of an object. Can't upgrade node; I'm just trying to upgrade from swPrecache on some legacy apps. The package.json suggests this project works on node 6+; maybe you should update that if you don't support / test on those versions.

Apologies that you're running into this—I'm thinking it's likely due to https://github.com/GoogleChrome/workbox/issues/2043, which made it into Workbox v4.3.1. Does switching to Workbox v4.3.0 resolve the issue for you? Given that Node v8 is the earliest version that is still in the maintenance window, I believe that we'll be bumping things up to that as the required minimum version that we support when Workbox v5 is released.
Switching our Travis CI environment to explicitly test against both the earliest and the latest node releases is a good idea. Nope, 4.3.0 didn't solve it. Looks like you need to use joi v12 if you want to support node 6+: https://github.com/hapijs/joi/issues/1802 Don't really understand why a project that basically generates a text file needs to drop support for old node versions. Okay, thanks for digging into things. https://github.com/GoogleChrome/workbox/pull/1959 was where we upgraded from joi v11 (in order to address the security issue reported at https://github.com/GoogleChrome/workbox/issues/1958). I was not able to find anything in the joi release notes at the time about changes to the minimum required node version, and apologies for the inconvenience that this has caused. (See https://github.com/GoogleChrome/workbox/issues/2094 for the Workbox v5 plan.) Workbox v4.1.0 seems to be the one that was cut right before #1959 was merged, so that should still use joi v11. Appreciate your help on this. I tried a bunch of different versions and only Workbox 3 seems to include joi 11 and work on node 7. Same problem with 4.1.0. I downgraded to 3.6.3 to make it work, but it seems to break the bgsync feature. We're up to Workbox v5 now, which requires node 8 or higher. I'm closing this issue as it's no longer actionable.
Translation plan and progress tracking (version 1.0)

How to help with translation

1. Select a page with :grey_question: not assigned status
2. Write a comment in this issue like "I am taking page-name.md"
3. Create a fork of this repo
4. Be sure that you are making a translation based on the lang-ru branch
5. Create a pull request to this repo with the name [WIP] page-name.md, even if/while the translation is not completed yet (for everyone to know that you are really working on it; and if you leave it, someone could use your starting point and continue, so your work would not be lost in any case)
6. Refer to the previous docs version translation; maybe the docs article you are working on was already translated and you can find some text parts.
7. When you finish a translation, rename the pull request to remove the [WIP] prefix
8. Be sure that your branch is up-to-date with the latest state of this repo's lang-ru and the pull request can be merged without conflicts
9. Make a comment here like "Finished page-name.md <link to pull request>"
10. Someone will review your translation and I'll accept the pull request

Review

Even if you are not very good in English to translate by yourself, or have no time, etc., you still can contribute. If you see a :see_no_evil: review needed label on a page, it means that the translation is completed, but a native speaker should review it in Russian, to track down bad wording, typos or anything that "looks weird" for a Russian native speaker. If you find something, ideally make a fix pull request, or at least create an issue with back links to the "error" lines. If you have reviewed some doc and it looks fine to you, write here in the comments to mark it as done :+1:

Priorities

Pages are ordered in the table as they are ordered in the docs, so it is supposed that they are translated in the same order.
You are free to select any page, but it's better to go one-by-one in the list.

/source/guide

| Page | Translator | Reviewer | Status |
| --- | --- | --- | --- |
| installation.md | | | :grey_question: not assigned |
| index.md | | | :grey_question: not assigned |
| overview.md | | | :grey_question: not assigned |
| instance.md | | | :grey_question: not assigned |
| syntax.md | | | :grey_question: not assigned |
| computed.md | | | :grey_question: not assigned |
| class-and-style.md | | | :grey_question: not assigned |
| conditional.md | | | :grey_question: not assigned |
| list.md | | | :grey_question: not assigned |
| events.md | | | :grey_question: not assigned |
| forms.md | | | :grey_question: not assigned |
| transitions.md | | | :grey_question: not assigned |
| components.md | | | :grey_question: not assigned |
| reactivity.md | | | :grey_question: not assigned |
| custom-directive.md | | | :grey_question: not assigned |
| custom-filter.md | | | :grey_question: not assigned |
| mixins.md | | | :grey_question: not assigned |
| plugins.md | | | :grey_question: not assigned |
| application.md | | | :grey_question: not assigned |
| comparison.md | | | :grey_question: not assigned |

/source/api

| Page | Translator | Reviewer | Status |
| --- | --- | --- | --- |
| index.md | | | :grey_question: not assigned |

/source/examples

| Page | Translator | Reviewer | Status |
| --- | --- | --- | --- |
| all pages description | | | :grey_question: not assigned |

/themes/vue/layout/*.ejs

| Page | Translator | Reviewer | Status |
| --- | --- | --- | --- |
| all pages description | @iJackUA | | :construction: work in progress |
| Setup gh-pages site for 1.0 http://vuejs-ru.github.io/vuejs.org/ | @iJackUA | | :construction: work in progress |

Legend

| Emoji | Status |
| --- | --- |
| :+1: | done |
| :eyes: | in review |
| :see_no_evil: | review needed |
| :construction: | work in progress |
| :grey_question: | not assigned |

So what can we salvage from the pre-1.0 translated docs? @simplesmiler Hard question. I tend to consider everything as a translation from scratch. Just consider that some parts are already translated, and while working on the 1.0 docs, for example "computed.md", refer to 0.12's "computed.md" - maybe some parts could be "reused". But as a general rule it seems to be hard (maybe I am wrong; let's start and then we'll see). Updated the lang-ru branch with the actual docs state (not translated).
Previous progress transferred to <EMAIL_ADDRESS>. In the nearest days I hope to bring back CircleCI for automatic docs site rebuilding, and to transfer back all the Hexo parser features - like the markdown plugin with abbr support, etc. - to try to push it to the main Vue docs repo.
package cologappengine

import (
	"io/ioutil"
	"net/http"

	"github.com/comail/colog"
	"golang.org/x/net/context"
	"google.golang.org/appengine"
	"google.golang.org/appengine/log"
)

// NewCologAppEngine creates a new colog logger for App Engine.
func NewCologAppEngine(w http.ResponseWriter, r *http.Request, prefix string, flag int, lvMap LevelMap) *colog.CoLog {
	l := colog.NewCoLog(w, prefix, flag)
	// Discard colog's own output; the hook below forwards entries to App Engine.
	l.SetOutput(ioutil.Discard)
	h := &cologAppEngineHook{
		ctx:       appengine.NewContext(r),
		formatter: &colog.StdFormatter{},
	}
	if lvMap != nil {
		h.lvMap = lvMap
	} else {
		h.lvMap = defaultCologAppEngineLevelMap
	}
	h.lvs = levelMapKeys(h.lvMap)
	l.AddHook(h)
	return l
}

func levelMapKeys(m LevelMap) []colog.Level {
	keys := make([]colog.Level, len(m))
	i := 0
	for k := range m {
		keys[i] = k
		i++
	}
	return keys
}

// AppEngineLogLevel represents a severity level in App Engine.
type AppEngineLogLevel uint8

const (
	// AppEngineLDebug represents the debug severity level in App Engine.
	AppEngineLDebug AppEngineLogLevel = iota
	// AppEngineLInfo represents the info severity level in App Engine.
	AppEngineLInfo
	// AppEngineLWarning represents the warning severity level in App Engine.
	AppEngineLWarning
	// AppEngineLError represents the error severity level in App Engine.
	AppEngineLError
	// AppEngineLCritical represents the critical severity level in App Engine.
	AppEngineLCritical
)

// LevelMap converts colog levels to App Engine log levels.
type LevelMap map[colog.Level]AppEngineLogLevel

var defaultCologAppEngineLevelMap = LevelMap{
	colog.LTrace:   AppEngineLDebug,
	colog.LDebug:   AppEngineLDebug,
	colog.LInfo:    AppEngineLInfo,
	colog.LWarning: AppEngineLWarning,
	colog.LError:   AppEngineLError,
	colog.LAlert:   AppEngineLCritical,
}

type cologAppEngineHook struct {
	lvs       []colog.Level
	lvMap     LevelMap
	ctx       context.Context
	formatter colog.Formatter
}

// Levels returns the set of levels for which the hook should be triggered.
func (h *cologAppEngineHook) Levels() []colog.Level {
	return h.lvs
}

// Fire converts the entry's log level from colog to App Engine and writes the log.
func (h *cologAppEngineHook) Fire(e *colog.Entry) error {
	// Use the hook's configured map (not the package default) so that a
	// custom LevelMap passed to NewCologAppEngine is honoured here too.
	lv := h.lvMap[e.Level]
	b, err := h.formatter.Format(e)
	if err != nil {
		return err
	}
	msg := string(b)
	// Pass the message as an argument, not as the format string, so that
	// any '%' characters in the log text are not misinterpreted.
	switch lv {
	case AppEngineLDebug:
		log.Debugf(h.ctx, "%s", msg)
	case AppEngineLInfo:
		log.Infof(h.ctx, "%s", msg)
	case AppEngineLWarning:
		log.Warningf(h.ctx, "%s", msg)
	case AppEngineLError:
		log.Errorf(h.ctx, "%s", msg)
	case AppEngineLCritical:
		log.Criticalf(h.ctx, "%s", msg)
	}
	return nil
}
Why do we call clear-subscription-cache! when reloading code with Figwheel? Pour yourself a drink, as this is a circuitous tale involving one of the hardest problems in Computer Science.

1: Humble beginnings

When React is rendering, if an exception is thrown, it doesn't catch or handle the errors gracefully. Instead, all of the React components up to the root are destroyed. When these components are destroyed, none of their standard lifecycle methods are called, like componentWillUnmount.

2: Simple assumptions

Reagent tracks the watchers of a Reaction to know when no-one is watching and it can call the Reaction's on-dispose. Part of the book-keeping involved in this requires running the on-dispose in a React lifecycle method. At this point, your spidey senses are probably tingling.

3: The hardest problem in CS

re-frame subscriptions are created as Reactions. re-frame helpfully deduplicates subscriptions if multiple parts of the view request the same subscription. This is a big efficiency boost. When re-frame creates the subscription Reaction, it sets the on-dispose method of that subscription to remove itself from the subscription cache. This means that when that subscription isn't being watched by any part of the view, it can be disposed.

4: The gnarly implications

If you are:
- Writing a re-frame app
- Writing a bug in your subscription code (your one bug for the year)
- Which causes an exception to be thrown in your rendering code

then:
- React will destroy all of the components in your view without calling their unmount lifecycle methods.
- Reagent will not get notified that some subscriptions are not needed anymore.
- The subscription on-dispose functions that should have been run, are not.
- re-frame's subscription cache will not be invalidated correctly, and the subscription with the bug is still in the cache.

At this point you are looking at a blank screen. After debugging, you find the problem and fix it. You save your code and Figwheel recompiles and reloads the changed code. Figwheel attempts to re-render from the root.
This causes all of the Reagent views to be rendered and to request re-frame subscriptions if they need them. Because the old buggy subscription is still sitting around in the cache, re-frame will return that subscription instead of creating a new one based on the fixed code. The only way around this (once you realise what is going on) is to reload the page. re-frame 0.9.0 provides a new function: re-frame.core/clear-subscription-cache! which will run the on-dispose function for every subscription in the cache, emptying the cache, and causing new subscriptions to be created after reloading.
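A minimal sketch of wiring this into a Figwheel workflow (the `my-app.core` namespace, `main-panel`, `mount-root`, and `on-js-reload` names here are illustrative — only `re-frame.core/clear-subscription-cache!` is the actual re-frame 0.9.0 API): point Figwheel's `:on-jsload` hook at a reload function that empties the cache before re-mounting the root view.

```clojure
;; project.clj fragment (build id and namespace are hypothetical):
;;   :figwheel {:on-jsload "my-app.core/on-js-reload"}

(ns my-app.core
  (:require [reagent.core :as reagent]
            [re-frame.core :as re-frame]))

(defn main-panel []
  [:div "Hello"])

(defn mount-root []
  (reagent/render [main-panel]
                  (.getElementById js/document "app")))

(defn ^:export on-js-reload []
  ;; Dispose every cached subscription so that Reactions created by the
  ;; old (possibly buggy) code are re-created from the freshly loaded code.
  (re-frame/clear-subscription-cache!)
  (mount-root))
```

Calling clear-subscription-cache! on every reload is cheap, since the subscriptions that are still needed are simply re-created on the next render.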
Native Development Kit README
NDK 1.00
-----------------------------

0. PREAMBLE

0.1 COPYRIGHT

The NDK is Copyright ©2005-2008 Alex Ionescu.

0.2 CONTACT INFORMATION

The author, Alex Ionescu, may be reached through the following means:

Email: firstname.lastname@example.org
Mail: 1411 du Fort, #1207. H3H 2N7. Montreal, QC. CANADA.
Phone: 1-(514)-581-7156

1. LICENSE

1.1 OPEN SOURCE USAGE

Open Source Projects may choose to use the following licenses:

GNU GENERAL PUBLIC LICENSE Version 2, June 1991

OR

GNU LESSER GENERAL PUBLIC LICENSE Version 2.1, February 1999

OR

EITHER of the aforementioned licenses AND (at your option) any later version of the above said licenses.

1.2 LICENSE LIMITATIONS

The choice is yours to make based on the license which is most compatible with your software. You MUST read GPL.TXT or LGPL.TXT after your decision. Violating your chosen license voids your usage rights of the NDK and will lead to legal action on the part of the author.

Using this software with any later version of the GNU GPL or LGPL in no way changes your obligations under the versions listed above. You MUST still release the NDK and its changes under the terms of the original licenses (either GPLv2 or LGPLv2.1) as listed above. This DOES NOT AFFECT the license of a software package released under a later version and ONLY serves to clarify that using the NDK with a later version is permitted provided the aforementioned terms are met.

If your Open Source product does not use a license which is compatible with the ones listed above, please contact the author to reach a mutual agreement to find a better solution for your product. Alternatively, you may choose to use the Proprietary Usage license displayed below in section 1.3.

If you are unsure of whether or not your product qualifies as an Open Source product, please contact the Free Software Foundation, or visit their website at www.fsf.org.
1.3 PROPRIETARY USAGE

Because it may be undesirable or impossible to adapt this software to your commercial and/or proprietary product(s) and/or service(s) using a (L)GPL license, proprietary products are free to use the following license:

NDK LICENSE Version 1, November 2005

You MUST read NDK.TXT for the full text of this license. Violating your chosen license voids your usage rights of the NDK, constitutes a copyright violation, and will lead to legal action on the part of the author. If you are unsure of or have any questions about the NDK License, please contact the author for further clarification.

2. ORIGINS OF NDK MATERIAL, AND ADDING YOUR OWN

2.1 CONTRIBUTIONS AND SOURCES

The NDK could not exist without the various contributions made by a variety of people and sources. The following public sources of information were lawfully used:

- GNU NTIFS.H, Revision 43
- W32API, Version 2.5
- Microsoft Windows Driver Kit 6001
- Microsoft Windows Driver Kit 6000
- Microsoft Driver Development Kit 2003 SP1
- Microsoft Driver Development Kit 2000
- Microsoft Driver Development Kit NT 4
- Microsoft Driver Development Kit WinME
- Microsoft Installable File Systems Kit 2003 SP1
- Microsoft Windows Debugger (WinDBG) 6.5.0003.7 and later
- Microsoft Public Symbolic Data
- Microsoft Public Windows Binaries (strings)
- OSR Technical Articles
- Undocumented Windows 2000 Secrets, a Programmer's Cookbook
- Windows NT/2000 Native API Reference
- Windows NT File System Internals
- Windows Internals I - II
- Windows Internals 4th Edition

If the information contained in these sources was copyrighted, the information was not copied, but simply used as a basis for developing a compatible and identical definition. No information protected by a patent or NDA was used. All information was publicly located through the Internet or purchased or licensed for lawful use.
Additionally, the following people contributed to the NDK:

- Art Yerkes
- Eric Kohl
- Filip Navara
- Steven Edwards

2.2 BECOMING A CONTRIBUTOR

To contribute information to the NDK, simply contact the author with your new structure, definition, enumeration, or prototype. Please make sure that your addition is:

1) Actually correct!
2) Present in Windows NT 5, 5.1, 5.2 and/or 6.0.
3) Not already accessible through another public header in the DDK, IFS, WDK and/or PSDK.
4) From a publicly verifiable source. The author needs to be able to search for your addition in a public information location (book, Internet, etc.) and locate this definition.
5) Not reversed. Reversing a type is STRONGLY discouraged and a reversed type will more than likely not be accepted, due to the fact that functionality and naming will be entirely guessed, and things like unions are almost impossible to determine. It can also bring up possible legal ramifications depending on your location. However, using a tool to dump the strings inside an executable for the purpose of locating the actual name or definition of a structure (sometimes possible due to ASSERTs or debugging strings) is considered 'fair use' and will be a likely candidate.

If your addition satisfies these points, then please submit it, and also include whether or not you would like to be credited for it.

3. USAGE

3.1 ORGANIZATION

* The NDK is organized in a main folder (include/ndk) with arch-specific subfolders (ex: include/ndk/i386).
* The NDK is structured by NT Subsystem Component (ex: ex, ps, rtl, etc).
* The NDK can either be included on-demand (#include <ndk/xxxxx.h>) or globally (#include <ndk/ntndk.h>). The former is recommended to reduce compile time.
* The NDK is structured by function and type.
  Every Subsystem Component has an associated "xxfuncs.h" and "xxtypes.h" header, where "xx" is the Subsystem (ex: iofuncs.h, iotypes.h).
* The NDK has a special file called "umtypes.h" which exports to User-Mode or Native-Mode Applications the basic NT types which are present in ntdef.h. This file cannot be included since it would conflict with winnt.h and/or windef.h. Thus, umtypes.h provides the missing types. This file is automatically included in a User-Mode NDK project.
* The NDK also includes a file called "umfuncs.h" which exports to User-Mode or Native-Mode Applications undocumented functions which can only be accessed from ntdll.dll.
* The NDK has another special file called "ifssupp.h", which exports to Kernel-Mode drivers a few types which are only documented in the IFS kit, and are part of some native definitions. It will be deprecated next year with the release of the WDK.

3.2 USING IN YOUR PROJECT

* User-Mode Application requiring Native Types:

  #define WIN32_NO_STATUS   /* Tell Windows headers you'll use ntstatus.h from the NDK */
  #include "windows.h"      /* Declare Windows headers like you normally would */
  #include "ntndk.h"        /* Declare the NDK headers */

* Native-Mode Application:

  #include "ntdef.h"        /* Declare basic native types */
  #include "ntndk.h"        /* Declare the NDK headers */

* Kernel-Mode Driver:

  #include "ntddk.h"        /* Declare DDK headers like you normally would */
  #include "ntndk.h"        /* Declare the NDK headers */

* You may also include only the files you need (example for a User-Mode application):

  #define WIN32_NO_STATUS   /* Tell Windows headers you'll use ntstatus.h from the NDK */
  #include "windows.h"      /* Declare Windows headers like you normally would */
  #include "rtlfuncs.h"     /* Declare the Rtl* functions */

3.3 CAVEATS

* winternl.h: This header, part of the PSDK, was released by Microsoft as part of one of the government lawsuits against it, and documents a certain (minimal) part of the Native API and/or types.
  Unfortunately, Microsoft decided to hack the Native Types and to define them incorrectly, replacing real members by "reserved" ones. As such, you cannot include winternl.h in any project that uses the NDK. Note, however, that the NDK fully replaces it and retains compatibility with any project that used it.
* You must have the WDK installed if using the NDK, even for non-kernel applications, because ntntls.h is required.