from mhc2.convolutional import ConvolutionalPredictor


def test_convolutional_predictor():
    conv = ConvolutionalPredictor(max_training_epochs=5)
    peptides = []
    labels = []
    # generate a dataset where YSIINFEKL is a 9mer binding core
    # and all negative examples are 9+ stretches of leucine
    n_pos = 0
    for i in range(3):
        for j in range(3):
            peptides.append("Q" * i + "YSIINFEKL" + "Q" * j)
            labels.append(True)
            n_pos += 1
    for i in range(10):
        peptides.append("L" * (i + 9))
        labels.append(False)
    Y = conv.fit_predict(peptides, labels)
    assert Y[:n_pos].mean() > Y[n_pos:].mean(), Y
STACK_EDU
>> Your problem is that the local passwd file is still the default.
>> It's easy to change. Just edit /etc/nsswitch.conf. But why is this
>> default a problem?
>Because existing users get their passwords out of sync. I tried, and
>found that a password change by the user did not get reflected centrally,
>and the synced passwords became unsynced.

This never happens here. Maybe your setup (permissions of the NIS+ tables) is not correct.

Quote:
>I wasn't suggesting removing them, just the users' accounts. The point is I
>couldn't find anywhere that says EXPLICITLY which accounts (other than root)
>should be left local.

Install one machine, and just look in the standard /etc/passwd. But you are right, the documentation could be better here. I never found any hint why 'smtp' has to be there.

>> > then you must reset all the user passwords (and then sync with
>> >the network password).
>> Why? User login information should only be in NIS+ databases.
>Not if you have passwd: files nisplus.

No. I use this on most of our machines, and ALL users are found ONLY inside NIS+. 'files' are only used for standard system accounts. This entry means: first check /etc/passwd, and if you don't find the searched name there, look in the NIS+ database.

Quote:
>> >However, after this it works. Note, though, that
>> >this might *up your email for a while, and you have to unsecure your
>> >system until all the users change their new passwords.
>Of course one can change every account to have a hard-to-crack passwd...

I have some problems understanding why you get 'out of sync' with the users' passwords. We never had such a problem here. Could you please explain your problem in some more detail?

>> >o Also there is no way to deny a user access to a machine in a particular
>> Try reading the man page of nsswitch.conf.
Look for 'Interaction with

>One has to switch back to NIS compat mode - which seems a bit retrograde
>- not to mention perhaps unlikely on 2.5 systems that require the
>installation of a separate NIS Transition kit.

No, no. You don't have to switch to NIS compat mode. You just use the old NIS syntax for your /etc/passwd. You don't have to install the NIS Transition kit. Here is what I'm using on some machines:

group: files nisplus

[all these mysterious accounts]

This gives me access and locks out all others.

Quote:
>The point is, that it is *NOT DOCUMENTED* explicitly. I'm sure if you take
>a bunch of machines out of their boxes, or maybe even an existing set of
>NIS machines, then the transition works. My experience is that it is less
>well documented for a bunch of workstations with pre-existing users.

Ok, you are right. The documentation is not the best in all points. Maybe a professional is not the best person to check a book's usefulness for beginners :-)
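For reference, the entry being discussed would look like this in /etc/nsswitch.conf (a minimal sketch; the exact set of databases you list is site-specific):

```
# /etc/nsswitch.conf
# look up accounts in the local files first, then in the NIS+ tables
passwd: files nisplus
group:  files nisplus
```

With this ordering, standard system accounts stay in /etc/passwd while all regular users live only in NIS+.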
OPCFW_CODE
How I learned Conway's law the hard way

I dislike it when people at work quote "laws", "rules" and other platitudes like "Adding people to a late project only makes it later", "You can't produce a baby in one month by getting nine women pregnant", or "the Pareto principle". Often they are a way to kill discussions and give up critical thinking, or just to hide the fact that you don't know what you are talking about. And in the worst case, the discussion turns into a contest about who knows more of these rules.

One of the quotes that triggers me is Conway's law: "Organizations design systems that mirror their own communication structure". It's not wrong, but I came to dislike it because I have seen it used as an argument for separating people into "component teams" that mirror some kind of arbitrary, up-front software architecture.

I'm a fan of "feature teams" or "vertical teams" and "emerging architectures", and I believe that throwing a (good) bunch of people together to implement a, possibly bare-bones, system end-to-end will yield better results. Another technique I like is the "tracer bullet", and I don't think you can really do it unless the people implementing the tracer bullet stick close together, which is easier if they are a single team.

Separating people into teams usually means creating conflicting goals, implicit distrust, an us-vs-them attitude, developing things in isolation, lack of shared customer understanding, etc. It also means spending a lot of time debating the priorities of one customer team vs another. For example, in MeeGo many of the early problems we had were because we separated application and framework teams too early, whereas it would have been much better to start with one team developing both (a framework and a few "hero" applications) and then separate them later. We ended up doing some of that later down the line and things got much better.

However, a recent experience changed my mind.
In this case the products were the applications, but also the framework used to build them, which was productized as a separate SDK. For historical reasons, the team building the framework was also the team building one of the applications, and other teams were building other applications and the tooling around the framework.

When I started, I was advised to follow Conway's law and split the teams so that the framework would be an independent team, with all the applications as its customers. My first instinct was to reject that and instead treat the framework as shared code that all teams would contribute to in support of their end customers' goals. From my past experience, I was afraid that if I split the team, the framework team would just debate the priorities of different customers, while the application teams would be stuck complaining that the framework team wasn't supporting them in meeting their goals (I have seen this happen).

The end result however was that:
- the people with the most knowledge of the framework were stuck also developing an application and had to spend time on non-framework related work
- the framework and the application developed with it were a "monolith", and the application violated the layers of the architecture
- application teams contributing to the framework were constantly breaking each other's functionality

To be fair, all these issues could have been at least partially solved by requiring that the framework be used only in binary form, and by having better CI. But it was very hard to put these constraints on each team, and it would have been much easier to just separate the teams according to the intended architecture.

So I came to the conclusion that yes, Conway's law is a thing, and if you want to enforce a certain architecture, the best way to do it is to organize the teams accordingly. The corollary is that a software architect needs to be an "organization architect" as well.
OPCFW_CODE
My program runs fine and I am able to copy the object; however, when I used the copy assignment (=) it still runs fine. Why is it not giving an error? I didn't overload the (=) operator but it still runs fine in Code::Blocks. However, when I ran this code on C++ Shell it compiled successfully but the output was blank. Can somebody please explain why it is running fine in Code::Blocks even though I didn't write the code for the copy assignment?

Also another question: if on line 1 I change the return type to string instead of string& it displays an error (about a temporary variable), but if I change the code of line 2 to dummy(dummy& abc) : ptr(new string(abc.print())) the program runs fine (despite the return type of line 1 being string). Why is it so?

class dummy {
    string* ptr;
public:
    dummy(string ab) { ptr = new string; ptr = &ab; }
    ~dummy() { delete ptr; }
    string& print() { return *ptr; }                              // line 1
    dummy(dummy& abc) { ptr = new string; ptr = &(abc.print()); } // line 2
    dummy() {};
};

int main() {
    dummy x("manzar");
    dummy y;
    y = x;
    cout << x.print();
    cout << "\n" << y.print();
}

I didn't expect it to run fine, however it is working fine. It is copying the content from x to y.

Look at Howard Hinnant's table of compiler-generated special members: https://stackoverflow.com/questions/24342941/what-are-the-rules-for-automatic-generation-of-move-operations

Remember that the compiler is not required to warn you or error out when you do something stupid, nor when you do something that breaks language rules and lands you in undefined behaviour land, etc. The compiler is only required (in general) to stop you when your code has a syntax error and cannot be parsed. Anything that you do wrong beyond that is on you, and the compiler is not required to help you. Nor is it required to tell you when it takes advantage of your error to produce garbage. It's your responsibility to know all the language rules and abide by them.
The compiler will (according to the language rules) synthesize certain member functions for you with default behaviour unless you implement them yourself or explicitly = delete them. Just because something compiles and runs in no way means that it is correct.

"My program runs fine" - I have a hard time believing this. You're doing some awfully funny things with pointers, and your code will leak memory like a sieve and will eagerly double-delete things. You would benefit from reading a good C++ book.

ptr = new string; ptr = &(abc.print()); - Why would you do that? That's just plain wrong.

You have asked multiple questions in a single question. Please don't do that, as the result tends to be an incomprehensible mess of no value to others. Pick one question, preferably one that is not answered by "What is The Rule of Three?".

Well, thank you for your help. I didn't know that I was committing so many mistakes. I am new to coding. I am learning C++ from cplusplus.com/doc/tutorial, but I am having a hard time understanding OOP concepts. Can somebody suggest some good material that could help me?

The code compiles because the compiler automatically generates an assignment operator for you. See https://en.cppreference.com/w/cpp/language/copy_assignment for the full rules of when this does and doesn't happen. See also "What is The Rule of Three?"

The reason your code doesn't work on C++ Shell is probably due to various undefined behaviour, including (but possibly not limited to):

Violating the rule of three: "What is The Rule of Three?"
Your dummy(string ab) constructor stores a pointer to a temporary variable. ptr = &ab; should be *ptr = ab;
Your dummy(dummy& abc) copy constructor should be dummy(const dummy& abc), and again ptr = &(abc.print()); should be *ptr = abc.print();
Your default constructor dummy() should initialise ptr to nullptr
print() should check that ptr is not null.
It is part of the specification of C++ that by default the compiler will generate a copy assignment operator if no user-defined copy assignment operator is included in the class definition. There are several other operators and functions that are auto-generated like this: a default constructor, a copy constructor, a move constructor, a move assignment operator, and a destructor.

The default behavior of copy assignment is to copy each non-static member of the object. There is a distinction between whether this copy is done "trivially" (if the object is a POD type) or really performed as a member-wise copy, but the result is the same. There is also some complexity regarding situations in which it is not possible for the default copy behavior to be generated: if the class has a non-static member that is const, if it has a non-static member that itself does not have a copy assignment operator, etc.

As of C++11 you can opt out of having the compiler generate any of these special member functions by marking them = delete. For example, in your code, if you add dummy& operator=(const dummy&) = delete; to the definition of dummy, the program will no longer compile.
STACK_EXCHANGE
A business use case represents very specific and concrete ways in which a user and the system can interact. If your business use-case model contains business use cases that no one requests, this should warn you that something is wrong with the model. This organization sells business solutions, custom made to each customer. A management business use case, for instance, might have the owners of the business, or the board, as its actor. Taken together, the use cases perform all tasks within the business.

Normal Course of Events: provide a detailed description of the user actions and system responses that will take place during execution of the use case under normal, expected conditions. Again, download that use case template here. A requirements traceability matrix is used to ensure completeness - namely that all functional requirements are covered by at least one business use case, and that all system requirements are covered by at least one system use case.

With the help of the BPD, try to develop a use case diagram. What do I see the system doing for me? Then you get into the basic flow, and I like to think of the basic flow like ping pong. Go back to the BPD. Then, derive the system use cases and requirements from the business use cases.
Use cases can be made during several stages of software development, such as planning system requirements, validating design, testing software, and writing an outline for online help and user manuals. Then move on to the sub-process Use Delivery.

Here are some brief descriptions of the business use cases. Order Process: this use case describes how the organization takes appropriate actions to deliver a solution to a customer as defined by a set of customer requirements.

Cancel the edit: the system discards any changes the member has made, then returns to step 5. To structure the business use cases, we have three kinds of relationships.

Once the requirements engineering activities have been completed and the business analysts are happy with the requirements definition, the test writers can create test cases based on the customer requirements. In this case we are dealing with the names of the business use case and the actor. Depending on specific events or facts, a use-case instance may take different paths.
It also describes how business use cases are developed and instantiated. If there is a part of a base use case that is optional, or not necessary to understand the primary purpose of the use case, you can factor that part out to an addition use case in order to simplify the structure of the base use case.

The system presents the updated view of the article to the member. Taken together, use cases perform all tasks within the business. Defining supporting relationships between goals and processes ensures that the business processes are aligned with the business strategy. Although this had not been modeled in the business process model, we can draw it directly in the use case diagram. Show changes: the member selects Show changes, which submits the edited content.

This is what is described in the Order Process. The goal is to satisfy the customer requirements. Management and supporting business use cases do not necessarily need to connect to a business actor, although they normally have some kind of external contact.

There may be an almost infinite number of paths a use-case instance can take. It is in this case rarely worth it to span the whole business, even if you only model a part of the business processes. A principal purpose of modeling business use cases and actors is to describe how the business is used by external parties, importantly its customers and partners. The use case that represents the business itself we call the business use case.

Study the process flow carefully. Use cases are a powerful tool - and I hope you find this tutorial useful in applying them in your analytical work today.
At the level of the bank, you would expect only to see supporting use cases that reflect the strategy and goals of the bank - one of which might well be 'operate software development business unit'. Here is an example of how considering business goals may reveal the importance of seemingly trivial business use cases.

For suppliers, the result of value is the money they receive for delivering materials or services to the business. For this reason, we do not include the parts of the company that handle billing, manufacturing, product management, and product development; they are considered external and therefore represented as business actors.

Although the same functionality is involved, they are different situations and the system would need to handle the separate conditions in each use case. For example, consider the large furniture store used as an example in Guideline: Business Goal. Let's keep them unchanged.

The survey description of the Business Use-Case Model should give a good, comprehensive picture of the organization. If there is a part of a base use case that represents a function of which the business use case only depends on the result, not the method used to produce the result, you can factor that part out to an addition use case. Once an agreement has been made with the customer, the solution is engineered in all details and then installed at the customer site.

Success Guarantees: the article is saved and an updated view is shown. Not how the system is doing all that. The business was able to negotiate cost savings with suppliers and no longer needed to process invoices.

In a use case diagram, an actor represents a user of the system. Instead, the need for a new product is realized from market studies and the accumulated requests of many users. Large use cases may be complex and difficult to understand.
Management: these are internal business use cases that coordinate the value chain - for example, Strategic Planning. Analyzing a use case step by step from preconditions to postconditions, exploring and investigating every action step of the use case flows, from basic to extensions, to identify those tricky, normally hidden and ignored, seemingly trivial but realistically often costly requirements (as Cockburn mentioned above), is a structured and beneficial way to get clear, stable and quality requirements systematically. It must be made clear how to make tradeoffs between price and quality to ensure that both business goals are met.
OPCFW_CODE
When I compile xmame with DGA 1 support on 6.9 beta1 (and thus against 4.0) and after that run it under a 3.3.6 server, it segfaults. I can't even get a normal window, let alone a DGA screen. Since xmame uses only DGA 1, I would expect it to still run under 3.3.6 even if compiled against 4.0. If this is not feasible, at least I would expect to just get a "DGA not available" error and be able to run in a window.

If I remember correctly, there's a general bug in DGA1 support in XFree 4.0 (ie: it doesn't work :) I remember hearing this is fixed in the current XFree CVS tree; if XF 4.0.1 isn't released soon, this and the multitude of other bugs in the released XFree4 are going to cause quite a bit of trouble :(

About compiling against the XFree 4 version: very rarely can you compile a program against a newer lib and have it work with the older version. The other way around is often desirable, although not always possible.

This however isn't a regular program but an X Window System program, and the X protocol is supposed to not suffer from this. If I compile xmame without DGA support it works fine under the 3.3 servers even if compiled against the 4.0 libs. I can even run it against the 3.3 libs. So to be clear: I'm not trying to run xmame + DGA1 support against the 3.3 libs, just against a 3.3 server, so we're not talking about dynamic linking against older libs here, just about talking to an older server. This should work fine (see HTTP 1.0 vs 1.1, etc).

I just tried this again with XFree-devel-4.0-0.28 from the beta2/snapshot dir on gribble and it still happens.

Still segfaults? Crud, that was supposed to be fixed.

Yes, I also tried it with the X snapshot which was released a few hours later in the /pub/X dir (0.29 I believe). I'll try again with beta3 this weekend and let you know.

Still there in beta3.

Still there in beta5. Come on, fix this one please; it is the last annoying bug left in 6.9 - the rest of 6.9 is great.

I can't reproduce this here; xmame, aktion, etc. compiled against 4.0 X libs seem to run OK on a 3.3.x server. This should be fixed in 4.0.1-0.43.

Heh, I thought you couldn't reproduce it. Well anyway, nice to have it fixed. Hint for reproducing: most apps, at least xmame, need to be run with root rights to even try to use DGA and hence trip this bug. But I guess you figured that out in the end. Can I get this version anywhere?

Ah, it's fixed in RC1, you rule! This is great, really great!
OPCFW_CODE
When deploying your web application you will likely be using Docker for containerization. Many base Docker images like Node or Python run Alpine Linux. It is a great Linux distro that is secure and extremely lightweight. However, many Docker images provide Alpine without its OpenRC init system. This causes a few issues when trying to run sidecar services alongside your primary process in a Docker container.

Table of Contents
- Case study
- Configuring SSH
- Wrapping up

Let's imagine that we have a remote server hosted somewhere. We are using this server for deploying our production application. To run the application in an isolated environment we are containerizing a Node.js/Python/nginx or any other app. To administrate processes we would want to have direct access to the production environment - the Docker container in this case. SSH is the way to go here, but how should we set it up?

We could SSH into the remote server and then use docker exec, but that would not be a particularly secure or elegant solution. Perhaps we should forward the SSH connection to the Docker container itself?

Binding ports is fairly easy - we will bind not only port 443 (or any other port you might use for your use case) but also port 22. The run command would look something like docker run -p 443:<docker_app_port> -p 22:22 <container_id>. The more challenging part would be setting up the actual SSH inside the container. We will take a simple Node.js Dockerfile as a base.

FROM node:12.22-alpine
# added code goes here
WORKDIR /app
COPY . .
RUN yarn
RUN yarn build
CMD ["yarn", "start"]

To reproduce the connection to a remote server we would be running this Docker image locally and connecting using SSH. Firstly, in order to be able to log in as root (or any other user) we would have to unlock the user and add authorized SSH keys (unless you want to use plain-text passwords to log in, which is very insecure).
# create user SSH configuration
RUN mkdir -p /root/.ssh \
    # only this user should be able to read this folder (it may contain private keys)
    && chmod 0700 /root/.ssh \
    # unlock the user
    && passwd -u root

The root user is now unlocked to be logged into, and there is a folder for SSH configuration. However, at the moment anyone can log in as root since there is no password and password auth is enabled by default. Our goal is to disable password auth and to add our public key as an authorized one for this user. You may also use a different user for this purpose - just replace /root with the path to the desired user's home directory and replace the username.

# supply your pub key via `--build-arg ssh_pub_key="$(cat ~/.ssh/id_rsa.pub)"` when running `docker build`
ARG ssh_pub_key
RUN echo "$ssh_pub_key" > /root/.ssh/authorized_keys

Now, in order to disable password auth via SSH, we first have to install SSH. I will be using OpenSSH for this, but other implementations should work similarly.

RUN apk add openrc openssh \
    && ssh-keygen -A \
    && echo -e "PasswordAuthentication no" >> /etc/ssh/sshd_config

Stripped Alpine Docker images like the Node one do not provide OpenRC by default, so we should install it ourselves. Then generate host SSH keys so clients may authorize our container as an SSH host. Finally, append PasswordAuthentication no to the end of sshd_config to disable password auth via SSH.

For real applications you would invest in pre-generating host keys so that keys do not change every time a new container is built. However, this and managing keys more elegantly is out of scope for this post. At this point we may try to start the SSH daemon:

RUN rc-status \
    # touch softlevel because system was initialized without openrc
    && touch /run/openrc/softlevel \
    && rc-service sshd start

However, when you run the container you would see something like a kex_exchange_identification error. When you docker exec into the container and execute rc-status, you would see that the sshd service has crashed.
The service crashes because Alpine Docker images allow only a single process to be launched. It is actually a good concept that facilitates using microservices and creating Docker compositions. However, in this particular case there is no way to run SSH in a different container. To work around that, the actual CMD command should boot multiple processes.

FROM node:12.22-alpine

ARG ssh_pub_key

RUN mkdir -p /root/.ssh \
    && chmod 0700 /root/.ssh \
    && passwd -u root \
    && echo "$ssh_pub_key" > /root/.ssh/authorized_keys \
    && apk add openrc openssh \
    && ssh-keygen -A \
    && echo -e "PasswordAuthentication no" >> /etc/ssh/sshd_config \
    && mkdir -p /run/openrc \
    && touch /run/openrc/softlevel

WORKDIR /app
COPY . .
RUN yarn
RUN yarn build

ENTRYPOINT ["sh", "-c", "rc-status; rc-service sshd start; yarn start"]

In this final Dockerfile I combined all previous RUN commands into a single one to reduce the number of layers. Instead of running rc-status && rc-service sshd start in RUN, we do that in sh -c. This way the Docker container executes only a single process, sh -c, that spawns children. As a side effect, this Node.js application will not receive SIGINT/SIGTERM signals (or any other signals from Docker) when stopping the Docker container.

Run the built container using docker run -p 7655:22 <container_id>. In a different terminal instance run ssh root@localhost -p 7655. Voila - you have successfully SSHed into a Docker container. Using SSH for your production app would be the same, except you would be using its IP instead of localhost and a valid port. To properly build and run the container without an app around it, replace

If any difficulties with running SSH arise, first try to docker exec into the container and check whether the sshd service is started. If it has crashed or is not present, you may not be starting it properly in ENTRYPOINT or have an incorrect configuration in sshd_config. You may also be missing SSH host keys, or be using an already bound port.
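If the missing signal handling matters for your app, one possible tweak (my suggestion, not part of the original post) is to exec the final process so that it replaces the shell and receives Docker's signals, while sshd keeps running in the background:

```dockerfile
# `exec` replaces the sh process with the yarn process, so SIGINT/SIGTERM
# sent by `docker stop` reach the Node.js app directly; sshd was already
# forked into the background by rc-service before exec runs
ENTRYPOINT ["sh", "-c", "rc-status; rc-service sshd start; exec yarn start"]
```

The trade-off is that only the exec'd process gets signal delivery; the sshd daemon is still terminated when the container stops.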
To further troubleshoot, you can run the container with docker run -p 7655:22 -p 7656:10222 <container_id>. Then docker exec into the container and run $(which sshd) -Ddp 10222. This creates another instance of sshd that listens on a different port (10222) with verbose logging. From there you should see informational messages on why the process might crash. If the process does not crash, attempt connecting: run ssh root@localhost -p 7656 on the Docker host machine. From this point you will see additional logs in your previous terminal window with details on why the connection has been refused.

While this tutorial is pretty specific to running SSH in an Alpine Docker container, you may reuse this knowledge to run SSH in other Linux Docker distros. Or you may have better luck configuring other sidecar services inside an Alpine Docker container. In any case, I hope you found what you were looking for in this post or learnt something new.
OPCFW_CODE
Supported OS and architectures

| OS | Architectures |
| --- | --- |
| Windows 10 or Windows Server 2019 | x86, arm64-v8a or armeabi-v7a ABI |
| Ubuntu 18.04+ or Alpine 3.9+ | x86_64, aarch64, armv7 or arm |
| Docker | x86_64, aarch64, armv7 or arm |
| Big Sur or later | x86_64 or aarch64 |

It is not supported to put firewalls between devices of the same swarm. Wherever possible, devices should be placed within one broadcast domain. Placing devices of the same swarm across different broadcast domains requires manually configuring initial peers and configuring communication between these broadcast domains accordingly.

Apart from the mDNS parameters, the three TCP ports used can be freely configured when starting Actyx. The command line options for these are:

- --bind-swarm for the inter-node data transfer port (default: 4001)
- --bind-api for the HTTP API used by applications (default: 4454)
- --bind-admin for the admin port used by the Node Manager and Actyx CLI (default: 4458)

See actyx --help for more details.

A lot of different factors play into the performance of Actyx and your apps. Assuming a standard network setup, rugged tablets or other devices with relatively low computing power, and a standard factory use case, these are approximate limits:

| Latency | below 200 ms, not guaranteed as it depends on several factors |
| No. of nodes | max. 100 nodes |
| Event data rate | ~1 event per node per 10 seconds |

Please refer to the performance and limits page for more detailed information, also regarding typical disk usage etc. The following list shows the factors that influence performance. Please note that these are not requirements, but assumptions made for the above performance characteristics:

| LAN setup and latency | standard / WiFi |
| Runtime | Webview (Node.js) and Docker |
| Hardware | Rugged tablets or other devices with relatively low computing power (e.g. Raspberry Pi 3), 1-2 GHz, x86/arm architecture |
| Business logic complexity | Standard factory use case, production data and machine data acquisition |

The limits regarding performance and disk space described on this page are only true within the circumstances outlined above. If one of these factors changes, the limits for your solution might change. If you are looking for guidance on a specific use case, please refer to our conceptual guide or contact us.

Node Settings Schema

Which settings are available and which values they may have is defined in the so-called Actyx Node Setting Schema. The most recent version thereof is available for download at:

The admin section of the settings holds a list of authorized users, represented by their public keys (you can find yours in your O/S-specific user profile folder under …). If you need to access a node that doesn't accept your key, but you have administrator access to the (virtual) device it is running on, then you may use ax users add-key to get yourself in — make sure to stop Actyx before doing this.

The displayName property is shown in the Node Manager or ax nodes inspect etc., so it is useful to set it to some short string that helps you identify the node.

You can change the /admin/logLevels/node setting to adjust the logging output verbosity at runtime; valid values are DEBUG, INFO, WARN, or ERROR.

The /api/events/readOnly setting controls whether the node will send events to the rest of the swarm. Its main use is to create a "silent observer" that you use to test new app versions without risking tainting the swarm with development or test events.

The licensing section is described in licensing apps.

In the swarm section you can fine-tune the networking behavior of Actyx:

- announceAddresses: an array of addresses that allows you to declare IP addresses under which the node is reachable, but that are not listen addresses.
This is frequently necessary when running Actyx inside a Docker container; see configuring the …
- `bitswapTimeout`: maximal wait time for a data request sent to another swarm node. You may need to increase this on very slow networks if you regularly see bitswap timeout warnings in the logs.
- `blockCacheCount`: the number of IPFS data blocks kept in the local actyx-data folder for the current topic. All swarm event data blocks and pinned files will be kept regardless of this setting; only voluntarily cached blocks can be evicted.
- `blockCacheSize`: the size in bytes up to which data blocks are kept in the local actyx-data folder for the current topic. The same restriction applies as for block counts, in that events and pinned files are not eligible for eviction, so the cache may grow larger.
- `blockGcInterval`: the interval at which eligible data are evicted from the local actyx-data folder for the current topic. There should be no need to change this.
- `branchCacheSize`: cache size in bytes for those IPFS blocks that contain event stream metadata. You may need to increase this in situations where many devices have been producing events for a long time: if this cache becomes too small, application query performance will decline drastically.
- `initialPeers`: an array of addresses of the form `/ip4/<IP address>/tcp/<port>/p2p/<PeerId>` to which this node shall immediately try to connect.
- `mdns`: flag with which usage of the mDNS protocol for peer discovery can be disabled.
- `pingTimeout`: each connection to another Actyx node is continually monitored via messages sent over that TCP connection (they are called "ping", but have nothing to do with the `ping` network tool). When three successive pings have not been answered within the allotted timeout, the connection is closed and will be re-established. You may need to increase this on very slow networks if you regularly see ping timeout warnings in the logs.
- `swarmKey`: an additional layer of encryption between Actyx nodes that allows you to separate swarms so that they cannot connect to each other. See the guide on swarm keys.
- `topic`: a logical separation of nodes within the same swarm; events can only be replicated within the same topic, and each node can only be part of one topic. This is mainly useful for effectively erasing all events within a swarm by switching all nodes to a new topic. The old events will still be in the actyx-data folder; you can access them by switching a node back to the old topic, and there will be no interference between old and new topic.

Ephemeral Event Streams

The `eventRouting` section (added in version 2.16) is divided into two subsections.

The `routes` section is an array of patterns; it allows the administrator to create stream routes based on tag expressions (i.e. most expressions that can be used after …):

- `from`: the tag expression used to filter events. For example, the expression `'logs:info' | 'logs:debug'` will match all events containing either of these tags.
- `into`: the name of the stream to place the matching events into.

Event matching is done from top to bottom; thus, if you declare the following routes:

- from: "'logs:info' | 'logs:debug'"
  into: info_or_debug
- from: "'logs:info'"
  into: …

all events tagged with logs:info will be placed in the info_or_debug stream, not making it into the stream of the second route.

Migration is done automatically: if you have an existing node, the routes for the existing Actyx streams will be created automatically. Similarly, if you do not have an eventRouting configuration in place when launching a new node, the same streams as previously (…) will be created in addition to the default stream zero.

The `streams` section is an object which maps stream names to policies. It allows the administrator to configure retention policies for the streams declared in `routes`:

- `maxEvents`: the number of most recent events to keep. E.g. if you set maxEvents to 1000 and currently have 1024 events stored, the oldest 24 events may be deleted.
- `maxSize`: the size beyond which streams will start being pruned. E.g. if you set 1GB and your stream occupies 1500MB, the oldest events may be removed until 1GB is reached again. This setting supports the following units: …
- `maxAge`: the age beyond which events will start being removed. E.g. if you set this setting to 1h, all events older than 1 hour at the time of pruning may be removed. This setting supports the following units: …

Under the hood, events are held in blocks, the smallest unit Actyx is able to delete. If a block is not completely filled with events, or if not every event in the block has expired, the block will not be deleted.

- The `default` stream does not support retention policies; it will always be permanent.
- Streams are only created by routing events to them. This means that if you configure a stream but no routing rule points to it, it will not be created. In this case, a warning will be raised.

Here's an example of the complete configuration for the …:

- from: "'logs:warn'"
- from: "'logs:trace' | 'logs:debug'"
- from: 'logs'
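Pulling the routing and retention pieces above together, a hypothetical `eventRouting` settings fragment might look like the following. Note that the stream names and retention values here are made up for illustration; they are not defaults from the Actyx documentation.

```yaml
# Hypothetical sketch of the eventRouting node settings described above.
# Stream names and retention values are illustrative only.
eventRouting:
  routes:
    - from: "'logs:info' | 'logs:debug'"   # tag expression to match
      into: info_or_debug                  # target stream name
    - from: "'logs:error'"
      into: errors
  streams:
    info_or_debug:
      maxEvents: 1000    # keep at most the 1000 most recent events
      maxAge: 1h         # prune events older than one hour
    errors:
      maxSize: 1GB       # prune oldest events once the stream exceeds 1 GB
```

Because pruning operates on whole blocks, a stream may temporarily exceed these limits; they are targets for eviction, not exact guarantees.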
- Kiting (MMORPG term)

Kiting is a term encountered in MMORPGs such as "EverQuest", "The Lord of the Rings Online" or "World of Warcraft", referring to a popular method of killing mobs (monsters) by staying at a distance, using ranged attacks, and running whenever the enemy comes near. Similar tactics may be used in other computer and video games. [Simon Carless, "Gaming Hacks", O'Reilly, 2004, p. 112, ISBN 0596007140]

The term "kiting" is generally considered to refer to "flying a kite" [http://www.wikihow.com/Kite-Mobs-in-Everquest Kiting defined], which is what the process looks like to a third party. The player doing the kiting leads the enemy around (directed by the AI to move towards the player to attack them), often at a reduced speed caused by the player in some manner (for example, a "slow" spell or injury). It has been suggested that the term "kiting" refers to the slang banking term "check kiting", meaning to illegally float money back and forth between accounts. Generally, the banking term refers to money not reaching its destination, which is similar to the goal of kiting a target in a game. It has also been suggested that "kiting" comes from "Killing In Transit", but this is more commonly regarded as a backronym.

The advantage of the strategy is that a safe distance is kept between the player and the target while the player keeps bombarding the target with ranged attacks (such as spells, arrows, or other projectiles). This ideally results in a dead monster without the player taking a hit. The obvious disadvantages of this tactic are the annoyance of constant running around, its slowness, and the possibility of "adds" (other monsters in the area assisting the primary target). The tactic relies on being able to generate more damage per second than the mob's hit point regeneration without running out of mana or projectiles, and so is limited in this respect.
This limitation is less relevant when the kiter's only task is keeping the monster's attention while his or her friends deal damage, since in many games the other players are not "in combat" and can rest or recover easily.

Kiting was an extraordinarily effective tactic in the first several months after "EverQuest" was released, allowing players to kill monsters that "conned" red (on a scale of green-blue-white-yellow-red, signifying the level of the monster relative to the player's level, white being equal, green much lower and red much higher). In the summer of 1999, Verant Interactive implemented several nerfs apparently designed to make kiting a less viable tactic. The most notable change made damage-over-time (DoT) spells only 66% as powerful while the target chased the player. DoT spells, among many others, have since been revised several times (they now do full damage to running mobs), and attempts have been made to promote grouping. Kiting continues, however.

"EverQuest II" implemented a locked-encounter system that countered several tactics that had emerged in the original game. When a player enters a locked encounter, they lose any movement speed enhancements they have (except for a special sprint ability). Movement speed enhancements were useful in kiting to maintain a safe distance between the player and the target.

A common strategy in "World of Warcraft" involves a player "pulling" a boss away from a group of enemies, kiting the boss while the rest of the player's party defeats the other mobs. After the other enemies are defeated, the kiter usually sheds the aggro and the whole group fights the boss together with no other distractions.

Sometimes players will kite aggressive, high-level mobs into unusual areas to wreak havoc. In "World of Warcraft", for example, one party of orc hunters kited a high-level boss mob, called Kazzak, into the human capital city Stormwind.
[Google Video of Lord Kazzak attacking Stormwind: http://video.google.com/videoplay?docid=-982380251124231965] Those responsible were banned, and Blizzard Entertainment reset the game settings so that Kazzak could not be kited that far.

The most basic method of kiting is to attack an enemy from a distance and simply run away, stopping to attack again as often as necessary in order to maintain aggro and to whittle down the enemy's hit points. More advanced techniques require knowledge of a game's specific mechanics. For example, in "World of Warcraft", the Mage class has access to "frost" spells which slow down the enemy's movement or even freeze them in place, allowing the mage to keep his distance even if his normal movement speed is less than that of his enemy. Other useful in-game abilities can include movement speed boosts for the player or teleportation powers, depending on the game and character.

To reduce the likelihood of encountering additional opponents while kiting, it is important for a player to be aware of his surroundings. Some players will move in a circle while kiting in order to stay within an area they know to be relatively clear of enemies.

Two players working together may kite an enemy without actually moving, if the game's AI or aggro mechanisms allow. The players position their characters on opposite sides of an AI-controlled enemy, some significant distance apart, and alternate attacking the enemy with ranged attacks. If the two players are causing approximately equal damage to the enemy and have proper timing, the enemy will continuously change which player it is targeting or has the most aggro towards, causing it to run back and forth between the two players, wasting most of its time moving instead of attacking.

Line-of-sight kiting exists in several MMORPGs, but is probably most evident in "City of Heroes".
The idea of line-of-sight kiting is to kite an enemy who has ranged attacks by hiding behind objects and around corners to break their line of sight to you. Most enemies with ranged attacks can continue attacking the player even while moving, which defeats the purpose of kiting. By breaking the line of sight, the player forces the enemy to stop attacking while it runs to a position from which it can see the player. This can allow the player more time to reach the next corner or obstacle, or for abilities with a cooldown to recharge. Line-of-sight manipulation can also be used to bring an enemy into position for an ambush by the player's teammates, or to draw an enemy away from a group of its companions who have not yet noticed the player.

Another common use of the term, although in some ways the inverse of the original, is "reverse-kiting" (sometimes called fear-kiting). This is when a player attacks an enemy and then uses an ability to keep the enemy away without the player actually having to move, usually through an ability that induces fear in the target (as in "World of Warcraft"), which causes the enemy to run around randomly rather than attack. Once the targeted enemy is incapable of attacking, the player can start to damage it safely. In some other games, knockback or repulsion abilities may be used for the same effect.

"EverQuest" also gave rise to the term "quad kiting". This method involves a spell caster who can deal damage to four targets grouped together. Quad kiting was difficult to accomplish, but yielded more than double the experience in the same amount of time.
No option to install Blazor PWA on SWA after implementing the fallback route

Adding a routes.json to implement a fallback route as specified in https://docs.microsoft.com/en-us/azure/static-web-apps/deploy-blazor#fallback-route results in the option to install not appearing. The reason is that routes.json is added to the PWA offline cache list. This file is not accessible from the client. To resolve this issue, the service-worker.published.js file needs to be changed to exclude routes.json. It may be worthwhile to call this out in the documentation.

In the console tab: … Error in the network tab: … The network tab indicates that routes.json cannot be retrieved, which is why the integrity check failed.

Based on https://github.com/dotnet/aspnetcore/issues/22472#issuecomment-640948415, routes.json needs to be excluded from the cache. Change line 12 of service-worker.published.js to:

const offlineAssetsExclude = [ /^service-worker\.js$/, /^routes\.json$/ ];

Document Details

⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.

ID: 6b5615c8-7144-399d-52d9-04283a942b50
Version Independent ID: 7e846219-4121-92f4-f9f1-1a35353261ac
Content: Tutorial: Building a static web app with Blazor in Azure Static Web Apps
Content Source: articles/static-web-apps/deploy-blazor.md
Service: static-web-apps
GitHub Login: @craigshoemaker
Microsoft Alias: cshoe

@c0g1t8 Thanks for the feedback! We are taking a look into this and will get back to you soon.

@anthonychu This is an unfortunate side effect of having routes.json in the wwwroot folder. I think the location of routes.json should be reconsidered. Why is it not just another config file at the root of the app?

@anthonychu @danroth27 Actually, I like this idea a lot.
There is already a provision for this - https://docs.microsoft.com/en-us/azure/static-web-apps/github-actions-workflow#route-file-location. I didn't notice this before. Defaulting to the location specified by the app_location property would solve this problem. So in the interim, changing the GitHub Action instead of changing service-worker.published.js would be the most transparent action to take.

Hello @c0g1t8 🙂 Yep, that's the way to do it right now. We are considering automatically searching for the config file and copying it to the publish artifacts.

@c0g1t8 Just checking if you had a chance to see the above response from @anthonychu. Let us know if you have further questions on this.

@SnehaAgrawal-MSFT I did see the message from @anthonychu and followed through on it. I did the following:

- moved routes.json to the client folder (the app_location)
- changed the GitHub Action by adding routes_location pointed to the same location as the app_location

The option to install the PWA appeared. IMO a note should be added to the documentation to cover this scenario.

Note: I had to edit the GitHub Action on GitHub. Pushing remotely is not allowed due to an OAuth restriction:

! [remote rejected] api -> api (refusing to allow an OAuth App to create or update workflow `.github/workflows/azure-static-web-apps-green-smoke-0a025fc1e.yml` without `workflow` scope)
error: failed to push some refs to 'https://github.com/c0g1t8/BlazorPWAonSWA.git'
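The service-worker change discussed in this thread boils down to filtering routes.json out of the offline asset list before the integrity-checked caching step. A minimal sketch of that filter follows; the `shouldCache` helper is illustrative and not part of the Blazor template, which only defines the exclusion array:

```javascript
// Exclusion list as suggested for line 12 of service-worker.published.js:
// routes.json must not be offline-cached because the client cannot fetch it.
const offlineAssetsExclude = [/^service-worker\.js$/, /^routes\.json$/];

// Illustrative helper: decide whether a published asset should be cached.
function shouldCache(relativePath) {
  return !offlineAssetsExclude.some((pattern) => pattern.test(relativePath));
}

console.log(shouldCache("routes.json"));       // false: excluded from the cache
console.log(shouldCache("service-worker.js")); // false: excluded from the cache
console.log(shouldCache("index.html"));        // true: cached as usual
```

With routes.json skipped, the service worker's integrity check no longer fails on a file the client can never retrieve, and the browser offers the install option again.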
|Original author(s)|Linus Torvalds|
|Developer(s)|Junio Hamano and others|
|Initial release|7 April 2005|
|Stable release|2.34.1 / 24 November 2021|
|Written in|C, Shell, Perl, Tcl|
|Operating system|POSIX (Linux, macOS, Solaris, AIX), Windows|

Git (/ɡɪt/) is a distributed revision control system. It is a computer program that helps people create other computer programs together. Git was made to be fast. It was created by Linus Torvalds for use in developing the Linux kernel, which he also created. Git's current development is looked after by Junio Hamano. It is free and open source software released under the GNU General Public License version 2 software license.
Other websites

- Official website
- Git Community Book
- Introduction to git-svn for Subversion/SVK users and deserters (Archived 2012-01-16 at the Wayback Machine) by Sam Vilain
- Git for computer scientists
- Git Magic: a comprehensive listing of Git tips & tricks
- Git Quick Reference (Archived 2012-01-19 at the Wayback Machine)
- Linus Torvalds hosting a Google Tech Talk on Git
- Git Wiki at kernel.org
Aircraft continuously circling before landing (not holding)

On Flightradar24 I saw an Airbus A320 circling a lot before landing. My best guess is that for some reason the aircraft had to stay in the air longer due to unknown reasons. If this is the case, why didn't it enter a holding pattern or take longer vectors instead?

We can exclude two possible causes:

- doing a relay for the Tour de France: https://twitter.com/flightradar24/status/1304072106947289088?lang=de
- showing passengers the northern lights: https://www.news.com.au/travel/travel-advice/flights/easyjet-pilots-perform-midair-uturn-so-passengers-can-see-northern-lights/news-story/171e0fceb915f192dd1d5ea4fb8846bc

There you go, 2 possible reasons less ;)

These would help us answer, whatever you can find: What is the scale on that map? Which day, time (and time zone)? What weather there and then (was it just drifting with the wind, with steady bank and airspeed for comfort)? What other traffic there and then (busy runways or gates)? Any NOTAM (https://en.wikipedia.org/wiki/NOTAM)? (As you're new here, Dev, in case it's not obvious: to reply, please don't add more comments like this one, but instead edit your question. Great first question, btw.)

FYI circling is another thing: https://skybrary.aero/articles/circling-approach. What we see is closer to orbiting.

This is very odd - a new user asks a question and it's answered by 2 more new users...

@SteliosAdamantidis I found no authoritative technical use of "orbiting" for airplanes. If "circling" is misleading here, a better term might be "loitering."

Clearly it's for spraying chemtrails. How else are they going to be sprayed. They're not going to spray themselves.

There could be multiple reasons for an aircraft to be doing such circling:

- The airline might be conducting some flying training (yes, a costly affair when you can do the same on a simulator).
- They could be doing some test flying on that aircraft to evaluate something.
- They were put on a hold, but not over the usual holding point for Kolkata.
- It could even be due to some blockage on the runway that required ATC to place the incoming flight on a loose hold, but not over any specific point.

Unless we know the specifics, it is difficult to give a particular reason.

The 4th point is a good one. Too much altitude or time. In rare cases an aircraft will have too much fuel to land.

I suspect they had time to kill and not enough room to do it. You wouldn't hold like that, though; you would set up a hold in the Airbus flight management system and it would hold an exact holding pattern in space, not drifting to the south east in circles!
Repeating a data structure with different values

I have many places in my API where I will need to describe a list of objects. Each object has the same keys / structure but different values. How can I tweak the values of each instance of some data structure while retaining all the original type, description, etc. of the original structure?

For example, if I had the following data structure Restaurant:

# Data Structures

## Restaurant (object)

+ restaurant_name: McDonald's (string, required) - The name of this restaurant
+ years_of_operation: 54 (number, required) - The number of years since established

Then, let's say I want to instantiate several Restaurants in a GET response like so:

### List all restaurants [GET /restaurants]

+ Response 200 (application/json)
    + Attributes
        + data (array)
            + (Restaurant)
            + (Restaurant)
                + restaurant_name: Bob Evans
                + years_of_operation: 23
            + (Restaurant)
                + restaurant_name: Eataly
                + years_of_operation: 16

The JSON body would look like this (note how years_of_operation for Bob Evans and Eataly are now strings), and the rendered documentation will only show this (the descriptions for restaurant_name and years_of_operation for Bob Evans and Eataly are now missing).

I thought that MSON would carry over those descriptions and type definitions. Otherwise, I have to update a description (or type, requirement, etc.) everywhere that data structure is used... but I was under the impression this is the sort of problem MSON is supposed to solve? Am I doing something silly? I've updated my question with better examples.

The reason they are strings and not numbers is because the default type is string and the value isn't set as a number type. If you change the following they will become numbers:

+ `years_of_operation`: 23 (number)
+ `years_of_operation`: 16 (number)

Right - that's what I thought MSON would handle (since years_of_operation was already defined as a number in the data structure Restaurant).
So then MSON doesn't help eliminate the need to repeat data structure definitions in this case? ... So the benefit of using a data structure in this case is nil?

In a larger API, MSON gives you a way to reuse a data structure. Given perhaps a CRUD version of a restaurant API, you could do something like the following:

# Restaurant API

## Restaurants Collection [/restaurants]

### List all restaurants [GET]

+ Response 200 (application/json)
    + Attributes (array[Restaurant], fixed-type)

### Create a Restaurant [POST]

+ Request (application/json)
    + Attributes (Restaurant)
+ Response 200 (application/json)
    + Attributes (Restaurant)

## Restaurant [/restaurants/{id}]

+ Parameters
    + id: 1 (number)

### View [GET]

+ Response 200 (application/json)
    + Attributes (Restaurant)

### Update [PATCH]

+ Request (application/json)
    + Attributes (Restaurant)
+ Response 200 (application/json)
    + Attributes (Restaurant)

### Delete [DELETE]

+ Response 204

# Data Structures

## Restaurant

+ `restaurant_name`: McDonald's (string, required) - The name of this restaurant
+ `years_of_operation`: 54 (number, required) - The number of years since established

In your particular use-case, I agree with you that how MSON works doesn't quite fit. If you wanted to copy a real-world response over as an example, you could also do something like the following, where I've included an actual JSON body alongside the attributes, as another option.
### List all restaurants [GET]

+ Response 200 (application/json)
    + Attributes (array[Restaurant], fixed-type)
    + Body

            [
                { "restaurant_name": "x", "years_of_operation": 5 },
                { "restaurant_name": "y", "years_of_operation": 6 },
                { "restaurant_name": "z", "years_of_operation": 5 }
            ]

In terms of rendering in Apiary documentation, it will then only keep one example, which may be desired to keep the attributes section small and compact, just including the base structures (and not any larger examples). The JSON example, and the mock server, would show the included example.

Hmmm, that last half is helpful. Thank you so much Kyle!
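The strings-versus-numbers pitfall discussed in this thread can also be caught mechanically before publishing. Here is a hypothetical sketch of such a check; the expected-type map mirrors the Restaurant structure above, and none of this is Apiary or API Blueprint tooling:

```javascript
// Hypothetical check that rendered JSON matches the declared MSON types.
const expectedTypes = {
  restaurant_name: "string",      // declared as (string, required)
  years_of_operation: "number",   // declared as (number, required)
};

// Return [index, key] pairs for every value whose type does not match.
function typeErrors(objects) {
  const errors = [];
  objects.forEach((obj, i) => {
    for (const [key, expected] of Object.entries(expectedTypes)) {
      if (typeof obj[key] !== expected) errors.push([i, key]);
    }
  });
  return errors;
}

// A response where one instance rendered years_of_operation as a string:
const body = [
  { restaurant_name: "McDonald's", years_of_operation: 54 },
  { restaurant_name: "Bob Evans", years_of_operation: "23" },
];
console.log(typeErrors(body)); // [[1, "years_of_operation"]]
```

Running something like this against the mock server's output would flag instances where a value was silently rendered with the default string type instead of the declared number type.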
using System.Collections.Generic;
using System.Text;

namespace NSwag.InterfaceGenerator
{
    public class ClassWrapper
    {
        private string ClassCode { get; }
        private string Namespace { get; }
        private IEnumerable<string> Usings { get; }

        public ClassWrapper(string classCode, string ns, IEnumerable<string> usings)
        {
            ClassCode = classCode;
            Namespace = ns;
            Usings = usings;
        }

        public string Get()
        {
            return Build();
        }

        private string Build()
        {
            var sb = new StringBuilder();
            AddUsings(sb);
            AddNamespaceHeader(sb);
            AddCode(sb);
            EndNamespace(sb);
            return sb.ToString();
        }

        private static void EndNamespace(StringBuilder sb)
        {
            sb.AppendLine("}");
        }

        private void AddCode(StringBuilder sb)
        {
            // Indent the wrapped class into the namespace body: lines that
            // already contain indentation whitespace get one tab, others two.
            foreach (var c in ClassCode.Split('\n'))
                sb.AppendLine(c.Contains("\t") || c.Contains(" ") ? $"\t{c}" : $"\t\t{c}");
        }

        private void AddNamespaceHeader(StringBuilder sb)
        {
            sb.AppendLine($"namespace {Namespace}");
            sb.AppendLine("{");
        }

        private void AddUsings(StringBuilder sb)
        {
            // Accept both bare namespaces and complete "using …;" lines.
            foreach (var @using in Usings)
                sb.AppendLine(@using.StartsWith("using") ? @using : $"using {@using};");
            sb.AppendLine();
        }
    }
}
Unable to access camera in Docker container of Ubuntu

I am writing a simple piece of OpenCV code which captures images and shows them. But whenever I run it, it tells me it is unable to access the camera with this index. I tried running my Docker container with this command:

docker run -ti --device /dev/video0:/dev/video0 pradyumn10/ubuntu-python3 /bin/bash

This opens the camera for a second and then closes it and gives the error "unable to display".

Please share how you started this container (which parameters) and possibly the Dockerfile itself.

@Yaron I didn't make a Dockerfile, I just pulled an image of Ubuntu, ran it, and then installed OpenCV in it using "apt install python3-opencv".

My code:

import cv2
cam = cv2.VideoCapture(0)
while True:
    _, frame = cam.read()
    if frame is not None:
        cv2.imshow("frame", frame)
cam.release()

My issue was that I was running the container in privileged mode with a user other than root, i.e. with options -u vscode --privileged --device /dev/video0:/dev/video0 (among others). The privileged option caused the webcam owner and group to be root; however, since I was running the container as vscode, permission to access /dev/video0 was being denied.

The solution was to not run the container in privileged mode (i.e. omit the --privileged flag, which leaves the webcam owner as root but the group as video) and add the vscode user to the video group. This was easily done by adding this line to my Dockerfile:

RUN usermod -aG video vscode

and rebuilding the container. If you must run the container in privileged mode, you can instead sudo chown root:video /dev/video0 once the container is running.

Your camera is most probably a web camera, and most probably mounted at /dev/video0 or /dev/video{N} - you will have to find out {N} on your host machine first.
Once you get it, you can try mounting it into your Docker container like this:

mount /dev/video0 /testvideo
docker run -it --rm --read-only -v "/testvideo:/testvideo" bash

After that, your app in Docker should try to connect to /testvideo instead of /dev/video0.

P.S. I didn't try it myself, but it feels like Linux should not have any issues mounting a web camera like any other device. I would give it a try, but no guarantee :)

It gives an error when I run the second command: "docker: Error response from daemon: error while creating mount source path '/testvideo': mkdir /testvideo: read-only file system."

Yep, it should be read-only, because you can't write into a web camera, so it should be like this:

docker run -it --rm --read-only -v "/testvideo:/testvideo" bash

This answer didn't work for me. The mount command gave me the error "/dev/video0 is not a block device". See my answer.
Chaotic Sword God, Chapter 2906: Seeing Heartless Again

But at this time, a gorgeous woman in white slowly faded into existence in the space there. She stood near the bone tower, her presence completely concealed. She seemed like a ghost.

The Heartless Child did not give up. After that, he began a wide-range search, unleashing various secret techniques. He used his impressive abilities to look through everything and everywhere, leaving no stone unturned.

Jian Chen promptly described everything he had experienced with the bone tower, including how Sheng Yi had vanished.

When he arrived in the Spirits' World, the Heartless Child immediately sensed where Sheng Yi's tower was. He took a step, and the stars promptly receded behind him.

Jian Chen found this realization to be astounding.

"Jian Chen is still alive. How fortunate, how fortunate. However, the strange treasure on him sure is extraordinary. When he conceals his presence, even I can't find him. If I hadn't used a secret technique that can peer into the past of this region, I probably still wouldn't have found him, even now." The Heartless Child eased up.

Earlier, where Sheng Yi died, all traces had been completely erased, which was why he had found nothing with his secret technique.

In Fang Jing's hand hovered a white crystal. Pulses of terrifying energy emanated from the crystal. It hid world-shaking power inside.

A while later, the Heartless Child gave up on this search. His face became as ugly as it could get, as all traces had been wiped from this space. Even with his remarkable abilities, he found nothing.
If the strong were willing, they could freely observe every movement the weak made. However, only the traces within this particular area had been erased.

The moment he left the area, the wonders of his secret technique all came into effect.

He was the Heartless Child. While he felt furious over Sheng Yi's death, he was also filled with worry, forced to personally set foot into this ruined world.

Jian Chen found this conclusion to be astounding.

But at this moment, the Heartless Child's figure suddenly appeared before Jian Chen.

In the next instant, she suddenly vanished without a trace.

"Also, what happened earlier?"

Suddenly, her eyes narrowed slightly, staring straight at where the passageway was. Although she was still extremely far from the passageway at this moment, her gaze seemed able to pierce through space and cross the tremendous distance, locking right onto the passageway. This was exactly what made a special cultivation method terrifying.

If he wanted to find a person, he did not require any traces or leads. All he needed to know was where the person had passed by in the past, and he could directly check by essentially flipping through the records of this space.

Soon afterwards, the passageway on the Spirits' World's side began to shake violently.
Being the passageway surged with gentle, a determine had already taken by helping cover their lightning performance, radiating which has a roaring reputation. Shortly later on, the passageway on the Spirits’ Entire world area begun to shake violently. Because the passageway surged with mild, a physique possessed already chance by helping cover their super performance, radiating having a roaring presence. When he turned up from the Spirits’ Entire world, the Heartless Boy or girl instantly sensed the place Sheng Yi’s tower was. He had taken a step, plus the superstars instantly receded regarding him. Jian Chen was excessively important to the Myriad Bone fragments Guild today. If Jian Chen died, then it would be very difficult for the Myriad Bone tissue Guild to live the truly amazing threat they encountered. From the Spirits’ Planet, where Sheng Yi died healed a similar calmness as just before right away. Just ruined bone tower hovered there on your own. The Heartless Youngster stared at Jian Chen, with his fantastic term eased up. He explained, “Thankfully you’re excellent, or I’ll be in strong difficulty. Jian Chen, make sure you turn back to how you would originally appear. I am a lot more designed to that. I am around now, so there’s no need for you to keep on camouflaging yourself in any case.” Jian Chen right away discussed every thing he familiar with the bone tissue tower, like how Sheng Yi got vanished. From the Spirits’ World, where Sheng Yi passed away healed the exact same serenity as well before right away. Only a wrecked bone fragments tower hovered there by yourself. This gal transpired to be Sheng Jin! In terms of Jian Chen, he got still left the area in the past, now travelling to the passageway. There is nobody else in the living space of total silence. Furthermore, he possessed died whenever the bone fragments tower shattered. In the event the solid were definitely willing, they might freely discover each individual action the vulnerable created. 
Additionally, he experienced died in the event the bone tower shattered. “Our Myriad Bone Guild will be looking into this make any difference carefully. Jian Chen, I would require out of here initial.” Having a influx of his hands, strong energy promptly enveloped Jian Chen, and this man vanished. Every thing was pointless unless they may completely remove all remnants that they had put aside in room or space. “Sheng Yi has passed away. He didn’t disappear. When the bone tower shattered, he obtained already died…” After studying almost everything, the Heartless Child’s eye twinkled, and the man sank into his feelings.
package com.officialsounding.crypto.test;

import static org.junit.Assert.*;

import java.security.InvalidKeyException;

import org.junit.Before;
import org.junit.Test;

import com.officialsounding.crypto.cipher.Mirdek;
import com.officialsounding.crypto.util.Card;
import com.officialsounding.crypto.util.Card.Rank;
import com.officialsounding.crypto.util.Card.Suit;

/**
 * Unit tests for the Mirdek card cipher. The test class extends Mirdek so it
 * can call the internal deck-manipulation methods and inspect the decks
 * (l, r, d) directly.
 */
public class MirdekTest extends Mirdek {

    String IV = "IPDZO WKGST VARME QYBCF JNHUL";
    String Key = "KEYPHRASE";
    String PT1 = "PLAIN TEXTX";
    String PT2 = "PLAINTEXT";
    String PT3 = "PLAIN, TEXT";
    String CT = IV + " OYNYG IMYOE";
    String CTnoIV = "OYNYG IMYOE";
    String badKey = "fasdf$%312";

    @Before
    public void setUp() throws Exception {
        initalizeDeck(true, IV);
    }

    @Test
    public void testEncrypt() throws InvalidKeyException {
        initialize(Key, IV);
        String ct = encrypt(PT3);
        assertTrue(CT.equals(ct));
    }

    @Test
    public void testDecrypt() throws InvalidKeyException {
        initialize(Key, IV);
        String pt = decrypt(CT, true);
        assertTrue(PT1.equals(pt));
    }

    @Test
    public void testInitializeDeck() throws InvalidKeyException {
        assertTrue(charToCard('X', true).equals(r.get(0)));
        assertTrue(charToCard('I', true).equals(r.get(r.size() - 1)));
        initalizeDeck(false, new String());
        assertTrue(iv.length() == 29);
    }

    @Test
    public void testKeyDeck() throws InvalidKeyException {
        keyDeck(Key);
        assertTrue(charToCard('T', false).equals(l.get(0)));
        assertTrue(charToCard('V', false).equals(l.get(l.size() - 1)));
    }

    @Test
    public void testMixDeck() throws InvalidKeyException {
        keyDeck(Key);
        mixDeck();
        assertTrue(charToCard('T', false).equals(l.get(0)));
        assertTrue(charToCard('V', false).equals(l.get(l.size() - 1)));
        assertTrue(charToCard('P', true).equals(r.get(0)));
        assertTrue(charToCard('X', true).equals(r.get(r.size() - 1)));
    }

    @Test
    public void testMoveDeckFlip() {
        moveDeck(l, d, true);
        assertTrue(charToCard('Z', false).equals(d.get(0)));
        assertTrue(charToCard('A', false).equals(d.get(d.size() - 1)));
    }

    @Test
    public void testMoveDeckNoFlip() {
        moveDeck(l, d, false);
        assertTrue(charToCard('A', false).equals(d.get(0)));
        assertTrue(charToCard('Z', false).equals(d.get(d.size() - 1)));
    }

    @Test
    public void testCountedCut() throws InvalidKeyException {
        countedCut();
        assertTrue(charToCard('Y', false).equals(l.get(0)));
        assertTrue(charToCard('X', false).equals(l.get(l.size() - 1)));
        assertTrue(charToCard('L', true).equals(r.get(0)));
        assertTrue(charToCard('I', true).equals(r.get(r.size() - 1)));
        assertTrue(d.size() == 1);
    }

    @Test
    public void testLetterSearchEven() throws InvalidKeyException {
        countedCut();
        int count = letterSearch('K', false);
        assertTrue(count + " ", count == 13);
        assertTrue(charToCard('L', false).equals(l.get(0)));
        assertTrue(charToCard('Z', false).equals(l.get(l.size() - 1)));
    }

    @Test
    public void testLetterSearchOdd() throws InvalidKeyException {
        countedCut();
        int count = letterSearch('L', false);
        assertTrue(count + " ", count == 14);
        assertTrue(charToCard('M', false).equals(l.get(0)));
        assertTrue(charToCard('Y', false).equals(l.get(l.size() - 1)));
    }

    @Test
    public void testCharToCard() {
        assertTrue(charToCard('a', true).equals(new Card(Rank.ACE, Suit.SPADES)));
        assertTrue(charToCard('a', false).equals(new Card(Rank.ACE, Suit.CLUBS)));
        assertTrue(charToCard('x', true).equals(new Card(Rank.JACK, Suit.DIAMONDS)));
        assertTrue(charToCard('x', false).equals(new Card(Rank.JACK, Suit.HEARTS)));
    }

    @Test(expected = InvalidKeyException.class)
    public void testBadKey() throws InvalidKeyException {
        keyDeck(badKey);
    }

    @Test(expected = InvalidKeyException.class)
    public void testBadIV() throws InvalidKeyException {
        initalizeDeck(true, badKey);
    }
}
// array iterators: forEach, map, filter, find, reduce
// iterate over an array without needing a for loop

// forEach
// does not change the size of the original array
// const cislo = [0, 1, 2, 3, 4];
// for (let i = 0; i < cislo.length; i++) {
//   console.log(cislo[i]);
// }

const ludia = [
  { meno: 'feri', vek: 30, pozicia: 'programator' },
  { meno: 'janko', vek: 22, pozicia: 'programator' },
  { meno: 'iveta', vek: 25, pozicia: 'programator' },
  { meno: 'dusi', vek: 20, pozicia: 'dizajner' },
  { meno: 'traktorista', vek: 64, pozicia: 'boss' }
];

// const zobrazCloveka = clovek => {
//   // callback
//   console.log(clovek);
// };
// ludia.forEach(zobrazCloveka);

// the same thing, written as an inline arrow function
ludia.forEach((clovek, index) => {
  console.log('clovek', clovek, index);
});

// map builds a new array out of the values of the original array
// it does not change the size of the original array
const vekLudi = ludia.map((clovek, index) => {
  console.log(index);
  return clovek.vek + 10;
});
console.log('vekLudi', vekLudi);

const menaLudi = ludia.map(clovek => {
  return clovek.meno;
});
console.log('menaLudi', menaLudi);

const noviLudia = ludia.map(clovek => {
  return {
    prveMeno: clovek.meno.toUpperCase(),
    novyVek: clovek.vek + 20
  };
});
console.log('noviLudia', noviLudia);

// filter
// changes the size of the array
// filter evaluates the logic inside the callback and returns the values we are looking for
// if no value in the array matches, it returns an empty array
const mladyludia = ludia.filter(clovek => {
  return clovek.vek <= 30;
});
console.log('mladyludia', mladyludia);

const programatori = ludia.filter(clovek => {
  return clovek.pozicia === 'programator';
});
console.log('programatori', programatori);

const mladiProgramatori = ludia.filter(clovek => {
  return clovek.vek < 25 && clovek.pozicia === 'programator';
});
console.log('mladiProgramatori', mladiProgramatori);

const ludia2 = [
  { id: 1, meno: 'feri', vek: 30, pozicia: 'programator' },
  { id: 2, meno: 'janko', vek: 22, pozicia: 'programator' },
  { id: 3, meno: 'iveta', vek: 25, pozicia: 'programator' },
  { id: 4, meno: 'dusi', vek: 20, pozicia: 'dizajner' },
  { id: 5, meno: 'traktorista', vek: 64, pozicia: 'boss' }
];

// find
// returns undefined if the element is not found
// great for getting a single unique value out of an array
const clovekId = ludia2.find(clovek => {
  return clovek.id === 0;
});
console.log('clovekId', clovekId);

// reduce
// combines the currently iterated value with the previously accumulated value
// 1st parameter: accumulator (acc) – the running total of the calculation
// 2nd parameter: currentValue (curr) – the currently iterated value
const ludia3 = [
  { id: 1, meno: 'feri', vek: 30, pozicia: 'programator', plat: 2000 },
  { id: 2, meno: 'janko', vek: 22, pozicia: 'programator', plat: 900 },
  { id: 3, meno: 'iveta', vek: 25, pozicia: 'programator', plat: 1900 },
  { id: 4, meno: 'dusi', vek: 20, pozicia: 'dizajner', plat: 1200 },
  { id: 5, meno: 'traktorista', vek: 64, pozicia: 'boss', plat: 9000 }
];

// const scitaniePlatov = ludia3.reduce((previousValue, currentValue) => {
//   console.log('total', previousValue);
//   console.log('current salary', currentValue.plat);
//   previousValue += currentValue.plat;
//   return previousValue;
// }, 0);
// console.log('scitaniePlatov', scitaniePlatov);

// accumulator - acc
// currentValue - curr
const scitaniePlatov = ludia3.reduce((acc, curr) => {
  console.log('total', acc);
  console.log('current salary', curr.plat);
  acc += curr.plat;
  return acc;
}, 0);
console.log('scitaniePlatov', scitaniePlatov);
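Because filter and map both return new arrays, these iterators chain together naturally. The sketch below is a new example (not part of the original lesson) that reuses the same `ludia3` data, redeclared here so the snippet is self-contained:

```javascript
// same data as ludia3 in the lesson, repeated so this snippet runs on its own
const ludia3 = [
  { id: 1, meno: 'feri', vek: 30, pozicia: 'programator', plat: 2000 },
  { id: 2, meno: 'janko', vek: 22, pozicia: 'programator', plat: 900 },
  { id: 3, meno: 'iveta', vek: 25, pozicia: 'programator', plat: 1900 },
  { id: 4, meno: 'dusi', vek: 20, pozicia: 'dizajner', plat: 1200 },
  { id: 5, meno: 'traktorista', vek: 64, pozicia: 'boss', plat: 9000 }
];

// total salary of all programmers: filter -> map -> reduce in one chain
const sucetPlatovProgramatorov = ludia3
  .filter(clovek => clovek.pozicia === 'programator') // keep programmers only
  .map(clovek => clovek.plat)                         // extract the salaries
  .reduce((acc, plat) => acc + plat, 0);              // sum them up

console.log('sucetPlatovProgramatorov', sucetPlatovProgramatorov); // 4800
```

Each step returns a new array (or, for reduce, a single value), so the original `ludia3` is left untouched.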
Select rows where a change has occurred in a field and join them to another table

I have the following two tables:

Table 1
- datetime (datetime)
- code1 (int)
- code2 (int)

Table 2
- code2 (int)
- description (text)

An example of the data (the starred rows are the ones I want returned):

       DateTime                 Code1  Code2
    ** 14/11/2016 6:55:00 PM      6     21
       14/11/2016 6:56:00 PM      6     21
    ** 14/11/2016 6:57:00 PM      6     23
    ** 14/11/2016 6:58:00 PM      6     28
       14/11/2016 6:59:00 PM      6     28
       14/11/2016 7:00:00 PM      6     28
    ** 14/11/2016 7:01:00 PM      6     22
    ** 14/11/2016 7:02:00 PM      6     23
       14/11/2016 7:03:00 PM      6     23
       14/11/2016 7:04:00 PM      6     23
    ** 14/11/2016 7:05:00 PM      6     27
    ** 14/11/2016 7:06:00 PM      5      8
    ** 14/11/2016 7:07:00 PM      5      9
       14/11/2016 7:08:00 PM      5      9
    ** 14/11/2016 7:09:00 PM      5     11
    ** 14/11/2016 7:10:00 PM      5     12
       14/11/2016 7:11:00 PM      5     12
    ** 14/11/2016 7:12:00 PM      5     14
    ** 14/11/2016 7:13:00 PM      5     15
       14/11/2016 7:14:00 PM      5     15
    ** 14/11/2016 7:15:00 PM      5     17

I would like to run a query on SQL Server 2012 Express that returns only the starred rows (every row where the code values change from the previous row) and then joins the returned data to the description table based on code2, resulting in the following output:

    DateTime                 Code1  Code2  Description
    14/11/2016 6:55:00 PM      6     21    some text
    14/11/2016 6:57:00 PM      6     23    some text
    14/11/2016 6:58:00 PM      6     28    some text
    14/11/2016 7:01:00 PM      6     22    some text
    14/11/2016 7:02:00 PM      6     23    some text
    14/11/2016 7:05:00 PM      6     27    some text
    14/11/2016 7:06:00 PM      5      8    some text
    14/11/2016 7:07:00 PM      5      9    some text
    14/11/2016 7:09:00 PM      5     11    some text
    14/11/2016 7:10:00 PM      5     12    some text
    14/11/2016 7:12:00 PM      5     14    some text
    14/11/2016 7:13:00 PM      5     15    some text
    14/11/2016 7:15:00 PM      5     17    some text

Regards, Mark

This answer assumes that the granularity in the time column is fixed at one-minute intervals. (It also, as requested, doesn't use window functions.)
Select a.*, description
From #tbl1 As a
Left Join #tbl1 As b
  On a.datetime = DateAdd(Minute, 1, b.datetime)
  And a.code1 = b.code1
  And a.code2 = b.code2
Left Join #tbl2
  On a.code2 = #tbl2.code2
Where b.datetime Is Null;

Thanks, mendosi. I have only done a little bit of testing, but this seems to work for me. I am, however, a little confused by the

On a.datetime = DateAdd(Minute, 1, b.datetime)
And a.code1 = b.code1
And a.code2 = b.code2

Could you elaborate? Also, how would I amend the code to select only the rows with Code1 = 3, for example?

@beliskna That is just testing whether the row one minute earlier had the same code1 and code2. If so, then don't return the current row. As to the other question, you would change the last line to

Where b.datetime Is Null And a.code1 = 3;

In order to select only rows where Code2 changes, you can partition over code2:

select DateTime, Code1, Code2
from (
  select *, row_number() over (partition by code2 order by datetime) c
  from table1
) t
where c = 1

This is the difficult part; we can augment this with a join to table2:

select t.DateTime, t.Code1, t.Code2, t2.Description
from (
  select *, row_number() over (partition by code2 order by datetime) c
  from table1
) t
left join table2 t2 on t.code2 = t2.code2
where t.c = 1

Thanks, but I don't think the functions you have suggested are supported in SQL Server Express.

You can also use the LEAD function:

WITH CteLead AS (
  SELECT *,
    ldCode2 = LEAD(Code2) OVER (PARTITION BY code1 ORDER BY datetime)
  FROM Tbl1
)
SELECT datetime, code1, code2, t2.description
FROM CteLead cl
INNER JOIN Table2 t2 ON t2.code2 = cl.code2
WHERE code2 <> ldCode2 OR ldCode2 IS NULL
ORDER BY datetime;

I don't have much success using the LEAD, LAG and OVER statements with SQL Server Express; they appear to be unsupported?

SQL Server 2012 supports the LEAD function.
But I am using SQL Server Express 2012, and it reports that the function is unsupported. Or maybe it is the Reporting Services component that does not support the function?

Seems like you want to get every first row (based on datetime) for each group of code1, code2. Use the row_number analytic function to assign numbers and then take only the first row of each group:

select
  first_occurence.datetime,
  first_occurence.code1,
  first_occurence.code2,
  table2.description
from (
  select *
  from (
    select *, row_number() over (partition by code1, code2 order by datetime) as rn
    from table1
  ) table1
  where rn = 1
) first_occurence
join table2 on first_occurence.code2 = table2.code2

After going through your desired output again, it seems like the above may not be enough. I'm not sure about the logic, but I assume that a particular hour in a day also makes the group (in your example code1 = 6, code2 = 23), so add this to the PARTITION BY clause:

convert(varchar(10), datetime, 103) -- date without time
datepart(hour, datetime)            -- only the hour

select
  first_occurence.datetime,
  first_occurence.code1,
  first_occurence.code2,
  table2.description
from (
  select *
  from (
    select *,
      row_number() over (
        partition by code1, code2, convert(varchar(10), datetime, 103), datepart(hour, datetime)
        order by datetime
      ) as rn
    from table1
  ) table1
  where rn = 1
) first_occurence
join table2 on first_occurence.code2 = table2.code2

Thanks, but I don't think the functions you have suggested are supported in SQL Server Express.

Check this: http://www.sql-server-performance.com/forum/threads/is-row_number-function-exist-in-express.2509/

I have run the command and it reports the database version is 2012. I am wondering if it is the Reporting Services element that doesn't support the function?
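Outside SQL, the core logic all of these answers implement ("keep a row only when its (code1, code2) pair differs from the previous row, ordered by datetime, then attach the description") can be sketched in a few lines. This Python illustration is not from any of the answers above; the times and description texts are stand-ins for the sample data:

```python
# Sample rows (time, code1, code2), already ordered by datetime
rows = [
    ("6:55", 6, 21), ("6:56", 6, 21), ("6:57", 6, 23),
    ("6:58", 6, 28), ("6:59", 6, 28), ("7:00", 6, 28),
    ("7:01", 6, 22), ("7:02", 6, 23), ("7:03", 6, 23),
]
# Stand-in for Table 2 (code2 -> description)
descriptions = {21: "text 21", 22: "text 22", 23: "text 23", 28: "text 28"}

changes = []
prev = None
for dt, code1, code2 in rows:
    if (code1, code2) != prev:  # keep only rows where the code pair changed
        changes.append((dt, code1, code2, descriptions.get(code2)))
    prev = (code1, code2)

print(changes)
# keeps 6:55, 6:57, 6:58, 7:01 and 7:02 -- matching the starred rows
```

Note that, like the self-join and LEAD/LAG answers, this compares each row only to its immediate predecessor, which is why code2 = 23 can legitimately appear more than once in the output.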
Visio Token Review

When investing in virtual currency, you need to do your research. We provide reviews of the world's top cryptocurrencies so that you can find the best crypto coins for you to invest in. This review of Visio Token consists of three chapters: origin & background, technology, and pros & cons.

Origin & Background

Visio launched online at VisioPlatform.com. It helps content creators share their work with a worldwide audience. The Visio platform not only enables content creators to distribute their work but also allows content consumers to pay for work they appreciate. Transactions on Visio are carried out using Visio Tokens. The platform plans to host a wide variety of content, including film, music, documents, photos, and more.

Overall, Visio intends to be the world's first completely decentralized content aggregation platform. It is a "DCAP", or Distributed Content Aggregation Platform. In general, Visio is a distributed content management system, or CMS. Its databases are mirrored using data stored in a blockchain, and Visio uses IPFS as its data storage system. The team chose IPFS because it is open source and under constant development, and because it is a modern distributed data storage system, which makes it ideal for the requirements of a decentralized content management system like Visio. Visio will index, search, and sort data on its network in various ways within the IPFS network. The network is portable because it is not bound to IP addresses.

Of the total supply of Visio Tokens, 50 million were reserved for the ICO, 10 million for the development fund, and 2 million for bounties. As of February 2018, one Visio Token is worth about $0.02 USD, with a market cap of around $1 million.

- PoS Supply: unlimited, with a 1% increase
- PoW Rewards: 4 Visio Tokens per block until block no. 250,000
- PoS: 1 Visio Token per block
- Initial Supply: 62 million Visio Tokens
- Block Time: 1 minute
- Algorithm: X13

- Resistance to actions by ISPs and external authorities through the use of decentralized applications
- Peer-to-peer streaming with no requirement for a centralized content server
- User-directed aggregation, including all movies and subtitles
- An incentivized reward system using the Visio Token
- User-led moderation with no centralized moderation authority
- A file storage and hosting method that enables a low barrier to entry for content addition

Finally, we have not identified any obvious cons with the Visio Token. If you have concluded that this is the coin for you, congratulations! Go for it! Buy Visio Token here.
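The supply and market-cap figures quoted above can be cross-checked with simple arithmetic, using only the numbers stated in this review (and assuming the full initial supply is valued at the quoted $0.02 price):

```python
# Token allocations as stated in the review
ico = 50_000_000        # tokens reserved for the ICO
dev_fund = 10_000_000   # development fund
bounties = 2_000_000    # bounties
initial_supply = 62_000_000

price_usd = 0.02        # quoted price per Visio Token, February 2018
market_cap = initial_supply * price_usd

print(ico + dev_fund + bounties == initial_supply)  # the allocations add up
print(market_cap)  # roughly 1.24 million USD, consistent with "around $1 million"
```

So the three allocations exactly account for the initial supply, and the implied market cap is about $1.2 million, in line with the "around $1 million" figure above.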
I recently had to install Windows 11 on a FreeDOS machine, and I hit a couple of walls in the process:

- My main problem was that I didn't have access to another Windows machine to download the `.exe` files from the Windows 11 downloads page.
- The second issue was that the machine only accepted FAT32 devices to boot from, and the Windows 11 ISO contains a file larger than 4 GB (the maximum allowed size for a file in the FAT32 file system).

So I decided to document what worked for me to overcome these issues.

Update 🤦‍♂️: It turned out that there was a much easier and more straightforward way to solve the issue using macOS.

1. Install Ubuntu

I only had access to a macOS machine, so these are the steps I followed:

- Download the Ubuntu ISO.
- Use the Balena Etcher software to burn the ISO onto a USB drive.
  - Note: I tried to use the same software for burning the Windows 11 ISO, but that didn't work due to the 4 GB file size restriction.
- Boot Ubuntu on the FreeDOS machine using the USB drive, and install it.

2. Download the Windows 11 ISO

This is an easy step: just download the Windows 11 multi-edition ISO in your desired language.

3. Format the USB drive

- Connect the USB drive to the machine running Ubuntu.
- Open the Disks utility.
- Select the USB drive.
- Tap on Format → Partition using MBR / DOS → Format.
- Create a partition → Select FAT type.

Note: This will delete all the files from the USB, so it's ready for us to burn the Windows image.

4. Split the `install.wim` file

We need to split the `install.wim` file inside the Windows 11 ISO, given it is larger than 4 GB and exceeds the FAT32 file size limit.

- Mount the Windows 11 ISO on your Ubuntu machine:
  - Right click the ISO.
  - Open With → Disk Image Mounter.
- Open the mounted image in the file system.
- Copy the `install.wim` file inside the `sources` folder to your Desktop.
- Install wimtools from the console: `sudo apt-get install wimtools`.
- Split the `install.wim` file into smaller files: `wimlib-imagex split install.wim install.swm 4000`.

5. Prepare the bootable USB

- Copy all the files from the mounted ISO image to the USB drive except for the `install.wim` file.
- Copy all the `install.swm` files generated before and paste them in the `sources` folder of your USB drive.

🎉 Now you have a bootable Windows 11 image on your USB drive. 🎉
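For reference, the 4 GB ceiling mentioned throughout comes from FAT32 storing file sizes in a 32-bit field, so the largest possible file is 2^32 − 1 bytes. A quick check of the numbers (the 5 GiB figure below is a hypothetical `install.wim` size, not a measurement):

```python
FAT32_MAX_FILE_BYTES = 2**32 - 1      # 4 GiB minus one byte: 4294967295
hypothetical_wim_bytes = 5 * 1024**3  # assume a ~5 GiB install.wim

# The whole file doesn't fit on FAT32, hence the split step above
print(hypothetical_wim_bytes > FAT32_MAX_FILE_BYTES)

# The split command used a part size of 4000 (MiB); each part fits
part_cap_bytes = 4000 * 1024**2
print(part_cap_bytes <= FAT32_MAX_FILE_BYTES)
```

This is also why 4000 (rather than 4096) is a comfortable part size: it leaves headroom under the exact limit.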
I have a Cyberoam firewall in our hotel that I'm trying to troubleshoot. I have five interfaces:

- Port A - LAN (192.168.0.1)
- Port B - WAN
- Port C - 4ipnet Wi-Fi controller (192.168.52.1)
- Port D - LAN (192.168.180.1)
- Port E - LAN (10.10.1.7)

All ports can ping each other. 4ipnet is my Wi-Fi controller; it has DHCP enabled for guest IP addresses and its default IP is 172.16.10.1. We plan to integrate the 4ipnet with our hotel PMS server, which has an IP address of 192.168.0.3, but it's not connecting. Is there a firewall rule for this? I've tried putting a 172.16.10.x IP on my PC and I can ping and still remote into my hotel PMS server, but on the other hand the PMS server can't ping me.

I'm not familiar with that setup, but the firewall has no knowledge of the 172.16.10 network, therefore it can't send any traffic to it. Try adding a route in the firewall for 172.16.10.0/24 pointing to the Wi-Fi controller's 192.168.52 address. This will tell the firewall that if it needs to get a packet back to something on 172.16.10, it needs to route it through the 4ipnet controller.

Something I try to keep in mind is that the separation of the guest and PMS networks is likely intentional, and may be mandated, since the PMS will hold private guest data and may also have employee and credit card data. If separation of the guest and PMS systems is not a concern, you may want to evaluate removing the additional network zone rather than linking them.

Hi Sirs, I just found out that my Wi-Fi controller has an external IP address of 192.168.52.2. Is this IP still in the guest network, even though the DHCP address range for guest Wi-Fi is 172.16.10.55 to 172.16.16.254? Correct me if I'm wrong, but don't we have to connect the guest Wi-Fi to the PMS server? I need the guest name and room number from the PMS to create a username and password for the guest Wi-Fi login page.

The PMS (or a system on behalf of the PMS) communicates with the captive portal (sign-in page) that grants Wi-Fi access, to exchange the information needed to assign logon credentials or to track usage for billing, if applicable. The end-user device communicates with the captive portal. End-user devices should not have access to the PMS or other sensitive resources.

The captive portal can be built into the Wi-Fi controller (it can be on the 4ipnet), or be separate (on the Cyberoam, another device, or hosted). The PMS may communicate with the portal directly, or there may be a separate interface PC for integrations with door management, room charges, phones, Wi-Fi, etc. Many devices support configurations which are inadvisable, which can lead to downtime and liability concerns.
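The suggested fix can be illustrated with a toy longest-prefix-match lookup using Python's standard `ipaddress` module. The route table below is hypothetical, modelled on the addresses mentioned in this thread, and is only a sketch of how a router decides where replies go, not how Cyberoam implements routing:

```python
import ipaddress

# Directly connected networks the firewall already knows about (Ports A, C, D, E)
routes = {
    ipaddress.ip_network("192.168.0.0/24"): "Port A (direct)",
    ipaddress.ip_network("192.168.52.0/24"): "Port C (direct)",
    ipaddress.ip_network("192.168.180.0/24"): "Port D (direct)",
    ipaddress.ip_network("10.10.1.0/24"): "Port E (direct)",
}

def lookup(ip, table):
    """Longest-prefix match; None means the firewall has no route back."""
    ip = ipaddress.ip_address(ip)
    matches = [net for net in table if ip in net]
    return table[max(matches, key=lambda n: n.prefixlen)] if matches else None

guest = "172.16.10.55"  # a guest device from the 4ipnet DHCP range
print(lookup(guest, routes))  # None: replies to the guest network get dropped

# Add the static route from the answer: 172.16.10.0/24 via the controller
routes[ipaddress.ip_network("172.16.10.0/24")] = "via 192.168.52.2 (4ipnet)"
print(lookup(guest, routes))  # now routable through the Wi-Fi controller
```

This is exactly why the guest PC could reach the PMS (the controller NATs or forwards outbound traffic) while the PMS could not initiate a ping back: the return path didn't exist until the route was added.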
Pattern Match Exercises

I'm trying to download an animation, but it isn't working. What should I do?

Some animations in this Companion Website require the Adobe® Flash® Player. To obtain the free Adobe® Flash® Player, visit: http://www.adobe.com/downloads/ and click on the appropriate link.

Why can't I open the PDF files?

If you have problems opening PDF files, you may not have the required version of Adobe® Reader®. To download a free version of Adobe® Reader®, visit the Adobe site at: www.adobe.com/products/acrobat/readstep.html

I'm a PC user and I'm having trouble downloading Microsoft® Word / Microsoft® Excel files. Can you help?

Clicking on file links will result in different behaviours on different systems, depending on how your system is configured. Left-clicking on the links once will most likely result in the file being opened in your browser's window. If you would prefer to save any of the Word or Excel files directly to your hard drive, simply point to the link, right click on your mouse, and select 'Save Target as...'. To open the link in a new window, simply point to the link, right click on your mouse, and select 'Open in New Window'.

I'm a MAC user and I'm having trouble downloading Microsoft® Word / Microsoft® Excel files. Can you help?

Clicking on file links will result in different behaviours on different systems, depending on how your system is configured. Left-clicking on the links will most likely result in the file being downloaded to the default location on your hard drive. Find the file and open it by clicking on it. If the file has a generic icon and fails to open, you can try dragging the file to the application's icon. For example, if it is a Microsoft® Word document, try dragging the file to Microsoft® Word's icon. Alternatively, you could open Microsoft® Word and then use the Open command and navigate to the appropriate file and open it that way.

Why can't I open up the '.gsp' files that I downloaded?

Some of our technology applications make use of The Geometer's Sketchpad files. If you do not have this program installed on your machine, you can order the Instructor's Evaluation version from the Keypress website: http://www.dynamicgeometry.com/instructor_resources/evaluation_edition/index.php

When opening the Sketchpad files for the first time, you may need to show your computer where you have saved the program. If an 'Open With' Windows box appears with a list of programs, select 'Other' and locate the folder where you saved The Geometer's Sketchpad. Click Open. Tick the check box that reads 'Always use this program to open these files' and click OK. From now on, all files with the extension '.gsp' will attempt to be opened with The Geometer's Sketchpad.

Please note: the demo version will allow you to view and use the Sketchpad files, but not save changes or print them out. To purchase the full version, contact:

W & G Australia Pty Ltd.
49 - 51 Enterprise Avenue
PO Box 377
Berwick VIC 3806
Tel: (03) 9796 1177

I pressed 'Reset' at the end of one of the drag-and-drop activities, but not all of the labels went back to the starting box. What should I do?

This tends to occur if you are in the middle of dragging an item over to the answer, but your time runs out. The best thing to do is to click on the 'Refresh' or 'Reload' button that appears on your Internet browser's toolbar. Alternatively, you can press the 'Start' button in the activity itself, and manually drag-and-drop the label to the desired position.

I can't read the labels in the drag-and-drop activities, as the font is too small. Can I make it larger?

The drag and drop activities are viewed best on a full screen. You need to resize the window so that it takes up the whole screen of your computer. If you are using a PC, you can use the 'maximise' button that appears in the top right corner of the window. The button features a square on it, and appears next to the 'close' button, which features an 'X'. MAC computers also have this feature but it may be located elsewhere, depending on the version of the operating system.

When I go to one of the drag-and-drop activities, an Adobe® window pops up, asking me if I want to install and run 'Adobe® Flash® Player'. Should I press 'Yes' or 'No'?

All of our drag and drop activities require Adobe® Flash® Player to function properly. If you have an older version of the program, you will need to download the current version, so press 'Yes'.

Pattern Match Exercises

I've entered the correct answer in the box, but the grading tool is marking it as incorrect. What's going on?

The grading tool is programmed to accept answers in a particular format only. For example, you may have entered '$20' into the answer box, but it will actually only grade the term '20' as correct. To avoid this problem, please read the question very carefully – symbols like ° or $ may already be part of the question, so you will not need to include them in your answer. Please also read the instructions at the beginning of the pattern match exercises for each chapter – these will state things like the number of decimal places required in your answer, or whether you need to spell out or use a digit in your answer.

Why does my browser open up a blank window in addition to the file I've downloaded?

Clicking directly on a link to a downloadable file will result in different behaviour depending on your computer platform (eg. PC/MAC) or your internet browser (eg. Internet Explorer/Safari). Some will commence a download immediately, whilst others will open up an unnecessary blank screen in addition to the file you clicked on. You may have also noticed that some browsers will open a file directly within its correct application (eg. Microsoft® Word or Adobe® Reader®) whilst others will open it within a browser window and reduce your available options.

My question isn't here. What should I do now?

You are welcome to contact us at firstname.lastname@example.org and we will contact you as soon as possible.
In this article, we will explore the topic of utilizing stored procedures in Snowflake, a powerful cloud data platform. Stored procedures offer a way to store and execute SQL statements, enabling the creation of complex and reusable logic within your data solutions. We will cover everything from understanding the concept of stored procedures to modifying and deleting them, as well as incorporating best practices for optimal performance and security considerations. Understanding Stored Procedures in Snowflake Let's start by defining what stored procedures are. In Snowflake, a stored procedure is a named block of code that can be executed in a single call. It can consist of SQL statements, control statements, and optional procedure-specific parameters. Stored procedures are essential in Snowflake for various reasons. They promote code reusability, help encapsulate business logic, enhance security, and improve overall performance. Definition of Stored Procedures A stored procedure is a collection of SQL and control statements that are stored as a database object. It provides a way to encapsulate and organize complex logic, making it easier to maintain and reuse. By creating a stored procedure, you can execute multiple statements in a single call, reducing network latency and improving overall efficiency. For example, imagine a scenario where you need to perform a series of data transformations and calculations on a large dataset. Instead of executing each SQL statement individually, you can create a stored procedure that contains all the necessary logic. This not only simplifies the code but also reduces the number of round trips between the client and the server, resulting in faster execution. Furthermore, stored procedures allow you to modularize your code. You can break down complex tasks into smaller, more manageable units, making it easier to debug, maintain, and update your codebase. 
This modular approach also promotes code reusability, as you can call the same stored procedure from multiple parts of your application. Importance of Stored Procedures in Snowflake Stored procedures play a crucial role in Snowflake by streamlining data-related tasks and promoting consistency. They enable developers to centralize business logic, reducing redundancy and allowing for easier maintenance. One of the key advantages of using stored procedures is the ability to encapsulate complex business rules and calculations. Instead of scattering the logic across multiple SQL queries or application code, you can consolidate it within a stored procedure. This not only improves code readability but also makes it easier to update the logic when business requirements change. Moreover, stored procedures enhance data security by limiting direct access to sensitive data and enforcing proper authorization controls. By granting execute permissions on the stored procedure while restricting direct table access, you can ensure that only authorized users can interact with the underlying data. This helps protect sensitive information and prevents unauthorized modifications. Additionally, stored procedures can significantly improve performance in Snowflake. By executing multiple SQL statements in a single call, you reduce the overhead of network latency and communication between the client and the server. This can be particularly beneficial when dealing with large datasets or complex data transformations. In summary, stored procedures in Snowflake provide a powerful mechanism for encapsulating and executing complex logic. They promote code reusability, enhance security, and improve overall performance. By leveraging stored procedures, you can streamline data-related tasks, maintain consistency, and ensure efficient execution of your code. 
Setting up Stored Procedures in Snowflake Before diving into the process of creating stored procedures in Snowflake, it is important to understand the prerequisites and steps involved. This will ensure a smooth and successful implementation of stored procedures in your Snowflake account. Prerequisites for Creating Stored Procedures In order to create and execute stored procedures, there are a few prerequisites that need to be taken into account: - Privileges: You must have the necessary privileges granted to your Snowflake user account. These privileges include the CREATE PROCEDURE privilege and appropriate access to the underlying database objects. Without these privileges, you will not be able to create or execute stored procedures. - SQL Knowledge: It is essential to have a basic understanding of SQL. This includes familiarity with querying, data manipulation, and database concepts. Having a solid foundation in SQL will greatly assist you in writing the logic for your stored procedures. - Snowflake Scripting Syntax: Familiarity with Snowflake's scripting syntax is also crucial. Snowflake provides a set of scripting constructs and syntax elements that enable you to write powerful and efficient stored procedures. Understanding these syntax elements will help you in defining the body of your stored procedures. Step-by-Step Guide to Creating a Stored Procedure Now that you are aware of the prerequisites, let's dive into the step-by-step process of creating a stored procedure in Snowflake: - Connect to Snowflake: Start by connecting to your Snowflake account using a SQL client or the Snowflake web interface. This will provide you with the necessary environment to create and execute stored procedures. - Check Privileges: Ensure that you have the necessary privileges to create procedures and access the required database objects. Without the appropriate privileges, you will encounter errors during the creation or execution of your stored procedures. 
- Write the Logic: Write the SQL statements that make up the logic of your stored procedure. Consider the desired input and output parameters, as they will play a crucial role in defining the functionality of your stored procedure. - Define the Procedure: Using the CREATE PROCEDURE statement, define the name, input parameters, and the body of your stored procedure. This step is where you bring together the logic you wrote in the previous step and encapsulate it within a procedure. - Create the Procedure: Execute the CREATE PROCEDURE statement to create the procedure in your Snowflake account. This will register the procedure and make it available for execution. By following these steps, you will be able to successfully set up and create stored procedures in Snowflake. Stored procedures provide a powerful way to encapsulate complex logic and improve the efficiency and reusability of your SQL code. Executing Stored Procedures in Snowflake Once you have created a stored procedure, you can execute it whenever needed. This allows you to automate complex tasks and streamline your data processing workflows. Let's explore different aspects of executing stored procedures in Snowflake. How to Call a Stored Procedure To call a stored procedure in Snowflake, you need to use the CALL statement followed by the name of the procedure and its input parameters (if any). This allows you to pass specific values or variables to the procedure, enabling dynamic execution and flexibility. When calling a stored procedure, Snowflake executes the statements defined in the procedure's body, providing the desired functionality. This can include data manipulation, calculations, or any other operations that you have defined within the procedure. For example, if you have a stored procedure that calculates the total sales for a specific product, you can call the procedure and pass the product ID as a parameter. 
The procedure will then retrieve the necessary data, perform the calculations, and return the result. Handling Output from Stored Procedures Stored procedures in Snowflake can output data or return result sets. This allows you to retrieve and utilize the results of the procedure for further analysis or application workflows. There are different ways to handle the output from stored procedures in Snowflake. One approach is to use a SELECT statement to capture the output. This allows you to retrieve specific columns or values returned by the procedure. Another approach is to leverage Snowflake's result set support: a Snowflake Scripting procedure can return a result set directly (for example with RETURN TABLE(...)), which the caller can then access and process using the appropriate SQL statements. By properly handling the output from stored procedures, you can process and utilize the returned data as needed for further analysis or application workflows. This can include generating reports, updating other tables, or integrating the data with external systems. Overall, executing stored procedures in Snowflake provides a powerful way to automate and streamline your data processing tasks. By defining reusable procedures, you can save time and effort, while ensuring consistent and reliable results. Modifying and Deleting Stored Procedures As your data solutions evolve, you may need to modify or remove existing stored procedures. Let's explore how to make changes to stored procedures in Snowflake. Updating Stored Procedures To update a stored procedure's logic in Snowflake, use the CREATE OR REPLACE PROCEDURE statement, which redefines the procedure without an explicit drop; the ALTER PROCEDURE statement is limited to renaming the procedure or changing its properties (such as its comment), not its body. When updating a stored procedure, it's essential to consider potential dependencies and communicate any changes effectively to ensure compatibility with other components of your data solution.
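A minimal end-to-end sketch of this lifecycle — define, call, and eventually drop — might look as follows. The procedure name, table, and parameter are invented for illustration, and the body again assumes Snowflake Scripting:

```sql
-- Define (or redefine) the procedure:
CREATE OR REPLACE PROCEDURE total_sales(product_id NUMBER)
RETURNS NUMBER
LANGUAGE SQL
AS
$$
DECLARE
  total NUMBER;
BEGIN
  SELECT SUM(amount) INTO :total
  FROM sales
  WHERE product_id = :product_id;
  RETURN total;
END;
$$;

-- Call it with a specific product ID:
CALL total_sales(42);

-- Remove it when no longer needed (the argument types identify the overload):
DROP PROCEDURE total_sales(NUMBER);
```

Note that `DROP PROCEDURE` takes the argument type list, because Snowflake allows several procedures with the same name but different signatures.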
Removing Stored Procedures If a stored procedure is no longer needed, it can be removed using the DROP PROCEDURE statement. This permanently deletes the procedure and any associated metadata. However, before removing a stored procedure, it is crucial to review its usage within the data solution and consider any potential impact on dependent components. Best Practices for Using Stored Procedures in Snowflake In order to maximize the benefits of using stored procedures in Snowflake, it is essential to follow best practices. Let's explore some key considerations. When working with stored procedures in Snowflake, there are several performance considerations to keep in mind. It's important to optimize your code to minimize resource utilization and ensure efficient execution. Use appropriate indexing, limit data transfers, avoid unnecessary data conversions, and employ proper caching mechanisms to enhance performance. Security is of paramount importance when working with data. Stored procedures can play a vital role in enforcing security measures within your data solutions. Adhere to the principle of least privilege, granting only the necessary access to stored procedures and associated database objects. Enable access controls, implement encryption, and regularly review and update security policies to safeguard your data. By following these best practices, you can utilize stored procedures effectively and efficiently within your Snowflake projects. In conclusion, stored procedures are a key component of Snowflake's capabilities, enabling the creation of complex and reusable logic. Understanding how to use, execute, modify, and delete stored procedures is essential for building performant, secure, and scalable data solutions in Snowflake. By incorporating best practices and staying up to date with the latest features, you can leverage the full potential of stored procedures in Snowflake and gain deeper insights from your data. 
A word that means "in its designed environment" I've considered the use of "in-situ," which may be the best match. In-situ seems to have a shade of meaning connoting an original location, where I am looking for something more along the lines of "Where it was meant to be." Is the object a human creation? Is the environment? The object is created by an intelligent being. I wish the word to be independent of whether the environment is natural or artificial. An engineer might talk about "in spec" as opposed to "out of spec" to talk about a component operating within the intended context. "In its natural habitat" maybe? or "in its intended place" ? Please give an example sentence (which you can make up) using the phrase "in its natural habitat." I think this phrase is a good fit to the question, but you need to do more than suggest an answer -- you need to show that your answer is reasonable. Upvote because natural applies well. (Not to be confused with original.) @Graffito, thank you for intended. I ran with it. If society is the designed environment of a human being, then well-adjusted should fit the bill; for a thing, so should the adjectives fitting or appropriate. Upvote in light of the question as understood by this answer Technically,  in its intended environment  or  in its intended setting: ... [T]he ultimate objective of the Engineering and Manufacturing Development phase [is] [t]o demonstrate an affordable, supportable, interoperable, and producible system in its intended environment. – answers.com, as in countless other browser matches for ‘intended-environment’. (Browser matches for ‘intended-setting’ are flooded by references to control panels and, appropriately, jewelry.) Colloquially (and more concisely),  in its element: Be in a situation or environment that one particularly likes and in which one can perform well: She was in her element with doctors and hospitals. – oxforddictionaries.com A fish out of water is out of its element. 
– EL&U: What does "You're out of your element" mean? Latin surely has a perfect and obscure two-word phrase for "where intended", while, as noted in the question, in situ does not apply because it is taken to mean in the original environment. Please note a subtle distinction between an original and a natural environment. Glibly put, original refers to origin whereas natural refers more to essential nature. An artificial creation may originate in a factory or laboratory, but everything about its purpose, including its intended setting, is natural for it.
If you have already installed and are upgrading Wayfinder see Upgrade. Wayfinder runs on Windows; however, the installer is currently supported only for Unix and Linux systems. Please contact us on Slack for help using the installer on Windows. The Wayfinder installer is built into the Wayfinder CLI, and configures the Wayfinder server and all its dependencies in a cluster in your cloud provider. The installation procedure has two stages: 1. Interactive stage - Requests and validates all options - Creates ingress IPs required for valid DNS This stage creates two files in your install directory, which contain the values you choose in response to prompts. If you want to automate installation, you can run the interactive stage alone, and use these generated files to do the automated installs. For details, see Prepare for an automated install and Use non-interactive install in your automation below. 2. Non-interactive stage Installs Wayfinder and all dependencies, including: - Cloud Networking - Cloud Kubernetes - TLS Certificates During the interactive stage of the installer, there are three possibilities for licensing: - The installer detects an existing license key, and you confirm you want to use it. - The installer detects an existing license key, and you want to use a different one. You are prompted to paste in the license key. - There are no existing licenses, and you request a new one. The new license key unlocks a trial version. When it expires, you can contact Appvia to get a full license. This procedure does a complete installation of Wayfinder. Review the Prerequisites before installing Wayfinder. To install Wayfinder: Ensure you are logged in to the cloud provider in which you want to install Wayfinder.
Create a directory, for example wf-install, for Wayfinder to create install files:

mkdir wf-install
cd wf-install

Run the installer: In the first stage, the installer prompts you for all configuration options, and then the second stage completes the installation. The second stage can take between 10 and 40 minutes depending on your cloud provider. Important: If this is a re-install, you must follow the procedure in Update Wayfinder's cloud access. This procedure runs the first, interactive, stage of the installer only. This generates two files needed to do an automated/non-interactive install using your automation script. To prepare files for an automated install: Run the first stage of the installer only:

wf install --init-only

You are prompted for all configuration options. At the end of this stage, two files are created: wf-install.yaml: you can commit this into a source control system. wf-install-secrets.yaml: do NOT commit this to source control. Instead, ensure this file is encrypted or provided from a secrets management system. To automate your installs, you can use these files in your automation using the non-interactive install procedure below. The non-interactive install is appropriate for use in your automated installs. The non-interactive install does a complete installation and takes between 10 and 40 minutes depending on cloud vendor. To run a non-interactive install: Ensure you have wf-install.yaml and wf-install-secrets.yaml in your current directory (see Prepare for an automated install above). Use this command in your automation script to run the installer with no prompts:

wf install --non-interactive

After successfully installing Wayfinder, but before using it, you must provide it with one or more cloud accounts that you want it to use. For more information, see Cloud Accounts.
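Putting the two stages together, an automation setup might look roughly like the following sketch. Only the `wf install` flags come from the steps above; how you protect the secrets file depends on your own secrets management and is left as a placeholder:

```sh
# One-time, interactive: prompts for all options and generates
# wf-install.yaml and wf-install-secrets.yaml in the current directory.
mkdir wf-install
cd wf-install
wf install --init-only

# Commit wf-install.yaml; encrypt wf-install-secrets.yaml or hand it to
# your secrets management system -- never commit it in plain text.

# Later, in CI/automation, with both files present in the working
# directory: complete installation with no prompts (10-40 minutes).
wf install --non-interactive
```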
Assuming you have not used the --ui-tls-private-key-pem during the wf install installation, the Wayfinder install uses cert-manager which can encounter errors while requesting a certificate. You can check the status of the request by looking at the certificate object in the wayfinder namespace, i.e., kubectl -n wayfinder get certificate. Occasionally cert-manager can get into an exponential backoff; fixing this can be achieved quickly by deleting the certificate request with a False status and then re-running the
I have an SQL query like this:

loop at itab into wa.
  concatenate 'PR' wa-f1 into wa_range-low.
  wa_range-sign = 'I'.
  wa_range-option = 'EQ'.
  append wa_range to r_range.
  clear wa_range.
endloop.

if r_range is not initial.
  sort r_range by low.
  delete adjacent duplicates from r_range.
  select <fields> from <db table> into jtab
    where f1 in r_range.
endif.

This statement causes a runtime error, because r_range is filled with 12,000 records. The runtime error gives the following analysis:

Error analysis
An exception occurred that is explained in detail below. The exception, which is assigned to class 'CX_SY_OPEN_SQL_DB', was not caught and therefore caused a runtime error. The reason for the exception is: The SQL statement generated from the SAP Open SQL statement violates a restriction imposed by the underlying database system of the ABAP system. Possible error causes:
o The maximum size of an SQL statement was exceeded.
o The statement contains too many input variables.
o The input data requires more space than is available.
o ...
You can generally find details in the system log (SM21) and in the developer trace of the relevant work process (ST11). In the case of an error, current restrictions are frequently displayed in the developer trace.

How to correct the error
The SAP Open SQL statement concerned must be divided into several smaller units. If the problem occurred due to the use of an excessively large table in an IN itab construct, you can use FOR ALL ENTRIES instead. When you use this addition, the statement is split into smaller units according to the restrictions of the database system used. If the error occurs in a non-modified SAP program, you may be able to find an interim solution in an SAP Note. If you have access to SAP Notes, carry out a search with the following keywords: "DBIF_RSQL_INVALID_RSQL" "CX_SY_OPEN_SQL_DB" "ZEM_UPDATE_ZEMCATSFI"

So I have modified the above SQL like this:

loop at itab into wa.
  concatenate 'PR' wa-f1 into wa_range-low.
  wa_range-sign = 'I'.
  wa_range-option = 'EQ'.
  append wa_range to r_range.
  clear wa_range.
endloop.

if r_range is not initial.
  sort r_range by low.
  delete adjacent duplicates from r_range.
  select <fields> from <db table> into jtab
    for all entries in r_range
    where f1 = r_range-low.
endif.

I can't test this scenario in my development system, so I am not sure whether it resolves the problem or not, because there are no duplicate entries in R_RANGE where FOR ALL ENTRIES could make a difference. Please suggest your ideas.
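An alternative that keeps the original IN r_range logic is to run the SELECT over the range table in fixed-size blocks, so each generated SQL statement stays below the database limit the dump complains about. The following is only a sketch: `<fields>` and `<db table>` are the placeholders from the question, jtab is assumed to be an internal table, and the block size of 1000 is an arbitrary choice:

```abap
DATA: lt_block LIKE r_range,
      lv_lines TYPE i.

WHILE r_range IS NOT INITIAL.
  " Take at most 1000 range lines per SELECT.
  lv_lines = lines( r_range ).
  IF lv_lines > 1000.
    lv_lines = 1000.
  ENDIF.
  APPEND LINES OF r_range FROM 1 TO lv_lines TO lt_block.
  DELETE r_range FROM 1 TO lv_lines.

  " Collect the results of all blocks in jtab.
  SELECT <fields>
    FROM <db table>
    APPENDING TABLE jtab
    WHERE f1 IN lt_block.

  CLEAR lt_block.
ENDWHILE.
```

Note that the sketch consumes r_range as it goes; copy the range table first if it is still needed afterwards.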
In this research, the human body will be marked and tracked using a depth camera. The arm motion from the trainer will be sent through the network and then mapped onto a 3D robotic arm in the destination server. The robotic arm will move according to the trainer. In the meantime, the trainee will follow the movement, and they can learn how to do particular tasks according to the trainer. The telerobotic-assisted surgery tools will give guidance on how to slice or do simple surgery in several steps through the 3D medical images which are displayed on the human body. The user will do training, select some of the body parts, and then analyze them. The system provides specific tasks to be completed during training and measures how many tasks the user can accomplish during the surgical time. The telerobotic-assisted virtual surgery tools using augmented reality (AR) are expected to be used widely in medical education as an alternative system with a low-cost solution.
- augmented reality
- 3D medical images
- robotic arm
- virtual surgery
The study of robots controlled to follow certain paths while considering collision with surrounding objects has been studied extensively for the past years. The robot is usually manipulated through a particular device. In order to adjust the robot movement, the user should reprogram the microcontroller that is attached to the robot to do a certain task. The robotic arm is one of the robotic parts which is widely used in several fields, including medical rehabilitation to assist disabled people. Implementing a real 3D arm is quite costly; the 3D simulated articulated robotic arm offers new ways of simulation. In addition, gesture tracking also offers a natural way of interaction by providing real-time synchronization between the human arm and the 3D arm. Kinect was originally invented as a device for games; however, it can also be used for other purposes such as helping the recovery process of patients who had a stroke.
The doctor can monitor the improvement of the patient and trace their nerve movement. By investigating the nerves of a person's skeletal connections, therapists will be able to find zones of the body that require intense training. The response that a person acquires during or after a rehabilitation period can still be refined to address precise problem zones in the patient's movement. The Kinect has potential as a rehabilitation tool in the house. Rehabilitation in the house provides flexibility for the patient to do regular repetition of therapy. Furthermore, to stimulate neural recovery in the human brain that governs body movement, therapy should be repeated for a period of time. The idea behind the interface is that new position commands for the robot will be derived from the depth map captured by the Kinect. The interface offers methods to start and stop the Kinect, translate points from the Kinect's position to the robot's position, and retrieve the latest position calculated by the Kinect. The control algorithm finds the next position of the robot so that the user gets exactly the wanted functionality. The Kinect needs a few seconds to start and to calibrate itself to give accurate depth measurements, so it is recommended that the Kinect interface is initialized at the same time as the robot system is turned on. Most of the devices used in hospitals are really costly, and the procedures can be complicated. They require very strict permission, with authorized personnel to run the device, and the queue can be quite long. Furthermore, some researchers focus on studying the communication between patient and therapist by providing remote access through the network to do home-based therapy. In addition, training systems for the elderly are also being studied to provide a better design for an attractive training system, since it can be conducted at home personally.
Augmented reality is also widely used for helping people find the path/route for pilgrims by combining AR technology and GPS trackers [3, 4, 5, 6]. In addition, telecommunication in the medical field, such as telemedicine, also has a strong correlation with brain-computer interfaces and haptic technology that increases the realism of telemedicine technology. The virtual characters that assist the user will do their best to imitate human behavior, as well as their interaction among themselves, which will drive collision response in the virtual environment [7, 8, 9, 10, 11, 12, 13, 14, 15]. With advanced collision handling, it will also reflect the realism of interaction between the user and the virtual agent. Besides, researchers have also studied hand and finger tracking to assist doctors in viewing and manipulating the 3D model of the human body [16, 17]. Another line of research tries to improve augmented or virtual reality technology by producing realistic facial expressions in the conventional teaching system.
2. Research method
2.1. Research methodology
The process is initiated by placing the Kinect camera in front of the user and then adjusting the optimum distance for acquiring the finest depth image. Subsequently, it will capture the human body, which is then transferred into a depth image stream. The second phase is the application of the random decision forest algorithm by choosing a set of thresholds for segmentation. The threshold and attribute which have high-density information are chosen, and then the process is repeated. The final phase is the mapping between the joint skeleton and the robotic arms, and then starting the simulation by controlling remotely. The methodology of the project consists of three main processes, as shown in Figure 1. Figure 1 describes that the telerobotic arm was initiated by placing the depth camera in front of the user to capture the human body and produce a stream of depth images.
The algorithm used here is a random decision forest that can determine the arm position of the human body in real time. The process will be repeated until the desired accuracy has been satisfied. There are two vectors that build the joint direction. The three axes which define the joint frame are the local or joint axis, the normal axis, and the binormal axis, as shown in Figure 2.
2.2. Result of software requirement
The proposed application provides some features for the user to interact with the system, classified as functional and nonfunctional requirements as listed below:
2.2.1. Functional requirement
2.2.1.1. Gesture tracking
Gesture recognition can be divided into four main phases: hand movement detection, classifying the gesture from a collection of images, pulling out the characteristics, and then distinguishing the gesture. The hand motion is determined through the color of the skin and movement analysis. The speed of hand motion is computed to interpret the gesture localization from repeated images. Therefore, by collecting the depth images and analyzing the images, it can stimulate meaningful gestures that contain a particular command.
2.2.1.2. Real-time feedback
The real-time feedback is provided with the depth image stream that is sent out by the Kinect and then analyzed using the random forest algorithm. The algorithm will produce a skeleton joint that will be mapped onto the 3D robotic arm remotely.
2.2.2. Nonfunctional requirement
2.2.2.1. Finger tracking
Finger tracking is not covered in this research due to its complexity, and it requires a very short distance during the tracking process, while gesture tracking allows a longer distance.
2.2.2.2. Full body synchronization
The robotic arm does not need full body tracking and synchronization because the system will focus on the arm of the human.
2.3. Use case diagram
Figure 3 shows the use case diagram of the system. There are three main use cases: Kinect sensor, 3D articulated arm, and 3D arm control.
2.3.1.
Actor description
The actor of the above system represents the patient or other users who interact with the arm robot system.
2.3.2. Use case description
The Kinect sensor is a motion-sensing controller that is able to sense the user's arm in motion mode. The Kinect needs sensor calibration before it is used, and the Kinect is able to detect depth by using an IR camera. The 3D articulated arm communicates with Microsoft Robotic Studio, while the 3D arm control works with gesture tracking.
2.4. Analysis phase
This section describes the sequence of project phases such as the sequence diagram, activity diagram, and architecture of the system.
2.4.1. Sequence diagram
Figure 4 shows the sequence process of the system, which starts with simulation and calibration with the Kinect sensor, and is continued by reading the depth data of the user's arm. Microsoft Robotic Studio will render the 3D arm, which is synchronized with the interpreted command of the 3D arm, and then performs 3D arm control.
2.4.2. Activity diagram
The activity diagram in Figure 6 starts with gesture classification and joint classification, then joint synchronization between the human joints and the 3D arm joints, and provides real-time interaction with the 3D articulated arm.
2.5. Architecture design phase
Figure 7 depicts the architecture design phase, which has a Kinect sensor component; the other component is the Microsoft Robotic Simulator, which has the capability to connect with the Kinect driver and the Kinect software development kit (SDK). Microsoft Robotic Studio consists of a visual programming language and a 3D environment model.
3. Result and discussion
The first testing shows how to manage the tracker for hand control; it is started by initiating the Kinect camera to capture the human joints and render a 3D hand that will follow the mouse movement.
3.1. Hand tracking control
The hand tracking control is used for motion detection of the human hand, as shown in Figure 8. The system is to track and differentiate the right hand or left hand of the human.
Figure 9 is the result of robotic arm rotation using Kinect gestures. Figures 10 and 11 also demonstrate the movement of the joints of the robotic arm using Kinect gestures. The robotic arm will move according to the joint data that are sent through the network by following the trainer's movement remotely.
3.2. Performance testing
The rendering performance is satisfying when the frames per second (FPS) are very high; the highest score reaches 60 while the lowest is 54, as shown in Figure 12. Telerobotics is one of the essential research topics, widely applied to medical rehabilitation and even manufacturing processes in industry. This paper aims to provide a telerobotic arm with six joints that represent the human arm. This arm can be rotated and can act like a human arm. The idea of the project is to provide training or exercise for poststroke patients to move their hand by controlling the 3D articulated arm. The human hand is tracked by a depth camera, and the behavior of the real hand and the 3D arm is synchronized in real time. In this project, we provide five joints of the 3D robotic arm that can be rotated according to their angles. The 3D arms will be simulated with a domino effect when they collide with each other. The user will control the arm by performing a gesture in front of the Kinect camera, and the synchronization of joints between the human arm and the 3D arm is performed in real time. The control process through the Kinect by imitating mouse cursor movement also ran smoothly during the testing process. This finding is believed to bring potential benefits to rehabilitation for certain patients, such as poststroke rehabilitation. The result is very convincing: interaction is conducted naturally just by waving the hands or rotating the joints of the hand, and the 3D arm rotates as well.
For future recommendation, the result of the project can be improved further by conducting clinical tests with real patients in medical rehabilitation, and it can be used as a simulator for manufacturing processes in industry. This work was supported by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, Saudi Arabia. The authors, therefore, gratefully acknowledge the DSR for technical and financial support.
Static linking has long gone out of fashion, at least on the average Linux desktop distribution. There are however good reasons to still (or again) support this for our frameworks. One is the rise of application bundles (Flatpak, Android APK, AppImage, etc). Bundles often only contain a single executable, so there is no benefit of sharing a library (at least in most bundle formats, Flatpak is a bit different there). Still we need to ship everything the shared libraries provide, no matter if we need it or not. Static linking is of course not the magic solution to this, but it's a fairly generic way of allowing the compiler to drop unused code, reducing the application size. As application bundles are usually updated as a whole, we also don't benefit from the ability to update shared libraries independently, unlike with a conventional distribution. Besides application bundles, there are also single-process embedded applications that can benefit from static linking, so this is relevant for the effort of bringing KF5 to Yocto. In particular on lower-powered embedded devices the startup overhead of dynamic linking can be noticeable. In order to make our frameworks usable as static libraries, there are essentially two areas that might need a few adjustments: build system and code. On the build system side there are two things to look at. The first one is to not force libraries to be built as shared libraries and instead allow the user to select this. That is, don't use the SHARED keyword in the add_library call. Normally CMake would default to static libraries when doing that, but ECM's KDECMakeSettings changes that for us. To actually build static libraries, you need to set the BUILD_SHARED_LIBS CMake option to OFF. The other aspect that needs attention on the CMake side is how private library dependencies are handled. For shared libraries the consumer doesn't need to know anything about those, as this is encoded in the shared library file.
A static library however is just a simple archive of object files, without such metadata, so public and private dependencies are conveyed in the CMake config file to the consumer. This however means that the consumer needs to also look for all private dependencies in order to link against those. That's done by also listing those in the CMake config file for the static library, next to the public dependencies already listed there (example). One rather subtle but far reaching difference to dynamic libraries is how static initialization works. That is, code that is implicitly executed when loading the library (even before the application code is run). Static initialization is used in a number of places: - Qt's resource system - ECM's translation catalog loader - statically defined instances of a custom class, triggering their constructor calls (which is how the above mechanisms are ultimately implemented) With dynamic libraries this works on all platforms, with static libraries it doesn't work in many cases and thus cannot be relied upon anymore. So, we need to change code affected by this. This usually implies moving code that would run as part of static initialization to a later point in time, e.g. on first use of whatever is initialized. This can be beneficial for startup performance, but we have to be careful not to accidentally move potentially expensive operations onto hot paths at runtime instead (basic example, more exotic example). Another potential place for such initialization code would be single entry points into the library, as QCoreApplication is for QtCore. The last resort approach is an explicit initialization function, as discussed here. That however changes API from a user point-of-view, so I'd avoid that where possible. Identifying all affected code is not always straightforward.
Broad unit test coverage provides great value there, but ultimately you probably want to look at all method calls in the initialization sections of the dynamic library (or the corresponding non-ELF counterpart on other platforms), e.g. using a tool like ELF Dissector. Not everything in there is automatically a problem, but all problems will be in there.

Another thing that doesn't make much sense in a statically linked setup is the use of dlopen (or its counterparts on other platforms), most commonly by plug-in systems. Qt has a solution for statically linking plug-ins as part of QPluginLoader. That can be a bit of work to use in practice, as all plugins need to be consumable as static libraries by the application too and need manual Q_IMPORT_PLUGIN statements, but at least it's nothing that requires creative solutions.

Static linking of course is not the complete solution to being able to create single-application bundles; frameworks relying on multi-process architectures, daemons, IPC, etc. still need to be addressed independently of that. One problem we don't have for KDE applications at least are license issues caused by static linking; that's left to proprietary users ;)
Communication Issues: Improving Turnaround

One of the key issues in game tools development is how to improve asset turnaround time: how long it takes between when an artist, programmer, writer, level designer, sound designer, or even an executive makes a change and when the results can be seen in game, or at least in engine. More importantly, how many other people will be affected by said change? The goal in any organization should be to make asset turnaround times as short as possible, and to allow developers to make and test changes in isolation before shipping them out to the rest of the team.

There are a lot of approaches to this problem, but I'm going to narrow down the solutions to three that tend to be more efficient and should be used when developing a mature tools pipeline: using in-game editors as opposed to standalone tools, implementing dynamic resource loading and unloading (through something like a developer console), and improving communication between the game and standalone tools. Right now, I'm going to focus on the third possibility.

The use of a game-embedded editor versus a standalone tool set is an ongoing argument in the tools community, and each side has its positives and negatives, but regardless of which way you go, some of your tools are not going to be game-embedded, and it is important that any "stand-alone" tools be able to communicate with your game. By creating even a simple communication library, you'll be able to issue commands to the game remotely, grab and analyze information without using game resources, and smartly organize, load and save diagnostics information, which might otherwise create large amounts of special-case code in your game. By creating a slightly more complicated communication system, you can dynamically run scripts, save and load resources, and even set up a system that communicates changes in seconds to running games. Talk about turnaround time.
The key to creating a good communications library is understanding the limitations of each console, and when the console (or running game) can initiate communications with a PC, and vice versa. For things other than debug output (the topic of another article), you can assume that a running tool can communicate with a running game, but not the other way around. This means that the tool must initiate the communication before the console can send the necessary information back. In addition, most communication libraries perform this communication in a background thread, and if they don't, you should design them to do so. The last thing to keep in mind is that some commands may require a lot of data to be sent back and forth between the tool and the game, and it is advantageous to split these commands into multiple sends of packet data, both from the tool and back from the game. A well-defined command system will be able to specify just how much data will be sent, and how many packets it intends to split the data across.

So how do we go about doing this? First, consult your console's documentation on communication. For PC, your best bet is to use named pipes. From there, the diagram at left offers a very high-level view of things, using command factories to create defined commands and issue responses. Here's the basic rundown.
- Have your game open a well-known named pipe, either public (if you want to communicate across PCs) or private (if you don't). The game can then sit in a wait state on the pipe, looking for commands from your tool. Remember, this is in a separate thread, so having it in a wait state shouldn't impact your game.
- Have your tool connect to the same named pipe and issue a command string and parameters.
- Have the game, on receiving input, look up the command in a command map. This should point to either a command factory class or a command factory method (I prefer the latter for memory reasons; a class is usually overkill).
The factory should return a class that inherits from a base command.
- Run the returned command with parameters. The command should always generate some sort of response, whether as simple as Succeeded / Failed or as complex as Need More Data, Ready To Send Data, or Ready To Initiate Communication.
- Send this response back to your tool, which should display the result to the user.

From here, the amount and type of communication is up to you, though this can become very complicated very quickly, as you're essentially creating your own network protocol. However, there are a few things you should keep in mind. First, as I said before, you'll want to design your protocol to be able to push multiple packets of information, usually of fixed size. This will dramatically reduce your memory requirements game-side and will improve responsiveness on your tool side, as you'll be able to offer more information to your users faster than if you were waiting for one large response. Second, develop a system for communicating with persistent items, such as pieces of debug information or your AI. This way you don't have to go searching for the AI or object you're watching or manipulating on every command; it will just always be there.
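The rundown above (command map, factory methods, simple response codes) can be sketched as follows. This is a hypothetical, transport-free Python version: it leaves out the named-pipe plumbing and threading entirely and just shows the dispatch path, with made-up command and status names.

```python
from dataclasses import dataclass


@dataclass
class Response:
    status: str            # e.g. "Succeeded", "Failed", "Need More Data"
    payload: str = ""


class Command:
    """Base command; concrete commands override run()."""
    def run(self, params):
        raise NotImplementedError


class EchoCommand(Command):
    # Toy command: echoes its parameters back to the tool.
    def run(self, params):
        return Response("Succeeded", " ".join(params))


# Command map: name -> factory method. A plain callable is usually
# enough; a full factory class would be overkill, as argued above.
COMMAND_FACTORIES = {
    "echo": EchoCommand,
}


def handle_request(line):
    """Game-side handler: parse 'command arg1 arg2 ...' and dispatch."""
    name, *params = line.split()
    factory = COMMAND_FACTORIES.get(name)
    if factory is None:
        return Response("Failed", "unknown command: " + name)
    return factory().run(params)
```

In a real system, `handle_request` would run in the background communication thread, reading command strings off the pipe and writing the serialized Response back.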
import sys
import unittest
import re
import os.path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', '..', '..'))

import Exscript.util.cast
from Exscript import Host
from Exscript.logger import Log, Logfile


class castTest(unittest.TestCase):
    CORRELATE = Exscript.util.cast

    def testToList(self):
        from Exscript.util.cast import to_list
        self.assertEqual(to_list(None), [None])
        self.assertEqual(to_list([]), [])
        self.assertEqual(to_list('test'), ['test'])
        self.assertEqual(to_list(['test']), ['test'])

    def testToHost(self):
        from Exscript.util.cast import to_host
        self.assertIsInstance(to_host('localhost'), Host)
        self.assertIsInstance(to_host(Host('localhost')), Host)
        self.assertRaises(TypeError, to_host, None)

    def testToHosts(self):
        from Exscript.util.cast import to_hosts
        self.assertRaises(TypeError, to_hosts, None)

        result = to_hosts([])
        self.assertIsInstance(result, list)
        self.assertEqual(len(result), 0)

        result = to_hosts('localhost')
        self.assertIsInstance(result, list)
        self.assertEqual(len(result), 1)
        self.assertIsInstance(result[0], Host)

        result = to_hosts(Host('localhost'))
        self.assertIsInstance(result, list)
        self.assertEqual(len(result), 1)
        self.assertIsInstance(result[0], Host)

        hosts = ['localhost', Host('1.2.3.4')]
        result = to_hosts(hosts)
        self.assertIsInstance(result, list)
        self.assertEqual(len(result), 2)
        self.assertIsInstance(result[0], Host)
        self.assertIsInstance(result[1], Host)

    def testToRegex(self):
        from Exscript.util.cast import to_regex
        self.assertTrue(hasattr(to_regex('regex'), 'match'))
        self.assertTrue(hasattr(to_regex(re.compile('regex')), 'match'))
        self.assertRaises(TypeError, to_regex, None)

    def testToRegexs(self):
        from Exscript.util.cast import to_regexs
        self.assertRaises(TypeError, to_regexs, None)

        result = to_regexs([])
        self.assertIsInstance(result, list)
        self.assertEqual(len(result), 0)

        result = to_regexs('regex')
        self.assertIsInstance(result, list)
        self.assertEqual(len(result), 1)
        self.assertTrue(hasattr(result[0], 'match'))

        result = to_regexs(re.compile('regex'))
        self.assertIsInstance(result, list)
        self.assertEqual(len(result), 1)
        self.assertTrue(hasattr(result[0], 'match'))

        regexs = ['regex1', re.compile('regex2')]
        result = to_regexs(regexs)
        self.assertIsInstance(result, list)
        self.assertEqual(len(result), 2)
        self.assertTrue(hasattr(result[0], 'match'))
        self.assertTrue(hasattr(result[1], 'match'))
        self.assertEqual(result[0].pattern, 'regex1')
        self.assertEqual(result[1].pattern, 'regex2')


def suite():
    return unittest.TestLoader().loadTestsFromTestCase(castTest)


if __name__ == '__main__':
    unittest.TextTestRunner(verbosity=2).run(suite())
import os
import operator
import pygame
import re
from itertools import *
from settings import *
from functools import reduce

### TODO include a function reset_image, so that it will be possible
### to restart the image number easily.


class TwoSided:
    """This class identifies and loads images from a given directory. It also
    inverts the images so that they can be used in a game. It generates two
    lists of images: self.left and self.right. To use the images you will need
    to specify the image number between 0 and x, where x is the number of
    images in the directory. The object generated has one other optional
    attribute: margin. Margin is the alpha space between the image and the
    actual drawing. You may specify margin as a list in the following format:
    [up, left, down, right], i.e. clockwise. Margin may be used to better
    program interaction during the game. Margin defaults to all zeros."""

    jeringonca = False

    def __init__(self, dir, margin=[0, 0, 0, 0]):
        self.margin = margin
        self.left = find_images(dir)
        self.right = invert_images(self.left)
        self.number = 0
        self.lenght = len(self.left)
        if self.lenght > 0:
            self.size = self.left[0].get_size()
        self.itnumber = cycle(list(range(self.lenght)))

    def update_number(self):
        if self.number < self.lenght - 1:
            self.number += 1
        else:
            self.number = 0

    def til_the_end(self):
        if self.number < self.lenght - 1:
            self.number += 1
        else:
            self.number = self.lenght - 1


class OneSided(TwoSided):
    def __init__(self, directory, margin=[0, 0, 0, 0]):
        self.margin = margin
        self.list = self.left = find_images(directory)
        self.number = 0
        self.size = self.list[self.number].get_size()
        self.lenght = len(self.list)
        self.itnumber = cycle(list(range(self.lenght)))


class There_and_back_again(TwoSided):
    def __init__(
        self,
        dir,
        margin=[0, 0, 0, 0],
        exclude_border=False,
        second_dir=False,
        extra_part=False,
        second_extra_part=False,
    ):
        self.margin = margin
        preleft = find_images(dir)
        if extra_part:
            extra = find_images(extra_part)
            for i, e in zip(preleft, extra):
                i.blit(e, (0, 0))
        posleft = list(reversed(preleft))
        if exclude_border:
            posleft = posleft[1:-1]
        self.left = self.list = preleft + posleft
        if second_dir:
            second_left = find_images(second_dir)
            if extra_part:
                second_extra = find_images(second_extra_part)
                for i, e in zip(second_left, second_extra):
                    i.blit(e, (0, 0))
            second_right = invert_images(second_left)
            possecond_left = list(reversed(second_left))
            if exclude_border:
                possecond_left = possecond_left[1:-1]
            self.left = self.list = self.left + second_left + possecond_left
        self.right = invert_images(self.left)
        self.size = self.left[0].get_size()
        self.lenght = len(self.left)
        self.number = 0
        self.itnumber = cycle(list(range(self.lenght)))


class GrowingUngrowing(TwoSided):
    def __init__(self, directory, frames, margin=[0, 0, 0, 0]):
        self.margin = margin
        self.list = self.left = find_images(directory)
        n_list = [
            pygame.transform.smoothscale(i, (i.get_width(), i.get_height() - (2 * x)))
            for x in range(frames)
            for i in self.list
        ]
        self.list.extend(n_list)
        self.list.extend(reversed(n_list))
        self.lenght = len(self.list)
        self.number = 0
        self.size = self.list[self.number].get_size()
        self.itnumber = cycle(list(range(self.lenght)))


class Buttons(GrowingUngrowing):
    def __init__(self, directory, frames):
        self.list = self.left = find_images(directory)
        n_list = [
            pygame.transform.smoothscale(
                i, (i.get_width() + (2 * x), i.get_height() + (2 * x))
            )
            for x in range(frames)
            for i in self.list
        ]
        self.list.extend(n_list)
        self.list.extend(reversed(n_list))
        self.lenght = len(self.list)
        self.number = 0
        self.size = self.list[self.number].get_size()
        self.itnumber = cycle(list(range(self.lenght)))


class MultiPart:
    def __init__(self, ordered_directory_list, margin=[0, 0, 0, 0]):
        def gcd(a, b):
            if b:
                return gcd(b, a % b)
            return a

        def least_common_multiple(nums):
            # Use integer (floor) division: plain '/' would yield a float
            # in Python 3 and break the list multiplication below.
            return reduce(lambda a, b: a * b // gcd(a, b), nums)

        all_images = [find_images(dir) for dir in ordered_directory_list]
        image_size = all_images[0][0].get_size()
        lists_lenghts = [len(i) for i in all_images]
        lcm = least_common_multiple(lists_lenghts)
        all_images = [i * (lcm // len(i)) for i in all_images]
        self.images = [
            pygame.Surface(image_size, pygame.SRCALPHA).convert_alpha()
            for i in range(lcm)
        ]
        for i in range(lcm):
            for img_list in all_images:
                self.images[i].blit(img_list[i], (0, 0))
        self.margin = margin
        self.left = self.images
        self.right = invert_images(self.left)
        self.number = 0
        self.lenght = len(self.left)
        if self.lenght > 0:
            self.size = self.left[0].get_size()
        self.itnumber = cycle(list(range(self.lenght)))

    def update_number(self):
        if self.number < self.lenght - 1:
            self.number += 1
        else:
            self.number = 0


class Ad_hoc:
    def __init__(self, left_images, right_images, margin=[0, 0, 0, 0]):
        self.margin = margin
        self.left = left_images
        self.right = right_images
        self.number = 0
        self.lenght = len(self.left)
        if self.lenght > 0:
            self.size = self.left[0].get_size()
        self.itnumber = cycle(list(range(self.lenght)))

    def update_number(self):
        if self.number < self.lenght - 1:
            self.number += 1
        else:
            self.number = 0

    def til_the_end(self):
        if self.number < self.lenght - 1:
            self.number += 1
        else:
            self.number = self.lenght - 1


def image(dir, invert=False, alpha=True, pre_rendered=False):
    complete_path = re.search(main_dir, dir)
    if not complete_path:
        dir = main_dir + "/" + dir
    if alpha:
        prep = pygame.image.load(dir).convert_alpha()
    else:
        prep = pygame.image.load(dir).convert()
    if invert:
        prep = pygame.transform.flip(prep, 1, 0)
    prep_size = prep.get_size()
    size = (int(round(prep_size[0] * scale)), int(round(prep_size[1] * scale)))
    if pre_rendered:
        return prep
    return pygame.transform.smoothscale(prep, size)


def scale_image(prep, invert=False):
    if invert:
        prep = pygame.transform.flip(prep, 1, 0)
    prep_size = prep.get_size()
    size = (int(round(prep_size[0] * scale)), int(round(prep_size[1] * scale)))
    return pygame.transform.smoothscale(prep, size)


def find_images(dir):
    complete_path = re.search(main_dir, dir)
    if not complete_path:
        dir = main_dir + "/" + dir
    try:
        rendered_dir = (
            dir
            + "scale"
            + str(int(round(1440 * scale)))
            + "x"
            + str(int(round(900 * scale)))
        )
        os.listdir(rendered_dir)
        return [
            image(rendered_dir + "/" + item, pre_rendered=True)
            for item in sorted(os.listdir(rendered_dir))
            if (item[-4:] == ".png" or item[-4:] == ".PNG")
        ]
    except OSError:
        newdir = (
            dir
            + "scale"
            + str(int(round(1440 * scale)))
            + "x"
            + str(int(round(900 * scale)))
            + "/"
        )
        os.mkdir(newdir)
        returnlist = []
        for item in sorted(os.listdir(dir)):
            if item[-4:] == ".png" or item[-4:] == ".PNG":
                actualimage = image(dir + item)
                pygame.image.save(actualimage, newdir + "/" + item)
                returnlist.append(actualimage)
        return returnlist
        # return [image(dir+item) for item in sorted(os.listdir(dir))
        #         if (item[-4:] == '.png' or item[-4:] == '.PNG')]


def invert_images(imglist):
    return [pygame.transform.flip(img, 1, 0) for img in imglist]
Java / JUnit - AssertTrue vs AssertFalse

I'm pretty new to Java and am following the Eclipse Total Beginner's Tutorials. They are all very helpful, but in Lesson 12, he uses assertTrue for one test case and assertFalse for another. Here's the code:

// Check the book out to p1 (Thomas)
// Check to see that the book was successfully checked out to p1 (Thomas)
assertTrue("Book did not check out correctly", ml.checkOut(b1, p1)); // If checkOut fails, display message
assertEquals("Thomas", b1.getPerson().getName());
assertFalse("Book was already checked out", ml.checkOut(b1, p2)); // If checkOut fails, display message
assertEquals("Book was already checked out", m1.checkOut(b1, p2));

I have searched for good documentation on these methods, but haven't found anything. If my understanding is correct, assertTrue as well as assertFalse display the string when the second parameter evaluates to false. If so, what is the point of having both of them?

Edit: I think I see what was confusing me. The author may have put both of them in just to show their functionality (it IS a tutorial after all). And he set up one which would fail, so that the message would print out and tell me WHY it failed. Starting to make more sense... I think that's the explanation, but I'm not sure. Wow... I just realized that if I hover over a method in Eclipse, it will give me info on it. Thanks everyone for your answers!

Learn AssertJ -> assertj.org

assertTrue will fail if the second parameter evaluates to false (in other words, it ensures that the value is true). assertFalse does the opposite.

assertTrue("This will succeed.", true);
assertTrue("This will fail!", false);
assertFalse("This will succeed.", false);
assertFalse("This will fail!", true);

As with many other things, the best way to become familiar with these methods is to just experiment :-).

assert can be read as "must be". Thus you can read the statement assertTrue("~", expression); as "the expression must be true".
Actually, reading the Javadoc is always better than experimenting and guessing.

The point is semantics. In assertTrue, you are asserting that the expression is true. If it is not, then it will display the message and the assertion will fail. In assertFalse, you are asserting that an expression evaluates to false. If it is not, then the message is displayed and the assertion fails.

assertTrue(message, value == false) == assertFalse(message, value);

These are functionally the same, but if you are expecting a value to be false then use assertFalse. If you are expecting a value to be true, then use assertTrue.

Thanks. I think I get it. In the assertFalse, should it say "value == true"?

I think it's just for your convenience (and the readers of your code). Your code and your unit tests should ideally be self-documenting, which this API helps with. Think about what is clearer to read:

assertTrue(!(a > 3)); or assertFalse(a > 3);

When you open your tests after xx months because they suddenly fail, it would take you much less time to understand what went wrong in the second case (my opinion). If you disagree, you can always stick with assertTrue for all cases :)

Your first reaction to these methods is quite interesting to me. I will use it in future arguments that both assertTrue and assertFalse are not the most friendly tools. If you would use assertThat(thisOrThat, is(false)); it is much more readable, and it prints a better error message too.

Thanks for the advice! I'm just following a tutorial though lol.

assertTrue will fail if the checked value is false, and assertFalse will do the opposite: fail if the checked value is true.

Another thing: your last assertEquals will very likely fail, as it will compare the "Book was already checked out" string with the output of m1.checkOut(b1, p2). It needs a third parameter (the second value to check for equality).

Thanks! I typed that last assertEquals statement by accident.
Should not have been there.

The course contains a logical error:

assertTrue("Book check in failed", ml.checkIn(b1));
assertFalse("Book was already checked in", ml.checkIn(b1));

In the first assert we expect checkIn to return true (because the check-in is successful). If this failed, we would print a message like "Book check in failed". Now in the second assert we expect the checkIn to fail, because the book was already checked in on the first line. So we expect checkIn to return false. If for some reason checkIn returns true (which we don't expect), then the message should never be "Book was already checked in", because the check-in was successful.
Anirban Sen Gupta, Ph.D.

Our principal research focus is on Drug Delivery and Nanomedicine. It encompasses mechanistic understanding of biological and pathological phenomena at the cellular, sub-cellular and biomolecular levels, and utilizing this knowledge to create bioinspired therapeutic and diagnostic technologies to interrogate, support, or treat the various phenomena. To this end, our laboratory focuses on understanding the complex pathophysiological mechanisms of cardiovascular diseases and cancer, and then on using this insight to develop disease-targeted therapeutic strategies by integrating critical physical, chemical and biological components at nano-to-micro scales. Our main research interests are in the areas of (i) novel biomaterials to modulate biologic interactions and responses, and (ii) drug formulation and disease-targeted drug delivery systems. The physiological and pathological areas we focus on are hemostasis, thrombosis, inflammation, immune response and cancer metastasis. The tools that we use are biochemical properties (like disease-relevant heteromultivalent ligand-receptor interactions), integrated with biophysical attributes of biomaterial platforms (like shape, size, charge and morphology), to create customizable and translatable targeted drug delivery technologies.

Selected Recent Publications (for a complete list of publications, please refer to Curriculum Vitae):
- H Haji-Valizadeh, C L Modery-Pawlowski, A Sen Gupta. An FVIII-derived Peptide Enables VWF-binding of a Synthetic Platelet Surrogate without Interfering with Natural Platelet Adhesion to VWF. Nanoscale 2014, 6(9):4765-4773.
- C.L. Modery-Pawlowski, A Sen Gupta. Heteromultivalent Ligand-decoration for Actively Targeted Nanomedicine. Biomaterials 2014; 35(9):2568-2579.
- C.L. Modery-Pawlowski, A.M. Master, V. Pan, G.P. Howard, A. Sen Gupta. A platelet-mimetic paradigm for metastasis-targeted nanomedicine platforms. Biomacromolecules. 2013; 14(3):910-919.
- C.L.
Modery-Pawlowski, L.L. Tian, M. Ravikumar, T.L. Wong, A. Sen Gupta. In vitro and in vivo hemostatic capabilities of a functionally integrated platelet-mimetic liposomal nanoconstruct. Biomaterials. 2013, 34(12):3031-41.
- C.L. Modery-Pawlowski, L.L. Tian, V. Pan, KR McCrae, S Mitragotri, A. Sen Gupta. Approaches to synthetic platelet analogs. Biomaterials. 2013, 34(2):526-541.
- A.M. Master, A. Sen Gupta. EGF receptor-targeted nanocarriers for enhanced cancer treatment. Nanomedicine: Future Medicine (Lond). 2012 Dec;7(12):1895-906.
- A.M. Master, M. Livingston, A. Sen Gupta. Photodynamic nanomedicine in the treatment of solid tumors: Perspectives and challenges. J Control Release. 2013; 168(1): 88-102.
- A. M. Master, A. Malamas, R. Solanki, D. Liggett, J.L. Eiseman, A. Sen Gupta. A Cell-targeted Photodynamic Nanomedicine Strategy for Head-&-Neck Cancers. Molecular Pharmaceutics. 2013; 10(5): 1988-1997.
- M. Ravikumar, T. Wong, C. Modery, A. Sen Gupta. Peptide-decorated Liposomes Promote Arrest and Aggregation of Activated Platelets under Flow on Vascular Injury Relevant Protein Surfaces In Vitro. Biomacromolecules 2012, 13, 1495-1502.
- M. Ravikumar, C. Modery, T. Wong, A. Sen Gupta. Mimicking Adhesive Functionalities of Blood Platelets using Ligand-decorated Liposomes. Bioconjugate Chemistry 2012, 23(6):1266-1275.
- C. Modery, M. Ravikumar, T. Wong, M. Dzuricky, N. Durongkaveroj, A. Sen Gupta. Heteromultivalent Liposomal Nanoconstructs for Enhanced Targeting and Shear-stable Binding to Active Platelets for Site-selective Vascular Drug Delivery. Biomaterials 2011, 32(35): 9504-9514.
- A. Sen Gupta. Nanomedicine Approaches in Vascular Disease: A Review. Nanomedicine: Nanotechnology, Biology and Medicine 2011, 7(6): 763-779.
25-02-2005, 01:18 PM | Join Date: Oct 2004 | Location: Opole, Poland

Sometimes an old game or another tells you that you need to change the sound settings, or simply no sound or music works. Here we'll discuss how this can be solved. Note: This is not a no-brainer. You need to think by yourself, as this is fairly generic advice. First, two questions to you:

Q: Do you run the game under Windows or under DOSBOX?
A: WINDOWS: Try using DosBox or VDMSound. Some DOS games work under Windows, but due to their incompatibility with DirectX the sound and music do not work (with occasional rare exceptions in the case of MIDI music). Try if this works, and if not, return to the NO answer in the next line.
A: DOSBOX: Check your DosBox.conf. Go to the sound settings and check that it is not set to mute by accident. Change soundcard emulation to Sound Blaster 16 or Pro; these should work all right. Pro 2 seems to have issues with most games. Note that some even older titles might support only Sound Blaster 1, and then you'll have to set your emulation to this.

Q: Is the game a RIP version? Some bigger games might have some animations and/or sound removed to decrease filesize.
A: YES: It is possible that there is no sound in this game at all. Consider this as an option, but follow the NO answer anyway for another possible solution.
A: NO: Maybe you need to set up the game first.

Q: How do I determine if my game is a RIP version or not?
A: Sometimes it is not possible. Look for readme files in the game directory; maybe you'll find a clue there. In other cases the game info here on Abandonia will hold that information. It is almost certain to be a rip, however, if the entire game package is under 5 MB and when you run it it says "CD version" or something like that. Also, if you see messages like "CD-RIP", "Ripped" and their like, it means that the game is definitely a RIP.

Q: I have a RIP. I want the full game!
A: Hardly possible.
If you got the RIP version here, you might either try to search for the full version elsewhere on the net (Google might be helpful) or earn your way to the VIP section, as sometimes games that are in the public section as RIP versions happen to appear in the VIP section as full ones. To answer your unasked question - no, you cannot ask anybody if a particular game is or is not in the VIP section. Only VIPs know it for sure, and they ain't telling anybody!

Q: OK, it's not a RIP, I've set the config in DOSBOX, but the sound still ain't working!
A: Launch the game's setup.

Q: You're saying to set up the game. I downloaded it and there is no game setup with that download. What are you referring to?
A: You are thinking of setup in the sense of the standard Windoze installation procedure, which does not apply here. Most old games have their own setup programs that allow you to change sound and graphics settings (and, in a fair few cases, install the game to a different drive, but that means only copying all the files to a specified directory).

Q: So where is this setup program?
A: You should look into the game directory and search for files like config, install, setup, setsound, setsnd, soundset with extensions like BAT, EXE and COM. Try running it. If there is nothing with these filenames, are there any other BAT, EXE or COM files other than the one you used to run the game? Try running them, naturally in DosBox or VDMSound (if you are using it).

Q: I found no setup program. What to do?
A: Run the game normally and look for an options menu. Sometimes that's where the soundcard/graphics etc. settings reside.

Q: This didn't help either. What now, smart***?
A: Search for files with names like config (without any extension), .cfg or .cnf and delete them (keep backups somewhere, however!). If the game does not find them, it might run its setup automatically. Remember to do backups, in case the game refuses to run without these files at all.

Q: I cannot find any files like that. So?
A: Sort the files by date. If the game's settings were changed at any point, the filedate of one or more files will vary from the rest. These are what you are looking for. WARNING: DO NOT DELETE EXE OR COM FILES NOR CHANGE THEIR EXTENSIONS, FOR YOUR OWN GOOD!!!

Q: None of these work. I really want to play that game!
A: This means some trouble. I'll try to give you possible solutions:
1) It might help to run the game with a command line parameter, but these vary from game to game. You might try (executable) setup, (executable) config, (executable) install or, if none of them work, (executable) /?, (executable) -? or (executable) ?, and the game should display the list of possible command line parameters.
2) Check the subdirectories. Maybe there is one with the drivers for different sound output types, and you have to put the right set in the main game directory or one of its subdirs to work. I encountered such games once or twice.
3) Check batch (*.BAT) files in the game directory. Sometimes the game needs to be run with command line parameters that enable certain settings.
4) Maybe running the game executable with appropriate parameters will help.

Q: What are the right command line parameters for my game, then?
A: I don't know. They often change on a game-to-game basis, so you're on your own here. Try looking in the documentation or the manual; maybe it will be of some help here.

That's all I can think of without even looking at the game in question. Fairly generic advice, but some of these should actually work.
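For reference, the DosBox.conf sound settings mentioned at the top of this FAQ live in the [sblaster] and [mixer] sections. A typical fragment looks like the following (key names as in DOSBox 0.74-era default configs; treat the exact values as an illustration and check the reference conf your DOSBox version generates):

```ini
[sblaster]
# sbtype can be sb1, sb2, sbpro1, sbpro2 or sb16 -- sb16 or sbpro1
# are the safest picks per the advice above
sbtype=sb16
sbbase=220
irq=7
dma=1

[mixer]
# make sure sound was not muted by accident
nosound=false
```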
This End User Agreement ("Agreement") is a legal agreement between you ("End Customer") and Yin Yang Inc. ("Yin Yang") for the use of Truto ("Service"). By accessing or using the Service, you agree to be bound by the terms and conditions of this Agreement. If you do not agree to the terms and conditions of this Agreement, you may not access or use the Service. Yin Yang grants End Customer a limited, non-exclusive, non-transferable, and revocable right to access and use the Service for its intended purposes. End Customer may not sublicense, rent, lease, or permit any third party to use the Service. End Customer is strictly prohibited from engaging in any of the following activities while accessing or using the Service: a. End Customer may not use the Service for benchmarking or monitoring its availability, security, performance, or functionality, or for any other competitive purposes without Yin Yang's express written consent. b. End Customer may not create derivative works, reverse engineer, attempt to gain access to the source code of, or copy the Service, or any of its components. c. End Customer may not circumvent or disable any security or other technological features or measures of the Service, or use the Service in a manner that Yin Yang reasonably believes poses a threat to the security of its computer systems. d. End Customer may not use the Service to conduct any fraudulent, malicious, or illegal activities, including but not limited to hacking, phishing, spamming, or engaging in any activity that violates applicable laws or regulations. End Customer must comply with all applicable laws and regulations while using the Service, including the procurement and maintenance of any necessary licenses and permits. The Service, its contents, and Yin Yang's trademarks are Yin Yang's exclusive intellectual property. 
End Customer may not modify, create derivative works, decompile, reverse engineer, attempt to gain access to the source code, or copy the Service or any of its components. Any feedback or suggestions provided by End Customer may be implemented by Yin Yang without any obligation to End Customer. Yin Yang may update the Service with new features, bug fixes, or other improvements at any time without notice. End Customer may contact [email protected] with any questions or concerns regarding this Agreement. Yin Yang processes data in accordance with the terms mentioned in the Data Processing Agreement ("DPA"). Either party may terminate this Agreement at any time, with or without cause, by providing written notice to the other party. In the event of termination, End Customer must immediately cease all use of the Service and delete or destroy all copies of any documentation or other materials related to the Service in its possession or control. Yin Yang may terminate this Agreement immediately and without notice, if End Customer breaches any term of this Agreement, including but not limited to engaging in any Prohibited Use. End Customer may also terminate this Agreement by providing written notice to Yin Yang at least thirty (30) days before the effective date of termination. In such event, End Customer must immediately cease all use of the Service and delete or destroy all copies of any documentation or other materials related to the Service in its possession or control. However, End Customer remains liable for any fees or charges incurred up until the effective date of termination. This Agreement will be effective upon End Customer's acceptance of these terms and will remain in effect until terminated by either party. This Agreement will automatically renew for additional periods equal to the initial term unless either party gives written notice of non-renewal at least thirty (30) days before the end of the then-current term.
End Customer's continued use of the Service following any renewal will constitute its acceptance of the then-current terms of this Agreement. The Service is provided "as is" without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose, and non-infringement. Yin Yang does not warrant that the Service will meet End Customer's requirements or that the operation of the Service will be uninterrupted or error-free. TO THE FULLEST EXTENT PERMITTED BY APPLICABLE LAW, IN NO EVENT WILL EITHER PARTY BE LIABLE TO ANY PERSON FOR ANY INDIRECT, INCIDENTAL, SPECIAL, PUNITIVE, COVER OR CONSEQUENTIAL DAMAGES (INCLUDING, WITHOUT LIMITATION, DAMAGES FOR LOST PROFITS, LOST REVENUE, LOST SALES, LOST GOODWILL, LOSS OF USE OR LOST CONTENT, IMPACT ON BUSINESS, BUSINESS INTERRUPTION, LOSS OF ANTICIPATED SAVINGS, LOSS OF BUSINESS OPPORTUNITY) HOWEVER CAUSED, UNDER ANY THEORY OF LIABILITY, INCLUDING, WITHOUT LIMITATION, CONTRACT, TORT, WARRANTY, BREACH OF STATUTORY DUTY, NEGLIGENCE OR OTHERWISE, EVEN IF A PARTY HAS BEEN ADVISED AS TO THE POSSIBILITY OF SUCH DAMAGES OR COULD HAVE FORESEEN SUCH DAMAGES. TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, OUR AGGREGATE LIABILITY AND THAT OF OUR AFFILIATES, OFFICERS, EMPLOYEES, AGENTS, SUPPLIERS AND LICENSORS, RELATING TO THE SERVICE(S), WILL BE LIMITED TO AN AMOUNT EQUAL TO TWELVE MONTHS OF THE SUBSCRIPTION CHARGES PAID BY YOU FOR THE SERVICE(S) BEFORE THE FIRST EVENT OR OCCURRENCE GIVING RISE TO SUCH LIABILITY. IN JURISDICTIONS WHICH DO NOT PERMIT THE EXCLUSION OF IMPLIED WARRANTIES OR LIMITATION OF LIABILITY FOR INCIDENTAL OR CONSEQUENTIAL DAMAGES, OUR LIABILITY WILL BE LIMITED TO THE GREATEST EXTENT PERMITTED BY LAW. NOTWITHSTANDING ANYTHING ELSE TO THE CONTRARY, WE DISCLAIM ALL LIABILITIES, TO THE MAXIMUM EXTENT PERMITTED BY LAW, WITH RESPECT TO THE SERVICES OFFERED DURING THE TRIAL PERIOD. 
This Agreement shall be governed by and interpreted in accordance with the laws of the state of Delaware, without giving effect to its conflict of law provisions. This Agreement constitutes the entire agreement between Yin Yang and the End Customer regarding the use of the Service and supersedes all prior or contemporaneous agreements or understandings, whether written or oral, regarding such use. Yin Yang may amend this Agreement at any time by posting the amended terms on its website or by notifying End Customer via email. The End Customer's continued use of the Service after such posting or notification constitutes End Customer's acceptance of the amended terms. If any provision of this Agreement is found to be invalid or unenforceable, the remaining provisions shall remain in full force and effect.

By Yin Yang Inc.: Yin Yang shall indemnify, defend, and hold harmless the End Customer from and against any losses, damages, liabilities, costs, and expenses (including reasonable attorneys’ fees) arising out of or in connection with any claim, suit, action, or proceeding brought by a third party to the extent that such claim, suit, action, or proceeding arises from (i) a breach by Yin Yang of any warranty or representation in this Agreement, or (ii) any third-party claim that the Service infringes or misappropriates any intellectual property right of a third party. However, Yin Yang shall have no obligation to indemnify, defend, or hold harmless the End Customer for any losses, damages, liabilities, costs, and expenses arising from or in connection with the End Customer's Prohibited Use of the Service.
By End Customer: End Customer agrees to indemnify, defend, and hold harmless Yin Yang, its affiliates, directors, officers, employees, agents, successors, and assigns from and against any claims, damages, losses, liabilities, costs, and expenses (including reasonable attorneys' fees) arising out of or relating to End Customer's use of the Service, any breach of this Agreement, or any violation of any applicable law or regulation. Yin Yang shall promptly notify End Customer of any such claim or demand and shall cooperate with End Customer, at End Customer's expense, in the defence of any such claim or demand. The failure of Yin Yang to enforce any right or provision of this Agreement shall not be deemed a waiver of such right or provision. End Customer may not assign or transfer this Agreement or any rights or obligations hereunder, whether by operation of law or otherwise, without Yin Yang's prior written consent. Yin Yang may assign this Agreement or any rights or obligations hereunder without End Customer's consent. Truto's use and transfer to any other app of information received from Google APIs will adhere to Google API Services User Data Policy, including the Limited Use requirements. By using the Service, End Customer acknowledges that it has read this Agreement, understands it, and agrees to be bound by its terms and conditions.
August 3, 2009 at 4:09 PM by Dr. Drang

Python 3.1 for the Mac comes as a .dmg file which, when mounted, has a rather spartan set of contents. Running the installer and accepting its defaults creates and fills a new 3.1 subdirectory in /Library/Frameworks/Python.framework/Versions and puts a python3 command in /usr/local/bin. Everything associated with Python 3.1 is distinct from the standard Python 2.5 that ships with Leopard. The python command still starts up version 2.5

$ python
Python 2.5.1 (r251:54863, Feb 6 2009, 19:02:12)
[GCC 4.0.1 (Apple Inc. build 5465)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>

and the python3 command starts up Python 3.1

$ python3
Python 3.1 (r31:73578, Jun 27 2009, 21:49:46)
[GCC 4.0.1 (Apple Inc. build 5493)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>

One thing that I expected to happen was the creation of a new 3.1/site-packages directory tree under /Library/Python. The 2.5/site-packages directory tree is where third-party modules for Python 2.5 go, but the installer didn’t create a corresponding tree for 3.1. I suspect that running python3 setup.py on a module packaged with Python’s distutils will create the 3.1/site-packages tree, but the first module I wanted to install was Stuart Colville’s titlecase, which just comes as a single Python source file to be moved somewhere where Python can find it.

[Aside: I had some hope that the title() method for strings in Python 3.1 would have been improved with regard to apostrophes. It doesn’t seem too much to ask that

"don't pass me by".title()

return

Don't Pass Me By

instead of the nonsensical

Don'T Pass Me By

Sadly, 3.1 is just as stupid as 2.5. The titlecase module is much smarter, and even knows to keep small words like articles and (most) prepositions uncapitalized unless they’re the first or last word of the title. It’s based on a Perl script by John Gruber described in this blog post.]
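The aside above is easy to verify, and the basic idea behind a smarter capitalizer can be sketched in a few lines (this is an illustrative sketch only, not the actual titlecase module, which handles small words, acronyms, and much more):

```python
# str.title() uppercases any letter that follows a non-letter, so the
# apostrophe restarts capitalization, producing the Don'T artifact.
print("don't pass me by".title())  # Don'T Pass Me By

# A minimal alternative: capitalize only the first character of each
# space-separated word, leaving everything after an apostrophe alone.
def simple_titlecase(s):
    return " ".join(w[:1].upper() + w[1:] for w in s.split(" "))

print(simple_titlecase("don't pass me by"))  # Don't Pass Me By
```

Splitting on whitespace instead of scanning for letter boundaries is what keeps the apostrophe from restarting capitalization.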
It turns out that simply creating a 3.1/site-packages tree in /Library/Python is sufficient to add it to Python 3.1’s module search path. Here are the commands to run:

cd /Library/Python/
sudo mkdir -p 3.1/site-packages
sudo chmod -R g+w 3.1

The -p switch tells mkdir to create all the necessary intermediate directories. They’ll have an owner of root and a group of admin. The chmod command allows all users in the admin group—which includes my usual login user account—to add files to the new directories. Without this, the .pyc bytecode files can’t be made, and the modules will have to be compiled from source every time they’re imported.

One more thing: the titlecase module was written for Python 2.x; it needs one change to make it 3.1-compatible. Change line 249 in the module's source file before you install it. I do wish Python, like Ruby, allowed base classes to be extended; titlecase would be more natural as a string method than a function.
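To confirm that the new tree really is on the interpreter's module search path, a quick check from any Python works:

```python
import sys

# List every entry of the module search path that looks like a
# site-packages directory; a newly created tree should appear here.
site_dirs = [p for p in sys.path if "site-packages" in p]
for p in site_dirs:
    print(p)
```

If the directory you created isn't listed, modules dropped into it won't be importable.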
5 Fantastic Web Development Forums

Errors, failures, bugs, and glitches are common to any web development project. Overcoming these problems often requires perusing server logs, flushing caches, counting semicolons, searching online, and sorting through posts on development forums. Development forums can be like a giant cyber water cooler where software engineers, hackers, and code writers can come together to describe problems and concerns and get helpful feedback from peers who have experienced or are experiencing similar obstacles. These forum encounters can get a project back on track. As an example, after running into an error on a commercial WordPress site that had just been moved to a new server ("Your PHP installation appears to be missing the MySQL extension which is required by WordPress"), a forum post on Ubuntu Forums used in conjunction with error logs led to discovering the problem.

There are almost certainly hundreds or even thousands of valuable web development forums on the Internet, so these five fantastic forums certainly don't represent an exhaustive list, but they are among some of the best.

Perhaps the best-known web development forum on the Internet, Stack Overflow is a completely free question-and-answer site. Stack Overflow encourages participation with badges for helpful answers or commenting. Software engineers Jeff Atwood and Joel Spolsky created the site in 2008 as an alternative to several "paid" help sites that offered to help with development problems for a price. It seems this approach has worked, since Stack Overflow has nearly one million registered users helping each other with topics ranging from Android app development to setting up a Yahoo! store.

Online media firm and book publisher SitePoint has a popular forum that is good for new developers.
Users will have to watch out for the occasional book offer, but great threads about databases, Ruby, Python, ecommerce, web security, and hundreds of other topics are readily available. All told, the SitePoint forum has more than 250,000 users.

Developer and entrepreneur Ryan Troy started the Ubuntu Forums site in October of 2004. Troy, who is co-author of the VMware Cookbook, quickly gained a following on the site, which soon became the official Ubuntu forum. In June 2007, Troy even passed ownership to Ubuntu's parent company, Canonical. The site has threads about anything and everything Ubuntu Linux related, including web development topics specific to Ubuntu-powered servers and the applications running on those servers. It is not uncommon to find threads about WordPress, Magento, or similar platforms, since these often run on an Ubuntu server's LAMP stack.

What Dev Shed might lack in graphic design, it makes up for in forum activity. At any moment there might be hundreds of active threads from the site's 440,000-plus registered users. It is worth mentioning that Dev Shed has appealed to its users for ideas aimed at improvement, but this is still a very valuable resource.

The forums have many users and can be very helpful for developers making the transition to Microsoft products or seeking to write better code for Internet Explorer.

Here are several more forums that you might want to consider.
[OpenAFS] Re: AFSDB records
Tue, 4 May 2010 13:01:48 -0500

On Tue, 4 May 2010 15:31:02 +0000 "Brunckhorst, Ralf"
<email@example.com> wrote:

> * What is the behavior of a running afs-client if we change an
> AFSDB-record in DNS (because one of the AFSDB-server is moved to a
> new location)?

Clients should cache the AFSDB information for the TTL in the DNS record
at the longest. After that, they should fetch the new AFSDB information.

> * Will the changes be done automatically on the afs-client?
> * Also for the Cache Manager's kernel-resident list of database
> server machines?

I don't understand the difference between these two questions. If your
CellServDB has just the two cells listed, without any dbservers for the
cells, the kernel-resident list of database server machines is the only
list of database server machines. But yes, it will happen automatically.
If, for whatever reason, the client needs to contact the dbservers for a
cell, it will re-fetch the list of database servers if that list has
expired (according to the TTL in the previous DNS response).

> We have an update-script for the existing old afs-clients that have a
> CellServDB file with static IP info. This will be run daily via cron.
> It fetches a central CellServDB file (if it differs from the local
> one) and changes the kernel-resident list via 'fs newcell …'
> * How can we migrate those clients to use the AFSDB-records without
> rebooting (will the kernel-resident list be updated if the afsd
> started with -afsdb)?
> * Or is the reboot needed to get rid of the kernel-list?

Keep in mind that for Linux you usually don't need to reboot the entire
machine; you can just restart the afs client (though if these clients
are very old, maybe you cannot do that). Not that that helps much, since
restarting the AFS client tends to be about as disruptive as a reboot.

Anyway, AFSDB records will not override static information set via
CellServDB or 'fs newcell' (unless the CellServDB specifies no hosts).
So, as far as I know, you need to reboot/restart the clients. It may be
possible to make 'fs newcell' clear the host list, so that new dbserver
information will be fetched from DNS, but right now 'fs newcell' does
not allow that. If you modify 'fs newcell' (or get someone to modify
it), you might be able to get clients to use AFSDB information without
rebooting, though I'm not entirely sure that works.

> * Which is the oldest afs-version that have support for -afsdb?

OpenAFS 1.1.0 was the first release with it, I think.
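For reference, an AFSDB answer carries both the TTL that governs the client's cache lifetime and the dbserver hostname. It can be inspected with something like `dig +noall +answer AFSDB <cellname>`; the answer line below is made up for illustration, and picking it apart looks like this:

```shell
# Hypothetical AFSDB answer in dig's presentation format:
#   name  TTL  class  type  subtype  hostname
answer="example.org. 3600 IN AFSDB 1 afsdb1.example.org."

ttl=$(echo "$answer" | awk '{print $2}')    # cache lifetime in seconds
host=$(echo "$answer" | awk '{print $6}')   # database server hostname
echo "dbserver $host is cacheable for up to ${ttl}s"
```

Lowering the TTL well before a planned dbserver move shortens the window in which clients keep using stale AFSDB data.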
2 Access 2007
Access 2007 is the database software in the Microsoft 2007 Office Suite. It allows you to order, manage, search, and report large amounts of information. This tutorial will show you how you might plan and build a database from scratch, including how to set up tables, create and use forms to enhance data integrity, design and run meaningful queries, and produce useful and attractive reports.

3 UNDERSTANDING DATABASE CONCEPTS
Why do I need a database?
Exploring an Access database
Thinking about database design

4 Building the Database
Setting up tables and fields
Building up table relationships
Entering and editing data in tables
Creating and using forms
Making forms more usable with controls
Making forms more attractive

5 Analyzing and Reporting Data
Sorting records
Filtering records
Using queries to make data meaningful
Using reports to make data useful to others

6 Why Do I Need a Database?
Access 2007 is a program that allows you to create and manage databases. A database is a place where you can store information related to a specific topic. How you intend to use the information will determine whether you need an Access database or a different program to create and manage your data. We will describe here what a database does and how to decide whether you need a database to manage your information.

7 What Does a Database Do?
A database allows you to store information related to a specific topic in an organized way. In addition to storing data, you can also sort, extract, and summarize information related to the data. One of the software programs that allows you to do this is Microsoft Office Access 2007, which is a database creation and management program.

8 There are many types of data you may need to store and manage: text and numbers, for example. Depending on what you want your data to do for you, you may or may not need to use a database. You might be able to use a spreadsheet program like Microsoft Excel.
How do you know which data can be adequately managed with Excel and which data really requires Access to manage it more efficiently? It depends on how much data you have to manage, and what you want your data to do for you. Let's try to answer this by looking at a bookstore scenario. If you work for a bookstore business, you might have to keep track of your customers and their orders. You could use Microsoft Excel to store and manage this type of data; however, Excel is a spreadsheet software program that is traditionally used to manage numerical information, like totaling up all purchases by one customer. While it can do an adequate job at storing some types of text-based data -- like the customer's name and contact information -- that is not really what Excel was designed to do. The following examples will show you why an Access database may be a better choice for the bookstore business.

9 Excel Example: Customer List and Order Tracking
Sorting and Filtering to Locate Data in Excel
In Excel, you can store your data in a worksheet so that you can mail promotional information to the entire list or sort to find specific customers to target mail. You can even filter the customer information to display all the customers that live in a particular state, like in the following image. Additionally, you can sort the data to order it in a particular way. However, if you want to see very specific results in your data, like how many orders a single customer placed in a year, Excel is not as efficient as Access at providing you with that data.

10 Data Entry in Excel
If you use an Excel spreadsheet to track your orders, each time a customer places an order, you would have to enter a new row of information in the spreadsheet. This would likely include the customer's name and address. If that customer orders from your company more than once, that information would have to be entered each time.
Your spreadsheet would contain redundant information. As you can see in the image above, customers Tonya Bullock and McKenzie Grant each placed several orders on different days and for different books. Their customer contact information was entered every time they placed an order. This is the limitation of spreadsheet software such as Excel, because it is a single, flat file.

11 Entering Data in Access
Microsoft Access is designed to manage information. Access allows you to enter the client's name, address, and phone number just once, the first time they place an order. This information is entered into an Access table designed to hold basic customer information on clients. A table is a list of related information in columns and rows. In a table, each row is called a record and each column is called a field. An Access table in Datasheet View looks similar to an Excel spreadsheet, as you can see below. In addition to the table with customer information, you would probably also want a table with information about the products you sell, and a third table to hold data related to specific customer orders. These tables would all be linked together, to help you make the most out of your data. Access is called a relational database management program because the tables are linked, or related, as you can see in the image below. In this example, the Customer Info and Orders tables are linked by Customer ID and Book ID.

12 Now, let's assume that you want to identify the book that was most popular in the state of North Carolina. With Access, this is possible because you can search and retrieve information from multiple tables at the same time. The Customer Info table contains information about the states, and the Order table includes information about which books were ordered. You will need information from both tables to identify the book that was most popular in a specific state.
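The cross-table question above maps directly onto a relational join. As a rough sketch of the same idea (using Python's built-in sqlite3 rather than Access itself; all table names, column names, and rows below are invented for illustration):

```python
import sqlite3

# Tiny stand-ins for the Customer Info and Orders tables described above.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, name TEXT, state TEXT);
    CREATE TABLE orders (order_id INTEGER PRIMARY KEY, customer_id INTEGER, book TEXT);
    INSERT INTO customers VALUES (1, 'Tonya Bullock', 'NC'), (2, 'McKenzie Grant', 'NC'),
                                 (3, 'A. Reader', 'VA');
    INSERT INTO orders VALUES (1, 1, 'Cookbook'), (2, 2, 'Cookbook'),
                              (3, 2, 'Atlas'), (4, 3, 'Atlas');
""")

# "Which book was most popular in North Carolina?": join the two tables
# on customer_id, filter by state, and count orders per book.
row = con.execute("""
    SELECT o.book, COUNT(*) AS n
    FROM orders o JOIN customers c ON o.customer_id = c.customer_id
    WHERE c.state = 'NC'
    GROUP BY o.book ORDER BY n DESC LIMIT 1
""").fetchone()
print(row)  # ('Cookbook', 2)
```

The join is exactly the "link" between Customer Info and Orders that the tutorial describes; Access builds the same kind of query through its graphical query designer.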
You could look at the information in these tables separately to answer your question of which book was most popular in North Carolina. In the Customer Info table, you could see all the customers from a specific state, NC. And in the Books table, you can see all the books that you have in stock. The real power of Access comes in being able to link and extract information from multiple tables to answer specific questions. As you can see below, the results of your specific question, or query, are displayed for you.

14 Introduction
Once you have determined that an Access database will help you store and manage your data, you will need to learn the parts of a database, how to start using Access, and how to navigate the Access window. In this lesson, we will provide a basic overview of Access, including the parts of a database, and common tasks you can complete using a database.

15 Databases in Our Lives
Think about all the information we encounter on a typical day that might be organized by a database. For example, if you go shopping at a department store for a toaster, the store inventory of products is information that has to be stored somewhere, along with the price of each product. When you make a purchase, the store needs to be able to store the sales information to determine the daily sales total and how to track the decrease in inventory. A database could store this information, and also allow the store to quickly determine how many Brand X toasters are in the inventory without needing to count the inventory on the shelves.
The store has to pay for the items. Then, when the customer buys the items, the cash register retrieves prices and the customer pays for products. Where might databases be involved in the situation?Restaurant: Where does the food come from? How does management know when to reorder a product? How are bills paid?Traffic Lights: Who or what controls when the lights turn red or green?A database maintains order and structure in our lives. Databases are created using programs such as Microsoft Office Access 2007, which is a relational database program. 18 When you start Access 2007, you will see the Getting Started window When you start Access 2007, you will see the Getting Started window. In the left pane, the template categories including the featured local templates are listed, as well as the categories on Office Online. Templates are pre-built databases focused on a specific task that you can download and use immediately. 19 In the example below, the featured templates are selected, and the template options are displayed in the center area of the screen. Featured templates include database template options that are available online, as well as templates available as part of the local version of Access.
How to install a deb package in Ubuntu from the command line

When you switch to Linux, the experience can be overwhelming at the start; even basic things like installing applications in Ubuntu are done differently.

If you want to install a specific version of PHP, then this article can be helpful for you. PHP 7.3 is the latest stable release of PHP. Say thanks to Ondřej Surý for maintaining PPAs of most of the popular PHP versions on Launchpad; they are available for all current Ubuntu releases and their derivatives.

To change desktop themes, we recommend installing the GNOME Tweaks application (formerly known as GNOME Tweak Tool). To install it graphically, open the Ubuntu Software application, search for "tweaks", then install the GNOME Tweaks application.

If you have .rpm files but you want a .deb package for Debian, Ubuntu, or other Debian-derived distributions, you can use the alien package converter.

Many of you already know about this, but there are many users upgrading from Ubuntu 12.04 who might not be aware of what they need to do when a package fails to install because it depends on ia32-libs (on 64-bit Ubuntu); the ia32-libs package is no longer available. If you get complaints about packages with "unmet dependencies", run apt-get -f install.

You can install Grub Customizer and receive future updates via Software Updater.

We are building Debian packages for several Ubuntu platforms, listed below.
These packages are more efficient than source-based builds and are our preferred installation method.

This article provides some useful commands that will help you to handle package management in Debian/Ubuntu-based systems. It explains how quickly you can learn to install, remove, update, and search software packages using the apt-get and apt-cache commands from the command line. If you are using Synaptic, just search for the packages listed below. If you can't find a .deb package in any of the Debian or Ubuntu repositories or elsewhere, you can use the alien package converter to install it.

Brief: This detailed guide shows you various ways to install applications in Ubuntu Linux, and it also demonstrates how to remove installed software in Ubuntu.

apt-get -f install does the same thing as Edit -> Fix Broken Packages in Synaptic. apt-get check is a diagnostic tool; it does an update of the package lists and checks for broken dependencies. You can use Synaptic to install packages, or the command line.

How to install Grub Customizer in Ubuntu 16.04: this open-source application is available in the developer's PPA repository for all current Ubuntu releases and their derivatives.

Ubuntu fix for TL-WN725N wireless not working: a step-by-step install of the TP-Link TL-WN725N nano version 2 WiFi driver on Ubuntu.
* User verified on Raspberry Pi, Kali Linux, Linux Mint 16 and 17, Ubuntu, and Debian: this how-to for the TP-LINK TL-WN725N USB nano WiFi driver install works on all of these distributions.

Different Linux distributions install applications from a pre-compiled package that contains binary files, configuration files, and also information about the package's dependencies.

The package unattended-upgrades provides functionality to install security updates automatically. You could use this instead of applying security updates by hand.

Package management software allows you to easily control the software on your servers. These tools allow you to install, remove, update, and configure thousands of packages through a unified interface.
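The usual command-line sequence for a local .deb is dpkg followed by the apt-get dependency fix described above. A minimal sketch (install_deb is a made-up helper name; with DRY_RUN=1 it only prints the commands, so nothing is actually installed):

```shell
# Install a local .deb, then let apt resolve any missing dependencies.
# Set DRY_RUN=1 to print the commands instead of running them with sudo.
install_deb() {
    pkg="$1"
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "dpkg -i $pkg"
        echo "apt-get -f install"
    else
        sudo dpkg -i "$pkg" && sudo apt-get -f install
    fi
}

DRY_RUN=1 install_deb google-earth.deb
```

Running `apt-get -f install` immediately after `dpkg -i` pulls in any dependencies that dpkg alone could not satisfy, which is exactly the "unmet dependencies" fix mentioned earlier.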
Where am I losing heat from my house?

My house is about 5 years old and our local building codes required a 5-star energy rating (Melbourne, Australia). This means there's solar hot water and both the walls and ceiling are insulated. But it takes a lot of energy to heat the room, and as soon as you turn off the central heating (ceiling ducted) the house cools very quickly. Somewhere, somehow, I'm losing way too much heat. How do I work out where the heat is going? There's probably a lot of things I can do to save some of the heat, like installing double-glazing or even putting pelmets above the windows. But I would rather work out if the heat is going somewhere in particular that I can quickly and easily rectify. Is there some gadget I can get to test something? Or some other method that's free/cheap?

Find an insulation installer that is a certified thermographer. An infrared scan of your home should reveal where the heat loss is occurring. The best time to do this is when the outside temp is lowest and your house is warmest. The problem could be missing or improperly installed insulation. Without a scan you are only guessing what the problem is. Ask about a repeat scan after any repairs to verify the problem has been resolved.

One quick & easy DIY trick is to light something that will smoke (like incense) and, when there is a strong temperature difference between inside and outside, walk around the inside of the house very slowly. Look for drafts that either pull or push the smoke. Common locations for leaks are around outlets, around plumbing where it comes through a wall, around windows & doors, the windows & doors themselves, along the eaves of the roof, at the openings for attics, and the house sills. Also, if you have cobwebs, you have a draft. The webs tend to form in the area where the draft is strongest.
As for the sills - it was once assumed that the pressure on the sill from the weight of the house would be sufficient to seal any gaps between the sill and the top of the slab/foundation wall. This turns out to be incorrect. The sills are a common entryway for cold air, which is then pulled upward into the rest of the house by the "stack effect," caused by heated air rising. Another possibility is leaky ducts in your heating system. There are several tapes commonly used for duct sealing that are entirely inadequate, so their glue breaks down in a relatively short time. Since your ducts come in via the ceiling, if they're leaky, then the hot air in the room could easily go right back out the way it came in, dissipating into the space above the ceiling via the duct joints.

Note: If you have ANY combustion appliances in your home (gas stove and/or oven, gas heater, oil furnace, woodstove, etc.) you should not start sealing leaks without getting an energy audit done. It is imperative that you ensure adequate and appropriate ventilation in your home, lest you wind up giving your family carbon monoxide poisoning. Good luck! Once the problem is solved, your family will be much more comfortable.

Even without combustion appliances, an energy audit is still the best way to go. Air sealing without proper ventilation can cause mold/moisture and other air quality problems if you aren't careful.

I live in Queensland and I understand Melbourne winters. Check the silicone around window seals. My home used to whistle in winter with the wind. Also check around the seals of your bathroom extraction fans.
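Before spending money, a back-of-envelope conduction estimate can show which surfaces to suspect first: steady-state heat flow through a surface is roughly Q = U * A * dT. The U-values, areas, and temperatures below are generic assumed figures for illustration, not measurements of any particular house:

```python
# Rough conductive heat-loss comparison, Q = U * A * dT (in watts).
# U-values (W/m^2.K) are typical textbook figures, assumed here:
# insulated wall ~0.4, insulated ceiling ~0.3, single glazing ~5.8.
surfaces = {
    "walls (insulated)":   {"U": 0.4, "area_m2": 120},
    "ceiling (insulated)": {"U": 0.3, "area_m2": 150},
    "windows (single)":    {"U": 5.8, "area_m2": 25},
}
delta_t = 15  # e.g. 21 C inside vs 6 C on a winter night, assumed

losses = {name: s["U"] * s["area_m2"] * delta_t for name, s in surfaces.items()}
for name, watts in sorted(losses.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {watts:.0f} W")
```

With numbers like these, a modest area of single glazing leaks more heat than all the insulated walls combined, which is why double-glazing and pelmets come up so often; air leaks and duct losses then come on top of this conduction figure.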
I know this might be a bit of a vague question, but hopefully I can get some insight into the hardware I'll need. I'm collecting netflow data and use Kibana to make visualizations. My dashboard has about 20 visualizations: bars, pie charts, and tables. Most of them are sums of total traffic per port/application/ip, sums of total data usage per day/month/week, and sums of data usage per ip per day/month/week.

This is the time a request takes for a single pie graph summing total.bytes per ip/port.

Hits 13885344
Query time 3705ms
Request time 4895ms

This is the same visualization but inside my dashboard.

Hits 13885180
Query time 14151ms
Request time 21212ms

So far I've noticed performance is scaling fairly linearly. E.g. 60 days of data will take about twice as long to load as 30 days of data. Based on my sample data I think I could end up with ~3,000,000,000 documents. What kind of cluster would I need to be able to search through that data with somewhat acceptable performance?

Right now I'm running a single 15GB instance on Elastic Cloud, weekly indices and 1 shard per index. I'd say performance is reasonable at the moment (around 25 seconds to fully load a dashboard). But if I'd need to search ~200 times the amount of data I have right now, what kind of cluster would I be looking at?

My understanding is that more shards spread over multiple instances will increase performance because searches will run in parallel. How about more shards per index on the same instance? Will that increase performance as well? How linear is performance scaling when you add an instance to a cluster (e.g. will doubling the instances give ~2x the performance)?

I'm not sure how to check server utilization on Elastic Cloud, but before this I had everything running on an AWS instance (16GB, 4 core), and as far as I could tell CPU utilization only spiked for a couple of seconds during a search. The above example is a worst case scenario where all the data would be displayed.
A more realistic use case is one where the same number of documents will be searched, but only 1/100th of the data is needed for aggregations etc. (filtered based on the location of my devices). TL;DR: What kind of cluster would I be looking at if I need to search and visualize ~3 billion documents in 1-2 minutes?
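Taking the question's own numbers at face value, a rough back-of-envelope estimate is possible. The sketch below assumes query time grows linearly with document count and shrinks linearly with the degree of shard parallelism; both are optimistic assumptions, so treat the result as a floor, not a sizing recommendation.

```python
# Rough sizing estimate from the numbers quoted above. Assumes
# (optimistically) that query time scales linearly with document count
# and divides evenly across shards searched in parallel.

def required_parallelism(current_docs, current_secs, target_docs, target_secs):
    """How many-fold parallel speedup is needed to hit the latency target."""
    projected_secs = current_secs * (target_docs / current_docs)  # linear scaling
    return projected_secs / target_secs

# Dashboard query today: ~13.9M docs in ~14.2s. Target: 3B docs in <= 120s.
factor = required_parallelism(13_885_180, 14.151, 3_000_000_000, 120)
print(round(factor))  # prints 25, i.e. roughly 25-way parallelism needed
```

Under those assumptions you would need on the order of 25 shards actually executing concurrently (spread over enough instances/cores that they really do run in parallel); real-world scaling is sublinear, so the true number is higher.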
OPCFW_CODE
from network_simulator.Dijkstra import Dijkstra
from network_simulator.Network import Node, Network


def test_dijkstra():
    test_node1 = Node(node_id=1, adjacency_dict={2: {'weight': 5, 'status': True}})
    test_node2 = Node(node_id=2, adjacency_dict={3: {'weight': 5, 'status': True}})
    test_node3 = Node(node_id=3)
    test_node4 = Node(node_id=4)
    test_net = Network({test_node1.node_id: test_node1,
                        test_node2.node_id: test_node2,
                        test_node3.node_id: test_node3,
                        test_node4.node_id: test_node4})
    weight, previous = Dijkstra.dijkstra(graph=test_net, source=1)

    # Compare values with ==, not "is": identity checks on ints only pass
    # by accident of CPython's small-integer caching.
    assert weight[1] == 0
    assert weight[2] == 5
    assert weight[3] == 10
    assert weight[4] == float('inf')
    assert previous[1] is None
    assert previous[2] == 1
    assert previous[3] == 2
    assert previous[4] is None


def test_minimum_unvisited_distance():
    unvisited = [1, 2, 3, 4, 5]
    weight = dict.fromkeys(unvisited, float('inf'))
    assert Dijkstra.minimum_unvisited_distance(unvisited, weight) == 1
    weight[2] = 0
    assert Dijkstra.minimum_unvisited_distance(unvisited, weight) == 2
    weight[3] = 3
    weight[5] = 5
    assert Dijkstra.minimum_unvisited_distance(unvisited, weight) == 2
    unvisited.remove(2)
    assert Dijkstra.minimum_unvisited_distance(unvisited, weight) == 3
    unvisited.remove(3)
    assert Dijkstra.minimum_unvisited_distance(unvisited, weight) == 5
    unvisited.remove(5)
    assert Dijkstra.minimum_unvisited_distance(unvisited, weight) == 1


def test_shortest_path():
    test_node1 = Node(node_id=1, adjacency_dict={2: {'weight': 10, 'status': True}})
    test_node2 = Node(node_id=2, adjacency_dict={3: {'weight': 5, 'status': True}})
    test_node3 = Node(node_id=3, adjacency_dict={4: {'weight': 3, 'status': True}})
    test_node4 = Node(node_id=4, adjacency_dict={1: {'weight': 6, 'status': True}})
    test_node5 = Node(node_id=5, adjacency_dict={1: {'weight': 1, 'status': True},
                                                 3: {'weight': 2, 'status': True}})
    test_net = Network({test_node1.node_id: test_node1,
                        test_node2.node_id: test_node2,
                        test_node3.node_id: test_node3,
                        test_node4.node_id: test_node4,
                        test_node5.node_id: test_node5})
    dijkstra = Dijkstra(graph=test_net, source=test_node1.node_id)
    path, weight = dijkstra.shortest_path(destination=test_node3.node_id)
    assert path == [1, 5, 3]
    assert weight == 3

    test_net.remove_node(test_node5.node_id)
    dijkstra = Dijkstra(graph=test_net, source=test_node1.node_id)
    path, weight = dijkstra.shortest_path(destination=test_node3.node_id)
    assert path == [1, 4, 3]
    assert weight == 9

    test_net.remove_node(test_node4.node_id)
    dijkstra = Dijkstra(graph=test_net, source=test_node1.node_id)
    path, weight = dijkstra.shortest_path(destination=test_node3.node_id)
    assert path == [1, 2, 3]
    assert weight == 15

    test_net.remove_node(test_node2.node_id)
    dijkstra = Dijkstra(graph=test_net, source=test_node1.node_id)
    path, weight = dijkstra.shortest_path(destination=test_node3.node_id)
    assert path is None
    assert weight == float('inf')


def test_all_shortest_paths():
    test_node1 = Node(node_id=1, adjacency_dict={2: {'weight': 10, 'status': True}})
    test_node2 = Node(node_id=2, adjacency_dict={3: {'weight': 5, 'status': True},
                                                 4: {'weight': 10, 'status': True},
                                                 5: {'weight': 15, 'status': True}})
    test_node3 = Node(node_id=3)
    test_node4 = Node(node_id=4)
    test_node5 = Node(node_id=5)
    test_net = Network({test_node1.node_id: test_node1,
                        test_node2.node_id: test_node2,
                        test_node3.node_id: test_node3,
                        test_node4.node_id: test_node4,
                        test_node5.node_id: test_node5})
    dijkstra = Dijkstra(graph=test_net, source=test_node1.node_id)
    shortest_paths = dijkstra.all_shortest_paths()
    assert shortest_paths[1] == ([1], 0)
    assert shortest_paths[2] == ([1, 2], 10)
    assert shortest_paths[3] == ([1, 2, 3], 15)
    assert shortest_paths[4] == ([1, 2, 4], 20)
    assert shortest_paths[5] == ([1, 2, 5], 25)
STACK_EDU
By merize147 at 2006-06-29 16:10

Ok, so the other day someone asked me to figure something out, and here was my reply, in case you happen to be interested. The following was based on a Ubuntu 6.06 system, but any distro will do as long as you can install the programs. (apt-get install is a wonderful thing)

Needed programs: ssh, autossh, and all of their dependencies.

The object will be to forward port 6667 to your LUG's group server through an ssh tunnel that will reconnect if severed. OK, this is the ideal time to install any missing programs. We will be using a key pair to authenticate the ssh session.
(a) type 'ssh-keygen -t rsa'
(b) make sure it defaults to: '/<user's home dir>/.ssh/id_rsa'
(c) no passphrase (not that secure, but easy to set up)
(d) you should now have the key pair in the .ssh directory (id_rsa & id_rsa.pub)
(e) make a copy of your public key with the name 'authorized_keys' (note: sshd looks for 'authorized_keys', plural)
(f) 'cp /<user's home dir>/.ssh/id_rsa.pub /<user's home dir>/.ssh/authorized_keys'
(g) copy authorized_keys to the .ssh directory of the remote system (use 'scp' for secure comms)
(h) 'scp /<user's home dir>/.ssh/authorized_keys <remote user>@<ip address>:/<user's home dir>/.ssh/'
(i) you may have to create the .ssh folder on the remote system if it is not there. This happened to me on my OpenBSD test box.

Verify that ssh works with the new keys.
(a) ssh -i /<user's home dir>/.ssh/id_rsa <remote user>@<ip address>
(b) when you connect you should have direct access to the system and not be prompted for a passphrase.

Time to forward a port
(a) add the port forward feature to the ssh command: '-L <local port>:system:<remote port>'
(b) '-L 1234:localhost:6667' would be my local system listening on port 1234 and sending requests to port 6667 of the remote's local loopback address (127.0.0.1)
(c) if forwarding is enabled on the remote system, then you could use the remote system as a stepping stone to another.
(d) the new command would be:
(e) ssh -i /<user's home dir>/.ssh/id_rsa -L <local port>:system:<remote port> <remote user>@<ip address>
(f) ie: 'ssh -i /root/.ssh/id_rsa -L 1234:localhost:6667 <remote user>@<ip address>'
(g) verify it works:
(g)1. connect to the remote system
(g)2. set your irc client to connect locally on port 1234, which should forward the request to the remote system.

Keeping the connection alive
(a) Dear fellow admins, <screaming> stop reconfiguring the firewall and killing my open connection </screaming>. Thank you.
(b) use the autossh command to monitor the connection and reconnect when needed. (make sure you replace the ssh with autossh)
(c) 'autossh -i /root/.ssh/id_rsa -L 1234:localhost:6667 <remote user>@<ip address>'
(d) typing this out all the time sucks, so write a script and make it executable.

This of course is a very basic instruction set. Both ssh and autossh have many options to suit your needs, but that is for you to figure out.
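Following the closing suggestion to wrap this in an executable script, a minimal sketch could look like the one below. The key path, ports, and remote account are placeholders; adjust them to your own setup.

```shell
#!/bin/sh
# Wrapper around the autossh invocation from the steps above, so the long
# command doesn't have to be typed each time. KEY, the ports, and REMOTE
# are placeholders for your own setup.
KEY="$HOME/.ssh/id_rsa"
LOCAL_PORT=1234
REMOTE_PORT=6667
REMOTE="user@192.0.2.10"   # placeholder remote account/address

CMD="autossh -i $KEY -L $LOCAL_PORT:localhost:$REMOTE_PORT $REMOTE"
echo "Starting tunnel: $CMD"
# Uncomment the next line to actually start the tunnel:
# exec $CMD
```

Save it as, say, ~/bin/lug-tunnel, then 'chmod +x' it and run it instead of retyping the full command.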
OPCFW_CODE
Old scifi tv series where a possessed astronaut tries to figure out a passcode

Trying to find the name of the show and episode of an old (possibly black and white) sci-fi television show for my dad. From what he can remember, the episode is about a couple of astronauts on a planet; one of them leaves the spaceship/base and gets possessed. He comes back to attack the other astronaut, but the combination to the lock has been changed. The episode ends with the possessed astronaut at the keypad trying every combination: 0000, 0001, 0002, etc. Sorry it's not much to go on, but any help would be appreciated.

Hi, welcome to SF&F. Do you know if he saw this in the 50s or the 60s?

He can't remember when. He thinks it was black and white, but he's not positive on that.

The passcode sounds post-1980 -- most people only started to become familiar with passcodes, I think, with the advent of ATMs, which was probably after 1982 or 1983 -- it was so new that money could be removed multiple times from different locations because there was no network (I can't even think how that worked -- did they upload every account balance using a diskette every night? Could you only use ATMs from your branch? Or maybe they had a network but only a nightly batch job?)

Could this have been a Gerry Anderson show? I remember something similar in either Stingray or Captain Scarlet.

@releseabe, I think the early bank cards had a magnetic strip that could record withdrawals. So, if you took your card to another machine, it would read that you had already withdrawn a certain amount that day. And there was a maximum daily withdrawal limit.

@pete: I am pretty sure that around 1983 it was possible to withdraw from multiple ATMs, and if you only had 200, you could get out more. Now, if today the bank accidentally credits your account with an (extra) million bucks and something like this has happened, spending it is illegal and they will legally be able to get it back, or you might go to jail for fraud. Same thing in 1983.
I assume addressing the problem, perhaps as you describe, was a very high priority, although the paucity of ATMs in those days mitigated it.

Not a complete match, but Doctor Who: Harvest of Time has a similar scene.

I remember this one. I think it's Monsters S3E10, "The Waiting Game", from 1990. It isn't astronauts though; it's US Air Force officers inside a nuclear launch facility who live through a nuclear war, only to be confronted with vampires who are now free to roam the world, safe under the dark skies of the apocalypse. The ending scene is exactly as you describe it. Full episode: https://www.dailymotion.com/video/x6vuh1v
STACK_EXCHANGE
After creating a database and running queries, you have taken the first steps toward becoming a beginner MySQL developer. However, there is still more to learn and practice to improve your skills and knowledge. Some of the next steps you can take are learning and using the following:

- Familiarize yourself with the different data types, such as integers, strings, dates, and booleans, and choose the right one for your data.
- Create and use indexes to optimize the performance of your queries and reduce the load on your database server.
- Combine data from multiple tables and sources using joins, subqueries, and unions.
- Use functions, procedures, triggers, and views to encapsulate logic, automate tasks, and create reusable components.
- Use transactions, locks, and isolation levels to ensure data integrity and consistency in concurrent operations.
- Use backup and restore tools to protect your data from loss or corruption.
- Use security features, such as users, roles, privileges, and encryption, to protect your data from unauthorized access or modification.
- Use debugging and testing tools like logs, error messages, breakpoints, and assertions to identify and fix errors in your code or queries.
- Use documentation and commenting tools, such as comments, diagrams, schemas, and manuals, to explain and document your code or queries.

These tasks will expand your knowledge and skills in using Percona Server for MySQL and help you become more confident and proficient in developing database applications. Review the Percona Server for MySQL documentation for more information.

Other Percona products¶

For backups and restores¶

Percona XtraBackup (PXB) is a 100% open source backup solution for all versions of Percona Server for MySQL and MySQL® that performs online non-blocking, tightly compressed, highly secure full backups on transactional systems. Maintain fully available applications during planned maintenance windows with Percona XtraBackup.
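The transactions item from the next-steps list above is easy to see in miniature. The sketch below uses Python's bundled sqlite3 module purely as a stand-in engine to illustrate the all-or-nothing idea; with Percona Server for MySQL the same pattern applies through any MySQL client library.

```python
import sqlite3

# Illustration of transactions and indexes from the "next steps" list,
# using SQLite as a stand-in engine (the concepts carry over to MySQL).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("CREATE INDEX idx_balance ON accounts(balance)")  # speeds up lookups
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 50)")
conn.commit()

try:
    with conn:  # one transaction: both updates commit, or neither does
        conn.execute("UPDATE accounts SET balance = balance - 200 WHERE id = 1")
        conn.execute("UPDATE accounts SET balance = balance + 200 WHERE id = 2")
        if conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0] < 0:
            raise ValueError("overdraft: roll back both updates")
except ValueError:
    pass

balances = dict(conn.execute("SELECT id, balance FROM accounts"))
print(balances)  # {1: 100, 2: 50}: the failed transfer left no trace
```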
For monitoring and management¶

Percona Monitoring and Management (PMM) monitors and provides actionable performance data for MySQL variants, including Percona Server for MySQL, Percona XtraDB Cluster, Oracle MySQL Community Edition, Oracle MySQL Enterprise Edition, and MariaDB. PMM captures metrics and data for the InnoDB, XtraDB, and MyRocks storage engines, and has specialized dashboards for specific engine details.

For high availability¶

Percona XtraDB Cluster (PXC) is a 100% open source, enterprise-grade, highly available clustering solution for MySQL multi-master setups based on Galera. PXC helps enterprises minimize unexpected downtime and data loss, reduce costs, and improve the performance and scalability of their database environments, supporting your critical business applications in the most demanding public, private, and hybrid cloud environments.

Advanced command-line tools¶

Percona Toolkit is a collection of advanced command-line tools used by the Percona support staff to perform a variety of MySQL, MongoDB, and system tasks that are complex or difficult to perform manually. These tools are ideal alternatives to "one-off" scripts because they are professionally developed, formally tested, and documented. Each tool is self-contained, so installation is quick and easy and does not install libraries.

Get expert help¶

If you need assistance, visit the community forum for comprehensive and free database knowledge, or contact our Percona Database Experts for professional support and services.
OPCFW_CODE
Cube Cluster can be used to distribute model steps across multiple processing cores. These cores can be located on the same computer or on a network of computers. This post documents the steps required to set up Cluster across a network of computers. It assumes that your model steps are already set up to be distributed using Cluster. If not, the model should be reviewed and DISTRIBUTE statements added at appropriate locations/steps in the model flow. For more information please refer to http://community.citilabs.com/t/how-to-set-up-distributemultistep-and-distributeintrastep/344

Step 1: Set up a shared drive

Set up a shared drive which can be accessed from all the computers in the network. This will be the location from which models should be run. Typically, the folder location on the main computer, where the model run will be started, is set up as a shared drive. The drive letter for the shared drive should be the same on all computers. All file location references in the model script will point to the shared drive path, and the processing cores on networked computers will be reading and writing data to this location.

Step 2: Start Cluster nodes

Start Cluster nodes manually on all computers using the Cluster node management tool. Cluster nodes cannot be started from Voyager script when using multiple machines. They must be started using the Cluster node management tool on each of the computers. Identify how many Cluster nodes you would like to run on each computer and assign a unique process number list to each computer. In the example below, the user has 3 computers with 16, 8 and 6 processing cores. The process list for each computer is set as noted below. The process lists should be sequential and non-overlapping across the networked computers. In the Cluster node management tool, (i) navigate to the model application folder in the shared drive location and enter the Cluster process ID used in the model.
The model application folder is the location where the main application in your model is saved. This is the working folder location for the model run, where all Cluster node communications occur. If your model uses the COMMPATH keyword in Distribute statements to set a folder location outside of the model application folder, then Cluster nodes should be started in the COMMPATH folder. (ii) Enter the process list identified for that computer and start the nodes. (iii) Repeat steps (i) and (ii) on all computers with the appropriate process list. For more information on the Cluster node management tool, please refer to http://community.citilabs.com/t/how-to-start-cube-cluster-slave-nodes/362/1

Step 3: Open model

On the main computer, open the model catalog from the shared drive. This will make sure the model script references the shared drive location.

Step 4: Update Cluster keys

Update any keys related to Cluster settings. Most models will have an input key which allows the user to set the total number of processing cores to be used for a run. Update this key value to match the process list identified in Step 2. If the maximum number in the process list is 30, set the scenario key value to 30.

Step 5: Start model run

Start the model run from the scenario manager on the main computer.

Step 6: Close Cluster nodes

After the model run is done, Cluster nodes can be closed on each individual computer using the Cluster node management tool.
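The process-list bookkeeping from Step 2 (sequential, non-overlapping ID ranges across machines) can be sketched in a few lines. The machine names below are placeholders; the core counts are the 16/8/6 example from the text.

```python
# Sketch of the Step 2 bookkeeping: given core counts per machine, produce
# sequential, non-overlapping process-ID lists. Machine names are placeholders.

def process_lists(core_counts):
    """Map each machine to a contiguous block of process IDs, starting at 1."""
    lists, start = {}, 1
    for machine, cores in core_counts.items():
        lists[machine] = list(range(start, start + cores))
        start += cores
    return lists

ids = process_lists({"main": 16, "node2": 8, "node3": 6})
# main gets 1-16, node2 gets 17-24, node3 gets 25-30; the maximum ID (30)
# is the value to use for the scenario key in Step 4.
```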
OPCFW_CODE
external sd problem

Posted 29 December 2012 - 09:04 PM

Posted 17 January 2013 - 06:46 PM

Come on guys, I can't be the only person with this problem. Hey guys, I have tried to access my external SD drive and can't seem to read or write anything to it. I've tried moving apps to it but they disappear. Has anyone else had this problem? Can someone steer me in the right direction? It's really getting on my nerves.

Posted 26 January 2013 - 07:46 PM

First, I bought my Ampe A10 deluxe, single core, from Igogo. Strange functions: if I install a program in internal memory, it shows the direct access icon but the application doesn't exist! When trying with an external SD, I can't manage to install the app to that external SD to check if this behaviour changes in that case. In both memories (internal and external SD) I'm able to write data and to keep it with no losses. My final and fatal problem is that a few days later it refused to start; it gets stuck on the 3-color letters with a flashing white light that doesn't stop. I'm able to switch it off by pressing the power button and start it into emergency mode. I'll try to put a new firmware on to see if it works fine... or I will downgrade it from its 4.0.4 (it came with it, with a Nov 2012 compilation date) to an earlier firmware. My fear is that maybe the problem is a faulty internal memory... I'm also contacting Igogo to see if it's worth the cost of sending it in to claim the 1-year warranty (paying for sending and resending) :-( Wish I helped you a little...

Posted 26 January 2013 - 07:48 PM

Posted 30 January 2013 - 04:48 PM

Posted 10 February 2013 - 04:19 AM

I put the new firmware on, and it started as new (you know it takes a little longer). It works fine now!!! When there is no micro SD inserted, the "memory" to move apps to is the internal one (supposedly 16GB), and when a micro SD is inserted, it says "ext SD". So this is maybe "normal": not perfect, but normal.
And the good news is that the apps no longer disappear! I think wifi reception is better than before the reflashing! Remember that if you screw up your tablet (by your own hand, or by a bad factory installation...) and it is able to start into rescue mode (hold Vol Up + On till it starts), you will be able to connect it to your Windows PC and use the suite included in the firmware file to reload the firmware. Good luck, people.
OPCFW_CODE
Usage of Filters and Ultrafilters

I don't know why we need the concept of filters and ultrafilters. They just seem like nothing, and I don't know where to use them. Can you tell me where we use those concepts?

It's used in conjunction with Zorn's lemma and hence all of mathematics.

Try looking at Goldblatt's book on nonstandard analysis for a dope application of ultrafilters.

Filters are very useful in set theory, in logic and model theory, and they have their applications to analysis and to topology as well. Not to mention that model theory has its applications to algebraic geometry and algebra as well. All in all, this makes filters very useful.

We can characterize continuity using filters and ultrafilters. We can characterize compactness using ultrafilters. We can construct mathematical structures and prove unprovability and inexpressibility using ultrafilters. We can use ultrafilters to do some sort of amalgamation of structures. This way we can extract properties which happen "almost everywhere" in a uniform way.

Filters can be generated from [finitely-additive] measures. This means that in some aspects filters are measures, and sometimes you are dealing with sets which are too large to have real-valued measures in any effective way. There filters make a much better measurement of sets. (E.g. the club filter on cardinals larger than the continuum.)

Generally, filters allow us to say when a set is "large enough for our purposes". Then we can ask whether or not things happen on large sets. This is important because in mathematics we often wish to iron out the small pathologies, and in order to do so we need to know that they only occur on inconsequential sets and that they don't occur on large sets. Essentially this is what is common to all the above uses of filters, and probably all the uses of filters.

In model theory, one can use ultrafilters to build new models from old ones. I'll draw an example here, but you can find many others in model theory books.
Let us construct a non-standard model of Peano arithmetic, non-standard meaning that it doesn't "look like" the set $\mathbb N$ but still satisfies all the axioms of Peano arithmetic. Consider an ultrafilter $\mathcal U$ on $\mathbb N$ which contains all cofinite subsets of $\mathbb N$. Now consider the structure $\mathcal M$ on the base set $\mathbb{N}^{\mathbb N} / \mathcal U$, the quotient of the set of all sequences of natural numbers by the relation identifying two sequences when they agree on a set in $\mathcal U$, i.e. $(a_n) \sim (b_n)$ iff $\{n : a_n = b_n\} \in \mathcal U$. Take $0^{\mathcal M}$ to be the equivalence class of $(0,0,\ldots)$, $+^{\mathcal M}$ to be component-wise addition, and $\cdot^{\mathcal M}$ component-wise multiplication. You can check that this satisfies all the Peano axioms. Now one can check that this isn't isomorphic to the usual $\mathbb N$, because here the sequence $(0,1,2,3,4,\ldots)$ (or more accurately, its equivalence class within the quotient) is not $0$ and is not any successor of $0$, whereas in $\mathbb N$ every integer is either $0$ or some successor of $0$. Getting this kind of result proves, for example, that some formulae are not true or not expressible in first-order logic. This is an illustration of item number 3 in Asaf's thorough answer!
STACK_EXCHANGE
Livewire provides the best cloud computing training in Chennai. The cloud is the provision of on-demand computing resources, from applications to data centers, over the internet on a pay-for-use basis. The course on cloud computing is intended to support students and professionals to become enabled in the cloud and plunge into a career in cloud computing. It will provide you with knowledge of different clouds, such as public, private and hybrid, and of different services such as Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS).

Cloud Computing Courses In Chennai

Life has changed so much after "cloud technology" gave us unlimited space to store data, especially after AWS cloud computing was opened up to even small-scale businesses at affordable prices. We have numerous reasons why LIVEWIRE Vadapalani is the best for Cloud Computing Courses In Chennai:
- Amazing experienced faculties
- Each faculty passes 5 rounds of interviews
- Faculty have cleared a certification exam on the cloud
- Studying ambience
- Separate lab sessions
- Separate classroom sessions
- Online video sessions
- Separate app access for students that can be downloaded from the Play Store
- Internationally valid certification
- Access to videos and much more
- Easily reachable location: just opposite Forum Mall
- Placement assistance for every student of LIVEWIRE
- Affordable course fee
- NASSCOM, NSDC, Skill India approved training centre

How Cloud Computing Impacts the IT industry
- Reduced IT costs
- Collaboration efficiency
- Business continuity
- Automatic updates etc.

Why should you learn Cloud Computing?

The future of data storage lies in cloud computing. It is very futuristic and promises a stable career, especially when all of the IT industry is moving to the cloud platform. From everyday mail to national security, all data is stored using cloud technology.
Now such an important technology has to be learned from the best cloud computing training institute in Chennai. LIVEWIRE would be one of the finest choices for you to learn the best Cloud Computing Courses In Chennai.

Who Must Learn Cloud Computing?

Anybody who wants to get into data storage and retrieval can consider learning cloud computing. Freshers who are just out of college can also learn it. A working professional who wants to upskill or reskill in his area of interest should consider the cloud. For working professionals who are working in obsolete technologies like manual testing, Selenium and others, it's a must to upskill in this beautiful technology, and to do so from the best Cloud Computing Courses In Chennai.
- Introduction to Cloud Computing
- Creating Instances EC2, EBS
- Storage service and content delivery
- VPC and Networking Services
- Scaling and Load distribution
- Identity and Access Management Techniques
- Cloud architecture and Design
- AWS Risk and Compliance
- RDS and REDSHIFT Services
- Security on AWS
- Application Integration services - SNS, SQS, SWF
- Architecture best practice
OPCFW_CODE
Problem with USB driver on Windows 7 installed in VirtualBox

Hello, my computer is running Kubuntu and I recently installed Windows 7 in VirtualBox on Linux (Oracle VM VirtualBox Manager version 4.x). I want to be able to read data off a USB stick from Windows, but I have some problems with that, and I think the problem comes from Windows and not from VirtualBox or Linux, but I'm not sure. Indeed, from VirtualBox I am able to select the USB stick. However, the USB stick doesn't appear in Computer, where you see the local disk and CD drive. Under Devices and Printers, there is a yellow triangular sign with an exclamation mark next to "Mass Storage Device". I launched the troubleshooter, which gave the following information: "There is a problem with the driver for USB Mass Storage Device. Reinstalling the driver might fix this problem." I clicked on the "Apply this fix" button, but it doesn't solve the problem, as the next message I get is: "Troubleshooting was unable to automatically fix all of the issues found. You can find more details below: USB Mass Storage Device has a driver problem. Not fixed."

Next thing I tried: under the USB Mass Storage Device Properties window I clicked on "Update Driver". I then get "Search automatically for updated driver software", and finally, when the search has finished, I get "Windows has determined the driver software for your device is up to date" (this is obviously not the case). Under the same USB Mass Storage Device Properties window I also clicked on the "Uninstall" button, took out the USB stick and put it back again, and got the messages "Installing device software" and then "Device driver software was not successfully installed". I really don't know what to do anymore. I also made all the updates for Windows 7 and restarted it to see if it makes any difference, but no, it doesn't. I also tried with another USB stick and my external hard drive, but I get the same problem.
A multinational information technology company based in India, Infosys Limited offers business consulting, IT, and outsourcing services. Founded in Pune and headquartered in Bangalore, Infosys is India's second-largest IT company. Many students dream of working at Infosys.

Following are the ways Infosys hires:
On Campus Drives
Off Campus Drives
Competitions like HackWithInfy and InfyTQ

These are the most common jobs Infosys hires for:
System Engineer
Specialist Programmer
Digital Specialist Engineer

Candidates who are eligible for the Infosys recruitment process will go through the following three rounds:
- Online Assessment Test: It is divided into sections with time limits and cutoffs for each. It involves questions to test your reasoning, mathematical and verbal ability, pseudocode, and puzzle-solving skills.
- Technical Interview: Generally, technical interview questions are based on your resume: your field of specialization, projects you've worked on, obstacles you've faced, and how you overcame them. You will also generally be asked about computer science concepts (like data structures, operating systems, computer networks, DBMS, OOPS). During this round, your problem-solving skills will also be tested. If you have done internships, be ready to answer questions about them, the role you played, and what you did.
- HR Round: If you pass the technical interview, you will be invited to the HR interview round. In this meeting, topics such as notice periods, salaries, designations, locations, shift timings, etc. are likely to be discussed. You could also be asked common questions about your background, education, hobbies, and even your view of life, as well as strengths, weaknesses, etc.

Why this sheet? If you are preparing for the Infosys interview, this sheet is just for you. It is a collection of almost all coding interview questions asked historically at Infosys.
Questions in this sheet are grouped difficulty-wise and cover all major DSA topics like arrays, strings, graphs, trees, linked lists, etc.

Infosys InfyTQ: Infosys came up with the InfyTQ certification exam. You must be curious to know what the InfyTQ exam is! InfyTQ is a certification examination that tests your industry readiness by analyzing your knowledge of programming and databases. The best part about InfyTQ is that upon clearing the examination, you get an opportunity to be interviewed directly for a job at Infosys.

Infosys HackWithInfy: HackWithInfy is a competitive coding competition for second-year to pre-final-year students across India. In this competition, participants have to solve coding questions in a given time, and top performers get a chance to interview with Infosys.

Aptitude Ability: Aptitude is nothing to be afraid of during the placement process, but it should be given attention because it is the screening test, and if you don't get through it, you will be out of the placement process. So here we have 6 sets to practice your aptitude skills.

Verbal Ability: To check your proficiency in English, they will test your grammar knowledge, so we prepared a set of 70 questions that you can go through.

- Database & Management System: A database is a collection of inter-related data which helps in the efficient retrieval, insertion, and deletion of data from the database and organizes the data in the form of tables, views, schemas, reports, etc.
- Computer Networks: An interconnection of multiple devices, also known as hosts, that are connected using multiple paths for the purpose of sending/receiving data or media.
- OOPS Concepts: As the name suggests, Object-Oriented Programming or OOPs refers to languages that use objects in programming.
Object-oriented programming aims to implement real-world concepts such as inheritance, data hiding, and polymorphism in programming.

Data Structures & Algorithms:
- A data structure is a group of data elements that provides an efficient way to store and perform different actions on data. It is a particular way of organizing data in a computer so that it can be used effectively; the idea is to reduce the space and time complexity of different tasks.
- The word "algorithm" means a set of rules to be followed in calculations or other problem-solving operations, or a procedure for solving a mathematical problem in a finite number of steps, frequently involving recursive operations.

You will need DSA to crack the coding rounds of Infosys. Here we have collected the most-asked coding problems in Infosys interviews, categorized into three parts: Easy, Medium, and Hard.

Puzzles are one way to check your problem-solving skills. These are tricky questions that make you think logically. Try to solve these 20 most popular puzzles asked in interviews.
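For a flavor of the "Easy" bucket, here is a classic array question of the kind such sheets collect (a generic illustration, not an official Infosys question):

```python
# Illustrative "Easy"-level array problem: return the indices of two
# numbers that add up to a target, in one pass with a hash map (O(n) time).
def two_sum(nums, target):
    seen = {}  # value -> index where we saw it
    for i, x in enumerate(nums):
        if target - x in seen:
            return [seen[target - x], i]
        seen[x] = i
    return None

print(two_sum([2, 7, 11, 15], 9))  # [0, 1]
```

The hash-map trick trades O(n) extra space for a single pass, which is exactly the kind of time/space reasoning the technical round probes.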
Unity/C# - Is it possible to connect an Azure (cloud) database to a Unity game via EntityFramework? I have a Unity game that requires a cloud database (where you can connect to a database from inside the game). It doesn't need to do anything crazy. Mostly just reading and writing simple columns in a table, and maybe a simple login. Is this possible to do in Unity using the Entity Framework? If not, are there any workarounds? (e.g. using MySQL instead) Please let me know. Thanks, Benji

Here is a workaround using Firebase. The basic steps are: add the Firebase Unity SDK to the project, create a Firebase project in the Firebase console, and download the config file from the Firebase console. The file for Unity3D might be one or both of: GoogleService-Info.plist (for iOS) and google-services.json (for Android). Then you can access the data with a simple snippet like this:

using Firebase;
using Firebase.Database;
using Firebase.Unity.Editor;

public class MyScript : MonoBehaviour {
    void Start() {
        // Set this before calling into the realtime database.
        FirebaseApp.DefaultInstance.SetEditorDatabaseUrl("https://xyz.firebaseio.com/");
        // Get the root reference location of the database.
        DatabaseReference reference = FirebaseDatabase.DefaultInstance.RootReference;
    }
}

Writing data to the database:

private void writeNewUser(string userId, string name, string email) {
    User user = new User(name, email);
    string json = JsonUtility.ToJson(user);
    reference.Child("users").Child(userId).SetRawJsonValueAsync(json);
}

Reading data:

FirebaseDatabase.DefaultInstance.GetReference("users")
    .GetValueAsync().ContinueWith(task => {
        if (task.IsFaulted) {
            // Handle the error...
        } else if (task.IsCompleted) {
            // Do something with snapshot...
        }
    });

What is even cooler is that Firebase supports user authentication (i.e. logins) and more, which is very handy for Unity3D developers.
A tutorial on how to link Unity3D with Firebase is here; though it uses Firebase messaging rather than the Firebase database as its example, it might be useful to get you started.

Thanks for the answers, and yes, Firebase is a great solution if you need real-time/live data (like player positions in a multiplayer online game). However, there's a better (free) approach if all you need is simple stuff (like saving a player's high score, and displaying it). At least that's all I really needed, so here's what I did:

Step 1) Create your database (Azure, or whatever).
Step 2) Make a website/app, host it, and connect your DB.
Step 3) Add API "endpoints" to your app (which interact with your DB).
Step 4) Use Unity's "WWW" class to interact (make RESTful GETs/POSTs) with your endpoints.

And that's it! Works like a CHARM, for me! (Pretty fast, too!)

And how do you prevent someone from decompiling your app to get its API credentials, such as your app secret? With those, they would be able to call your APIs and possibly steal data. Firebase has a security model built in to support validation rules with 3rd-party auth that lets you completely control what users have access to in different parts of your database.

Firebase developer here. Thanks for the plug, David. You can find more documentation, including a guide for working with the Firebase Realtime Database and Unity, here.
Ben Nadel documents a strange MongoDB error that he was running into when using the post-increment operator with the Java 3.9.1 driver inside Lucee CFML: java.lang.IncompatibleClassChangeError – Class org.bson.Document does not implement the requested interface lucee.runtime.type.Collection….

SERP stands for Search Engine Results Page. serpstack is an API that queries the result page of search engines and gives you a clean JSON response. Search engines like Google used to have a straightforward result listing, which made scraping them a whole lot easier. Now there are videos, images, audio, definition pages, and so much more. This makes scraping the modern search engines a nightmare. There's also the dreaded captcha wall. serpstack makes the problems above non-existent: it queries search engines (currently Google) and returns the search results as a clear, easy-to-handle JSON response, with each result type listed on the results page highlighted by the API. Create an account at serpstack; you can get started for free. After you sign up, copy your API key and replace "ACCESS_KEY" below with the new token.

By Chris on Code

In 2019, DevOps still remains something of a codeword: a sphere reserved for developers trained in writing complicated scripts for tools only they know how to use. Tools whose purpose is to make your life easier with automation, but which somehow: take weeks to configure and launch, require a designated developer to oversee, and cannot be easily modified. Buddy is a CI/CD tool that doesn't require DevOps experience and can be used by beginner and expert developers alike. We did that by replacing scripts with preconfigured actions (builds, tests, deployments, etc.), packing the whole thing in a clear and telling GUI, and making it run deadly fast. Remember the times when setting up remote servers was a chore? Now you can spin up a droplet on DigitalOcean in 55 seconds with 1-click.
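The serpstack query described above can be sketched like this. The endpoint and parameter names follow serpstack's documented pattern at the time of writing; treat them as assumptions and check the current docs:

```python
# Build a serpstack search URL. Replace ACCESS_KEY with your own token.
import urllib.parse


def build_search_url(query, access_key="ACCESS_KEY"):
    # The serpstack search endpoint takes the key and the query as
    # plain query-string parameters (an assumption from its docs).
    params = urllib.parse.urlencode({"access_key": access_key, "query": query})
    return "http://api.serpstack.com/search?" + params


print(build_search_url("coffee shops"))

# With a valid key, fetching this URL (e.g. via urllib.request.urlopen)
# returns a JSON body whose sections mirror the result types on the page.
```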
This is what Buddy does to

By Dr. Axel Rauschmayer

By Matt Raible

Gatsby is a tool for creating static websites with React. It allows you to pull your data from virtually anywhere: content management systems (CMSs), Markdown files, APIs, and databases. Gatsby leverages GraphQL and webpack to combine your data and React code to generate static files for your website. Netlify is a hosting company for static sites that offers continuous integration, HTML forms, AWS Lambda functions, and even content management. In this tutorial, I'll show you how to use Gatsby to create a blog that integrates with Netlify CMS for content. The app you build will support authoring posts in Markdown and adding/editing posts from your browser or via Git! Finally, I'll show you how to secure a section of your app with Okta.

Ben Nadel looks at how to use a ColdFusion closure to perform a depth-first tree traversal in a way that allows the tree-traversal logic to be reused in a variety of contexts in Lucee CFML. Closures are so freaking powerful!…

Ben Nadel has evolved his understanding of Repositories and Data Access Layers (DAL) over time. While he originally believed these concepts revolved solely around CRUD-type methods, he now takes a more simplified and flexible view of these abstractions….

Ben Nadel incorrectly assumes that the isArray() decision function ensures CFML Array member methods in Lucee CFML. The problem is one of trust: he had too much trust in data that he did not create….

By Danny Markov

We're kicking off 2020 with a list of some of our favorite web dev libraries, frameworks and tools that you should use in your next project. Continue reading on Tutorialzine.

Ben Nadel takes Brad Wood's original Memory Leak Detector ColdBox module and translates it into a ColdFusion component that can be used in his own application, which uses manually-configured Inversion-of-Control (IoC) and Lucee CFML….
Deploying BOSH Release for Windows

This topic describes how to deploy the BOSH Release for Windows to install Windows cells on your Pivotal Cloud Foundry (PCF) deployment. Note: The BOSH Release for Windows is currently in beta. To deploy the BOSH Release for Windows, you must have PCF 1.8 or later deployed to vSphere or Amazon Web Services (AWS).
- If your PCF deployment runs on vSphere, you must build your own stemcell by following the steps in the Building a Windows Stemcell topic before performing the procedures below.
- If your PCF deployment runs on AWS, you can use the stemcell included in the BOSH Release for Windows, but your deployment must be in eu-west-1.

Note: Once your Windows cell is running, you must disable FIPS as a Group Policy setting. If you fail to disable FIPS as a Group Policy setting, Garden Windows will not work.

Ensure that you created a service network during your Ops Manager installation. A service network specifies a CIDR range within which Ops Manager does not provision VMs. You create a service network by selecting a checkbox in the Create Networks section of Ops Manager. See your IaaS-specific topic for configuring Ops Manager from the Installing Pivotal Cloud Foundry topic for more information.

Download all of the BOSH Release for Windows files from Pivotal Network to a single directory.

Prepare to SCP onto your Ops Manager VM.
- For AWS, perform the following steps:
  - In the EC2 instances page of your AWS Console, locate the FQDN of the Ops Manager VM.
  - Locate the ops_mgr.pem private key file you used when installing Ops Manager, and ensure that you have added it to your list of private keys with the following terminal command: $ ssh-add ops_mgr.pem
- For vSphere, perform the following steps:
  - In vCenter, locate the FQDN of the Ops Manager VM.
  - Locate the credentials you used to import the PCF .ovf file into your virtualization system.
In a terminal window, navigate to the directory where you downloaded the BOSH Release for Windows files. For example, if you downloaded the files to the ~/bosh-windows directory, run the following command: $ cd ~/bosh-windows

Use scp to securely copy the Garden Windows release, generate_manifest.rb, and your stemcell file to your Ops Manager VM. Note: For AWS, use the stemcell included in the BOSH Release for Windows. For vSphere, use the stemcell you built in the Building a Windows Stemcell topic. The following example securely copies an AWS stemcell:

$ scp garden-windows-0.0.6.tgz generate_manifest.rb light-bosh-stemcell-0.0.46-aws-xen-hvm-windows2012R2-go_agent.tgz email@example.com:~

For vSphere, enter the password that you set during the .ova deployment into vCenter when prompted.

Follow the steps in the Log into BOSH section of the Advanced Troubleshooting with the BOSH CLI topic to target and log in to the BOSH Director. The steps vary slightly depending on whether your PCF deployment uses internal authentication or an external user store.

After you successfully log in to the BOSH Director, use the bosh upload stemcell YOUR-WINDOWS-STEMCELL.tgz command to upload the Windows stemcell to BOSH. Replace YOUR-WINDOWS-STEMCELL.tgz with the name of your Windows stemcell.

$ bosh upload stemcell YOUR-WINDOWS-STEMCELL.tgz

Note: For AWS, your deployment must be in eu-west-1 to upload the stemcell to BOSH successfully.

Use the bosh download manifest YOUR-PCF-DEPLOYMENT YOUR-PCF-MANIFEST.yml command to download the manifest of your PCF deployment. Replace YOUR-PCF-DEPLOYMENT with the name of your PCF deployment, and YOUR-PCF-MANIFEST.yml with a manifest name to use in later steps.

$ bosh download manifest YOUR-PCF-DEPLOYMENT YOUR-PCF-MANIFEST.yml

Note: You must know the name of your PCF deployment to download the manifest. To retrieve it, run bosh deployments to list your deployments and locate the name of your PCF deployment.
Use the manifest generation script included in the BOSH Release for Windows to generate a manifest for your deployment. You must specify either aws or vsphere depending on your IaaS. The following example uses AWS:

$ ./generate-manifest YOUR-PCF-MANIFEST.yml aws > garden-windows.yml

In a text editor, modify the generated manifest to replace the network name with the name of your service network.

Upload the Garden Windows release to BOSH:

$ bosh upload release garden-windows-y-x.tgz

Deploy Garden Windows:

$ bosh -d garden-windows.yml deploy
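If you prefer the command line to a text editor, the network-name edit above can be done with sed. The "default" network name here is an assumption; check what your generate-manifest run actually emitted:

```shell
# Stand-in for the generated manifest, so this sketch is self-contained.
printf 'networks:\n- name: default\n' > garden-windows.yml

# Swap the placeholder network name for your service network.
sed -i.bak 's/^- name: default$/- name: my-service-network/' garden-windows.yml

cat garden-windows.yml
```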
var formula = (function() {
    // Naive pluralizer; the original relied on an external pluralize()
    // helper, so this stand-in just appends an "s".
    function pluralize(word) {
        return word + 's';
    }

    var formulas = {
        'profession profession': theOneAndTwo,
        'profession noun': theOneAndTwo,
        'profession city': theOneOfTwo,
        'number noun': theOneTwos,
        'number profession': theOneTwos,
        'noun noun': theOneAndTwo,
        'describer place': theOneTwo,
        'describer famousPerson': theOneTwo,
        'city noun': theOneTwo,   // was listed twice; the duplicate key is removed
        'city place': theOneTwo,
        'city': theOne,
        'noun': theOne,
        'place': theOne,
        'profession': theOne
    };

    function theOneAndTwo(pubWords) {
        return 'The ' + pubWords[0] + ' and ' + pubWords[1];
    }
    function theOneOfTwo(pubWords) {
        return 'The ' + pubWords[0] + ' of ' + pubWords[1];
    }
    function theOneTwos(pubWords) {
        return 'The ' + pubWords[0] + ' ' + pluralize(pubWords[1]);
    }
    function theOneTwo(pubWords) {
        return 'The ' + pubWords[0] + ' ' + pubWords[1];
    }
    function theOne(pubWords) {
        return 'The ' + pubWords[0];
    }

    return {
        // e.g. makeName('profession noun', ['King', 'Anchor']) -> 'The King and Anchor'
        makeName: function(formula, pubWords) {
            return formulas[formula](pubWords);
        },
        // Pick a random formula key.
        random: function() {
            var keys = Object.keys(formulas);
            var index = Math.floor(Math.random() * keys.length);
            return keys[index];
        }
    };
}())
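Usage of the makeName/random API looks like this. The snippet below uses a trimmed two-formula copy of the module so it stands alone (the word lists are made up):

```javascript
// Trimmed, self-contained copy of the pub-name module for a runnable demo.
var demo = (function() {
    var formulas = {
        'profession noun': function(w) { return 'The ' + w[0] + ' and ' + w[1]; },
        'noun': function(w) { return 'The ' + w[0]; }
    };
    return {
        makeName: function(formula, pubWords) { return formulas[formula](pubWords); },
        random: function() {
            var keys = Object.keys(formulas);
            return keys[Math.floor(Math.random() * keys.length)];
        }
    };
}());

console.log(demo.makeName('profession noun', ['King', 'Anchor'])); // The King and Anchor
console.log(demo.makeName('noun', ['Anchor']));                    // The Anchor
console.log(demo.makeName(demo.random(), ['Fox', 'Hound']));       // a random pub name
```

Pairing random() with makeName() is the intended flow: pick a formula key at random, then feed it matching words.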
Associate Product Manager, Enigma Public

As an Associate Product Manager, you will be creatively dreaming up how Enigma Public can help connect people to public data reflecting the world around them. You sincerely believe in data's potential for good and are particularly excited about public data. You're interested in a position with varied challenges, from dreaming up public data exploration projects (see Enigma's previous Labs projects), to helping the Public engineering team execute on new features for Enigma Public, to finding new ways to spread the word about public data. Ideally, you have experience working with data, either as a researcher or an analyst or in developing applications or data visualizations. If not, you have a sense of which data projects might be most impactful and an ability to communicate clearly. You will be engaging with a wide range of actors within the company and outside of it: government agencies publishing datasets, potential partners in the civic tech ecosystem, and engineering teams building data workflows within Enigma.

- Familiarity with the open data/public data landscape
- Creative mindset as to the power and possibilities of public data
- Ability to keep a project moving apace
- Interest in engaging with data scientists, data journalists and researchers

Enigma Public is a platform through which Enigma provides free, non-commercial access to public data to all, serving a community of data science students, journalists, data enthusiasts and more. This role will engage with these users and help others discover how public data can be useful to them. The opportunities are many, and the role has the possibility to be involved with writing about data, visualizing data, even building one's own projects with public data, in addition to helping the Public engineering team stay aligned and on track. The role would sit within Enigma's Product team, but your background doesn't need to be in product.
We hope that you are excited to help do good with public data.

- Analyzing product metrics & KPIs
- Engaging with users of Enigma Public
- Scouting out new sources of public data
- Writing feature specs for the Enigma Public engineering team
- Thinking up new applications for public data for good

Qualities We Value
- Collaborative outlook
- A strong work ethic and an entrepreneurial drive
- Excited by the unknown
- Well-spoken, whether in print or in front of a potential partner

Enigma is a rapidly growing enterprise technology company based in the Flatiron neighborhood of New York City. We are Series B funded, partnering with some of the best investors in the world: New Enterprise Associates, Two Sigma Ventures, Comcast Ventures, Crosslink Capital, American Express Ventures, and others. Founded in 2012, Enigma was started based on the realization that there was tremendous potential in using public data to understand how the world works, but that it was untapped because the data is highly fragmented and disconnected. Enigma set out to change that by building and organizing one of the largest collections of public data in the world, and our big coming-out party was winning TechCrunch Disrupt's Battlefield in 2013. Our vision has remained steadfast: we want to empower people to interpret, and improve, the world around them. However, we have expanded our ability to realize this vision by combining our massive public data repository with an ecosystem of software and tools designed to link, resolve, enhance and apply data to help global-scale companies take on some of their hardest problems, ranging from preventing money laundering to ensuring patient safety.

We are proud to be an equal opportunity workplace and an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status.
//
// BaseLayer.h
//

#pragma once

#include <vector>  // for std::vector used in updateParams() below

#include "KG.h"
#include "shape_animation_helper.h"

namespace cellophane {
namespace ns_layer {

template <class Derived>
class BaseLayer {

public:
    //--------------------------------------------------------------
    // interface
    //--------------------------------------------------------------
    virtual ~BaseLayer() {}

    void update(const TimeDiff current_time) {
        static_cast<Derived*>(this)->update(current_time);
    }
    void draw() {
        static_cast<Derived*>(this)->draw();
    }
    void touchDown(ofTouchEventArgs& touch) {
        static_cast<Derived*>(this)->touchDown(touch);
    }
    void touchMoved(ofTouchEventArgs& touch) {
        static_cast<Derived*>(this)->touchMoved(touch);
    }
    void touchUp(ofTouchEventArgs& touch) {
        static_cast<Derived*>(this)->touchUp(touch);
    }

    void fade(const TimeDiff timestamp, lib_animation::Types env, float end);
    void fade(const TimeDiff timestamp, lib_animation::Types env, KU::Modes mode);
    void setOrigin(const ofPoint& origin);

protected:
    //--------------------------------------------------------------
    // constants
    //--------------------------------------------------------------
    const int kFadeDuration = 200000;

    //--------------------------------------------------------------
    // protected methods
    //--------------------------------------------------------------
    BaseLayer()
        : alpha_(0.0f), origin_(ofPoint()),
          animation_(new lib_animation::Animation(KG::kParamSize)) {}
    explicit BaseLayer(float alpha)
        : alpha_(alpha), origin_(ofPoint()),
          animation_(new lib_animation::Animation(KG::kParamSize)) {}

    void updateParams(const TimeDiff current_time);
    void updateParam(KG::Params param, float value);

    //--------------------------------------------------------------
    // protected members
    //--------------------------------------------------------------
    float alpha_;
    ofPoint origin_;
    lib_animation::Animation* animation_;

private:
    // noncopyable
    BaseLayer(const BaseLayer&);
    BaseLayer& operator=(const BaseLayer&);
};
//-------------------------------------------------------------- // public methods implementation //-------------------------------------------------------------- template <class Derived> void BaseLayer<Derived>::fade(const TimeDiff timestamp, lib_animation::Types type, float end) { lib_animation::Transition trans = lib_animation::createTransition(timestamp, kFadeDuration, type, alpha_, end); animation_->setTransition(KG::kAlphaParam, trans); } template <class Derived> void BaseLayer<Derived>::fade(const TimeDiff timestamp, lib_animation::Types type, KU::Modes mode) { const float end = static_cast<Derived*>(this)->toLayerAlpha(mode); lib_animation::Transition trans = lib_animation::createTransition(timestamp, kFadeDuration, type, alpha_, end); animation_->setTransition(KG::kAlphaParam, trans); } template <class Derived> void BaseLayer<Derived>::setOrigin(const ofPoint& origin) { origin_ = origin; } //-------------------------------------------------------------- // protected methods implementation //-------------------------------------------------------------- template <class Derived> void BaseLayer<Derived>::updateParams(const TimeDiff current_time) { const std::vector<lib_animation::Motion> motions = animation_->update(current_time); for (int i = 0; i < motions.size(); ++i) { if (motions[i].changed) updateParam(static_cast<KG::Params>(i), motions[i].value); } } template <class Derived> void BaseLayer<Derived>::updateParam(KG::Params param, float value) { if (param == KG::kXParam) { origin_.x = value; } else if (param == KG::kYParam) { origin_.y = value; } else if (param == KG::kAlphaParam) { alpha_ = value; } } } // namespace ns_layer } // namespace cellophane
Americans have been known to rip off British TV comedies from time to time. For example, "All in the Family" was based on the British sitcom "Till Death Us Do Part." And "Sanford and Son" was based on the BBC's "Steptoe and Son." Over the years, however, a handful of British comedies have made their way directly onto American airwaves, the most well-known being "Monty Python's Flying Circus" and "Benny Hill." The result was a phenomenal cult following for these and other British comedy shows, a devotion that is being nurtured by the release of these programs on videocassette. For example, CBS/Fox Video has just added six tapes to its "Brit Wit" series, drawing from the BBC shows "Are You Being Served?" "Black Adder" and "Yes, Prime Minister." The tapes join several other British comedy shows available on video, such as "Fawlty Towers," "Ripping Yarns," "The Two Ronnies" and the aforementioned "Monty Python" and "Benny Hill." The bizarre and somewhat biting humor found in these programs easily crosses cultural gaps, say those involved with the genre. "Like I always say, funny is funny," said John Inman, who plays Mr. Humphries in the long-running BBC series "Are You Being Served?" "When I was in the U.S. in April, I saw an audience (watching an episode of the show) and I was amazed at the absolutely huge response. They laughed and knew exactly what we were talking about, apart from one or two little expressions," Inman said in a phone interview from England. "I had my doubts about 'Yes, Prime Minister' being successful in the U.S. because your system of government is so different than ours," said Nigel Hawthorne, who played Sir Humphrey Appleby in "Yes, Prime Minister." "But the show does seem to have a very good audience in the States," Hawthorne said in a phone interview. "I think if the programs are written in an intuitive way, human foibles are unveiled and . . .
you'll find it funny." In Chicago, much of the affection for these shows was cultivated via public-TV station WTTW-Ch. 11, which has been airing British TV comedies since the mid-1970s, starting with "Monty Python." Over the years, WTTW has regularly aired "Black Adder," "Yes, Prime Minister," "Fawlty Towers" and "Ripping Yarns," among many others. "Are You Being Served?" is airing on WTTW several nights a week through the summer. Other British comedies have garnered strong audiences via syndication and cable TV (reruns of "Benny Hill" are on summer hiatus from WGBO-Ch. 66, where it runs weeknights from 9:30 to 10:30). Here's what's on videocassette:
- "Are You Being Served?" is about the quirky characters working in a British department store. Premiering in 1972, about eight episodes of the show were produced each year through the mid-'80s.
- "Black Adder" is a satire of different periods of British history, including the 16th Century and World War I. Rowan Atkinson stars as the title character in the series that started on the BBC in 1986.
- "Yes, Prime Minister" is a BBC political satire on life at 10 Downing St. The show premiered as "Yes, Minister" in 1978 and ran as its current version until 1988.
- "Monty Python's Flying Circus." There are 22 volumes of tapes culled from this now-classic BBC show.
- "Fawlty Towers": Monty Python alum John Cleese stars as the inept manager of a small British hotel in this slapstick comedy that ran briefly on the BBC in 1975 and 1979.
- "Ripping Yarns" (and its sequels, "More Ripping Yarns" and "Even More Ripping Yarns") is a parody of British adventure tales created by Python veterans Michael Palin and Terry Jones in the late 1970s.
- "Benny Hill": Seven different hour-long tapes of the late British slapstick comedian's show, which premiered in England in 1969, are available.
The Chicago-based Facets Video catalog carries a large collection of British TV comedies, including such little-known programs as "The Young Ones," "Bread," "Butterflies," "Dad's Army" and "Steptoe and Son." Call 312-281-9075 for more information. The Movies Unlimited catalog also carries an extensive listing of British comedies (call 1-800-523-0823), as does Ontario-based BFS Video (1-800-268-3891).
 krusader (re @IrcsomeBot: <frg> Hi, kubuntu 23.04 : any way to get a files explorer having a left panel with ALL computer tree (including NAS, USB keys...) and a place for my favorites shortcuts to folders and of courses tabs on right panel with each opened folders ? Dolphin is pretty but doesn't check all I want)  sudo apt install krusader  and enjoy it [11:56] <BluesKaj> Hi all [13:08] <cparras> - [14:48] <mmikowski> Hi BluesKaj! [14:48] <BluesKaj> hi mmikowski [14:49] <mmikowski> Good too see you. Just thought I'd say good morning, West Coast time :) [14:51] <BluesKaj> yeah still morning here in northern Ontario [15:02] <beadesroches2> bonjour tout le monde [15:04] <BluesKaj> bonjour beadesroches2 [15:23] <Simba> Hello [15:24] <Simba> What Males Kubuntu AWSOME!? [15:28] <Utkojhamela> It is not awsome. It has kde plasma installed as default Desktop Environment thats it. [17:51] <moritz_> Jo [17:52] <moritz_> Hope u all are safe and sound !  /help@join_captcha_bot [19:31] <templer> Hi All, my kubuntu app launcher menu is not opening browsers links is this normal to happen or can I fix it?  4w21  hi  Hello good, a question. and thanks in advance for the help. I have a NAS and I like to see the thumbnails of the videos I have on it before playing the files. Do you know if it is possible to see these thumbnails normally when I access the videos through samba? [23:56] <salapin> Hello. Thank you in advance for your help. I have a NAS and I have multimedia content, among other things videos, when I access my network directory where I have the videos via SMB, I cannot see the thumbnails of these. Can I do something to solve it? I would like to see the thumbnails of the videos [23:59] <salapin> Can any channel user help me?
import os
import time

from filename import Filename

try:
    from slugify import Slugify
except ModuleNotFoundError:
    print('Import error: the slugify package is not installed')


class Renamer:
    def __init__(self, command, items):
        # Skip the first item, since the first argument passed
        # is the executed python script itself.
        self.items = self.create_filename_objects(items[1:])
        self.command = command
        self.converted_name = []

    def execute(self):
        if len(self.command) == 1:
            # A one-character command means: just slugify.
            self.slugify()
            self.print_list()
        else:
            command, text = self.split_command_text()
            if command == 'p':
                self.prefix(text)
            else:
                self.suffix(text)
            self.print_list()
        self.rename_items()

    def slugify(self):
        for i in self.items:
            if not i.slug_path:
                i.set_slug(command='slug', text='')

    def prefix(self, pre_text):
        for i in self.items:
            i.set_slug(command='prefix', text=pre_text)

    def suffix(self, suff_text):
        for i in self.items:
            i.set_slug(command='suffix', text=suff_text)

    def split_command_text(self):
        # Return the command letter and the remaining text. (Returning
        # the whole token list, as before, broke the two-value
        # unpacking in execute() whenever the text had several words.)
        parts = [c for c in self.command.split(' ') if c]
        return parts[0], ' '.join(parts[1:])

    def save_names(self):
        pass

    def create_filename_objects(self, items):
        return [Filename(i) for i in items]

    def print_list(self):
        for i in self.items:
            print(i.slug_path)

    def rename_items(self):
        for i in self.items:
            try:
                os.rename(i.id, os.path.join(os.getcwd(), i.slug_path))
                i.id = os.path.join(os.getcwd(), i.slug_path)
            except OSError:
                print('Rename error')
                print(i.id, os.path.join(os.getcwd(), i.slug_path))
                time.sleep(5)

    def set_command(self, command):
        self.command = command
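The core slug-and-rename step can be sketched on its own. The real script delegates slug logic to a Filename class (not shown here), so the toy slug() below is an illustrative stand-in:

```python
# Self-contained sketch of slugifying a filename and renaming the file.
import os
import re
import tempfile


def slug(name):
    base, dot, ext = name.rpartition('.')
    stem = base if dot else ext  # handle names with no extension
    clean = re.sub(r'[^a-z0-9]+', '-', stem.lower()).strip('-')
    return clean + (dot + ext if dot else '')


workdir = tempfile.mkdtemp()
old_path = os.path.join(workdir, 'My Cool File.txt')
open(old_path, 'w').close()

new_path = os.path.join(workdir, slug('My Cool File.txt'))
os.rename(old_path, new_path)
print(os.path.basename(new_path))  # my-cool-file.txt
```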
- Recognise that the 3-year workplan is no longer a mandatory activity; we have decided that we will continue tracking it, purely as a way to track our progress to normative for FHIR resources and projects (so we know when our project PSS dates are coming up).
- The 3-year plan was revised.

Some slightly even lesser excellent minutes by

22485 - Comments updated for further discussion with the tracker submitter. The ServiceType property needs to be consistent with the Appointment property of the same name, and its cardinality should be updated to allow multiple values. These changes would be covered by the conditions and procedures that are connected to the encounter. Billing would then run from ChargeItems and the ranked discharge diagnosis etc. based on the various jurisdiction-specific business rules. Will take this up again on conference calls when we have more information and attendance with additional EHR knowledge.

14451 - Old tracker from 2018. Suggestion to have people assigned to updates on different resources, so that we don't do updates on the same resource. Motion: No longer seeing any warning related to workflow in the QA report, so closing the issue. Motion by Brian, second by Simone: 7-0-0

20479 - Clarification about subject in Encounter. Need further suggestions from Lloyd and Grahame, as well as EHR developers, to find out how to move forward with this.

22497 - Detailed description of Encounter.serviceProvider references an unknown example. Motion to approve adding the example of colonoskopi. Motion by Brian, second by Christian: 6-0-0

23764 - Resource Encounter - Update Mappings (v2). Pushed to when Cooper and Alex are present.

23837 - Encounter.participant.type name is confusing. Comment added to the tracker: Need to clarify if this is intended to cover roles or types. Currently it is bound to the V3 Participant Type set
(which includes concepts like admitter, escort, primary performer), whereas other similar participant properties contain a role, bound to the participant role value set (which includes concepts like surgeon, dental assistant, urologist, public health nurse). Will seek wider input on which of these concepts is desired to be covered in this case.

17414 - PractitionerRole extension for FHIR STU3
- Brian Postlethwaite to document what standard extensions look like (tracker 17414)

22114 - Make Practitioner.communication consistent with Patient/RelatedPerson.communication
- Brian Postlethwaite will take tracker 22114 to Zulip for further discussion

Insert link to attendance sheet
This topic is to discuss the following lesson:

May I know what you have updated in this lesson compared to the previous article?

What tips or practices do you use to search for root bridges quickly in a network with 290 switches?

In order to find the root bridge, you can issue the show spanning-tree command, and this will show you the root bridge for each VLAN configured on the switch. You'll get something like this:

SW2#show spanning-tree vlan 10

VLAN0010
  Spanning tree enabled protocol ieee
  Root ID    Priority    24586
             Address     5254.001a.935a
             Cost        4
             Port        1 (GigabitEthernet0/0)
             Hello Time  2 sec  Max Age 20 sec  Forward Delay 15 sec

  Bridge ID  Priority    28682  (priority 28672 sys-id-ext 10)
             Address     5254.0015.bc74
             Hello Time  2 sec  Max Age 20 sec  Forward Delay 15 sec
             Aging Time  300 sec

Interface           Role Sts Cost      Prio.Nbr Type
------------------- ---- --- --------- -------- --------------------------------
Gi0/0               Root FWD 4         128.1    P2p
Gi0/1               Desg FWD 4         128.2    P2p

From the above output, you can see that for VLAN 10, the root bridge has an address of 5254.001a.935a. You also know which is the root port. Now if you know nothing else about your network of 290 switches, it may be hard to find that particular switch. What you can do is take a look at the cost to the root bridge, and this will give you an idea of how far away it is. In this case, the cost is 4. For more info on how to interpret these costs, take a look at the NetworkLessons note on STP cost calculation methods. Once you know the cost and the root port from the output, you can determine how far away the root bridge is, and via what path you can get there. You can trace your way back, from switch to switch, until you reach it. If you have no other information, that would be the way to find the root bridge. Now having said that, even if you have 290 switches, it is unlikely that the root bridge for any particular VLAN would be very far away from any switch.
An STP tree should never be more than six or seven switches in diameter, and a network of 290 switches should be subdivided into several network segments anyway. A purely Layer 2 network of so many switches would not be functional. But in most networks of that size, you would have some sort of monitoring system, so if you learn the MAC address of the root bridge, you can then search for it easily within that monitoring system. And in most such networks, you would manually configure the switches in such a way so that a specific device would become the root bridge, thus any (responsible) administrator should know which switch that is… I hope this has been helpful!
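The trace-back procedure described above is mechanical enough to sketch in code. This toy Python sketch assumes a made-up map of each switch to the neighbor reachable via its root port; the switch names and topology are hypothetical, not from the lesson:

```python
# Toy sketch of "trace your way back along root ports": follow each switch's
# root-port neighbor until a switch has none (i.e. it is the root bridge).
def trace_to_root(start, root_port_neighbor):
    path = [start]
    while root_port_neighbor[path[-1]] is not None:
        path.append(root_port_neighbor[path[-1]])
    return path

# Hypothetical topology: SW2's root port leads to SW1, SW1's to CORE1,
# and CORE1 is the root bridge (it has no root port).
root_port_neighbor = {"SW2": "SW1", "SW1": "CORE1", "CORE1": None}
print(trace_to_root("SW2", root_port_neighbor))  # -> ['SW2', 'SW1', 'CORE1']
```

In practice you would discover each hop's root-port neighbor from show cdp neighbors (or LLDP) rather than from a prebuilt map.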
04-14-2011 11:52 AM

I put the xxx.depot.gz files in /usr/temp and give this as the Source location for depot files. I have even unzipped them so they are xxx.depot files and still no joy. How do I create a depot? It seems to be some sort of magical directory where all sw-commands look for files. I have read and studied a good paper called "Software Depot Package Builder", but it too assumes one already has a depot to begin with. I do not. This is a bare-bones, recently built HPUX 11.00 system with no new software attached, because I cannot find a depot anywhere. I would think that if I FTP'ed a xxx.depot.gz file to the system, I would have a depot file, but they are not recognized as such. Can anyone help answer what I expect is a dumb question once I see an answer?

Solved! Go to Solution.

04-14-2011 10:04 PM

Before using it with swinstall or swcopy, you must uncompress it first:

gunzip /full/path/of/your.depot.gz

The result will be a .depot file in uncompressed form, which should be usable with swinstall or swcopy directly.

Note that the sw* tools will want a _full_ pathname of the depot, since they pass the actual work to the swagentd daemon, which does not have the same current working directory as your login session.

So, to list the contents of the depot:

swlist -s /full/path/of/your.depot

(you can add options like "-l product", "-l fileset" or "-l file" to get more detailed listings)

To install software from a depot:

swinstall -s /full/path/of/your.depot

To install *everything* from a depot:

swinstall -s /full/path/of/your.depot \*

You should also know that a depot can have two forms: a file/tape depot, or a directory depot. Before CD-ROMs, HP-UX software was distributed on magnetic tapes. The .depot file format is essentially a tape image file. A file depot can be used for local installations and swcopy operations only: it cannot be accessed remotely from another HP-UX system.
A file depot can be converted into a directory depot with swcopy:

swcopy -s /full/path/of/your.depot \* @ /full/path/of/directory_depot

If a directory depot does not exist, swcopy will automatically create it. You can add the contents of multiple depot files to a single directory depot, and then install everything with only one swinstall operation (and with only one reboot, if one is necessary).

A directory depot can also be accessed remotely, if its swacl permissions allow remote access (and by default, they usually do). To install software from a remote directory depot, just add "remote_hostname:" to the source specification:

swinstall -s remote_hostname:/full/path/of/directory_depot

By the way, you don't need a graphics card on your system to use SAM or any X Window System applications: if you have another system that can run X server software (e.g. a Windows workstation with ReflectionX or free Xming), you can use it as your display. SAM and the swinstall tools also have a text-based menu interface: if your DISPLAY environment variable is not set and your TERM variable correctly specifies the type of your terminal or terminal emulation, the swinstall tools and SAM should automatically switch to using an ASCII-art based user interface.

04-16-2011 02:25 AM

Why would you use SAM when you can use the swinstall CLI?

>I have even unzipped them so they are xxx.depot files

What does "file xxx.depot" show? It should show "tar" for depots.

>How do I create a depot?

You don't. They should already be depots.

>paper called "Software Depot Package Builder", but it too assumes one already has a depot to begin with. I do not.

SPB and swpackage create depots. swcopy copies them.

>if I FTP'ed a xxx.depot.gz file to the system, I would have a depot file

Yes, or a gzipped one.

>MK: want a _full_ pathname of the depot, since they pass the actual work to the swagentd daemon

I've always assumed that if it wasn't absolute, it was the name of a machine. Not because of swagentd(1m).
>the swinstall tools also have a text-based menu interface

Better to just use the CLI.

04-19-2011 06:31 AM

I had no depots on my system. In other words, swlist -d depot returned none. I tried swcopy to no avail. It kept saying there were no depot files at my source. (I thought swcopy was for copying existing depot files only.) Anyway, I used swpackage to create a depot and it succeeded. I thank all of you for responding.

04-19-2011 05:46 PM

swcopy -x enforce_dependencies=false -s /tmp/PHXX_1234 \* @ /var/tmp/mydepot

The directory (/var/tmp/mydepot) is created automatically and the depot file is then added. You then repeat for additional depots. This is how you merge many patches together to form a single installation depot. Don't forget the \* (or "*") that specifies using everything in the depot.

04-23-2011 08:57 AM

>Anyway I used swpackage to create a depot and it succeeded.

Created from what? You should only use swpackage to create tape depots or depots from raw files.

04-25-2011 09:37 AM
How can I achieve PIL colorize functionality?

Using PIL, I can transform an image's color by first converting it to grayscale and then applying the colorize transform. Is there a way to do the same with scikit-image? The difference with e.g. the question at Color rotation in HSV using scikit-image is that there the black stays black, while with PIL's colorize function I can define where I want both black and white mapped to.

I think you want something like this to avoid any dependency on PIL/Pillow:

#!/usr/bin/env python3
import numpy as np
from PIL import Image

def colorize(im, black, white):
    """Do equivalent of PIL's "colorize()" function"""
    # Pick up low and high end of the ranges for R, G and B
    Rlo, Glo, Blo = black
    Rhi, Ghi, Bhi = white
    # Map the 0-255 grayscale range onto each channel's low..high range.
    # (The original pre-allocated zero arrays with dtype=np.float here;
    # that was redundant, and np.float has been removed from modern NumPy.)
    R = im/255 * (Rhi-Rlo) + Rlo
    G = im/255 * (Ghi-Glo) + Glo
    B = im/255 * (Bhi-Blo) + Blo
    return np.dstack((R, G, B)).astype(np.uint8)

# Create black-white left-right gradient image, 256 pixels wide and 100 pixels tall
grad = np.repeat(np.arange(256, dtype=np.uint8).reshape(1, -1), 100, axis=0)
Image.fromarray(grad).save('start.png')

# Colorize from green to magenta
result = colorize(grad, [0, 255, 0], [255, 0, 255])

# Save result - using PIL because I don't know skimage that well
Image.fromarray(result).save('result.png')

That will turn this:

into this:

Note that this is the equivalent of ImageMagick's -level-colors BLACK,WHITE operator, which you can do in Terminal like this:

convert input.png -level-colors lime,magenta result.png

That converts this:

into this:

Keywords: Python, PIL, Pillow, image, image processing, colorize, colorise, colourise, colourize, level colors, skimage, scikit-image.

I only used PIL to save a copy of the starting image and to save a copy of the resulting image so I could show both in my answer.
The code will run just fine without both PIL interactions. I marked the answer as a solution since it shows what to do, but it should be pointed out that there are issues with the code if you actually try it on an image opened with skimage (and potentially with PIL; I didn't test that). They are mainly to do with array dimensions being improperly accounted for, but are easy enough to fix. You are most welcome to post any improvements you have as an answer - I'm always happy to learn and be shown better ways :-)
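Following up on the dimension concern in that comment, here is a sketch of a more defensive variant (my own addition, not from the answer): np.interp maps the grayscale range onto each output channel and works on any 2-D array, assuming a uint8 grayscale input in the 0-255 range.

```python
import numpy as np

def colorize2(im, black, white):
    """Map a 0-255 grayscale array onto a black->white color ramp."""
    im = np.asarray(im, dtype=float)
    # One np.interp per channel: 0 maps to the 'black' value, 255 to 'white'.
    channels = [np.interp(im, [0, 255], [lo, hi])
                for lo, hi in zip(black, white)]
    return np.dstack(channels).astype(np.uint8)

# Same gradient test image as in the answer above
grad = np.repeat(np.arange(256, dtype=np.uint8).reshape(1, -1), 100, axis=0)
out = colorize2(grad, [0, 255, 0], [255, 0, 255])  # green -> magenta
print(out.shape)  # -> (100, 256, 3)
```

Because np.interp broadcasts over the whole input array, there is no manual dtype or shape bookkeeping to get wrong.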
'''
(1) Check how many of the data I've scraped before are still there
(2) See if I can find some way to match the missing ones
'''

import eb_passwords
from ebird.api import Client, get_visits
import datetime

api_key = eb_passwords.ebird_api_key
locale = 'zh'
client = Client(api_key, locale)

import sys
import os
import django
from django.conf import settings

sys.path.append(os.path.abspath('ebirdtaiwan'))
from ebirdtaiwan.settings.base import DATABASES, INSTALLED_APPS
settings.configure(DATABASES=DATABASES, INSTALLED_APPS=INSTALLED_APPS)
django.setup()

from fall.models import AutumnChanllengeData

#################################
####          TEST 1         ####
#################################

'''
today = datetime.date.today()
db_data = AutumnChanllengeData.objects.all()

for d in range(1, today.day + 1):
    D = datetime.date(2018, 10, d)
    print('*****************************')
    print(D)

    api_data = client.get_visits('TW', date=D)
    db_checklists = AutumnChanllengeData.objects.filter(
        survey_datetime__day=D.day).values_list('checklist_id', flat=True)

    api_checklists = []
    for data in api_data:
        api_checklists.append(data['subID'])

    n_db_missing = 0
    for cl in api_checklists:
        if cl not in db_checklists:
            print(f'Missing checklist in db: {cl}')
            n_db_missing += 1

    n_api_missing = 0
    for cl in db_checklists:
        if cl not in api_checklists:
            print(f'Missing checklist in api: {cl}')
            n_api_missing += 1

    print(f'Total data in api: {len(api_checklists)}')
    print(f'Total data in DB: {len(db_checklists)}')
    print(f'data in api but missing in db: {n_db_missing}')
    print(f'data in db but missing in api: {n_api_missing}')
'''

'''
Lots of data disappeared from the ebird api's get_visits... why?
(1) The cid is changed?
(2) It is just removed?
Use some other way to check whether it is still there!?
'''

'''
Maybe the maximum length of the returned data is fixed...
Maybe using a smaller region unit and repeating the query can fix the issue...
'''

############################################################
####################       TEST 2       ####################
############################################################

region_codes = {
    'TW-TPE': '台北',
    'TW-TPQ': '新北',
    'TW-TAO': '桃園',
    'TW-HSQ': '新竹',
    'TW-MIA': '苗栗',
    'TW-TXG': '台中',
    'TW-CHA': '彰化',
    'TW-NAN': '南投',
    'TW-YUN': '雲林',
    'TW-CYQ': '嘉義縣',
    'TW-TNN': '台南',
    'TW-KHH': '高雄',
    'TW-PIF': '屏東',
    'TW-TTT': '台東',
    'TW-HUA': '花蓮',  # was '雲林', a duplicate of TW-YUN; TW-HUA is Hualien
    'TW-ILA': '宜蘭',
    'TW-PEN': '澎湖',
    'TW-KIN': '金門',
    'TW-LIE': '連江',
    'TW-CYI': '嘉義市',
    'TW-KEE': '基隆',
}

'''
nd = 0
for k in region_codes:
    data = client.get_visits(k, date=datetime.date(2018, 10, 1))
    nd += len(data)

twdata = client.get_visits('TW', date=datetime.date(2018, 10, 1))
print(f'every county {nd} VS TW {len(twdata)}')

records = get_visits(api_key, 'TW', '2018-10-01', max_results=200)
print(f'another way to get TW data: {len(records)}')
'''

'''
records = get_visits(api_key, 'TW', '2018-10-01', max_results=200)
print(f'2018-10-01: {len(records)}')
records = get_visits(api_key, 'TW', '2018-10-02', max_results=200)
print(f'2018-10-02: {len(records)}')
records = get_visits(api_key, 'TW', '2018-10-03', max_results=200)
print(f'2018-10-03: {len(records)}')
records = get_visits(api_key, 'TW', '2018-10-04', max_results=200)
print(f'2018-10-04: {len(records)}')
records = get_visits(api_key, 'TW', '2018-10-05', max_results=200)
print(f'2018-10-05: {len(records)}')
records = get_visits(api_key, 'TW', '2018-9-01', max_results=200)
print(f'2018-9-01: {len(records)}')
records = get_visits(api_key, 'TW', '2018-9-02', max_results=200)
print(f'2018-9-02: {len(records)}')
records = get_visits(api_key, 'TW', '2018-9-03', max_results=200)
print(f'2018-9-03: {len(records)}')
records = get_visits(api_key, 'TW', '2018-9-04', max_results=200)
print(f'2018-9-04: {len(records)}')
records = get_visits(api_key, 'TW', '2018-9-05', max_results=200)
print(f'2018-9-05: {len(records)}')
records = get_visits(api_key, 'TW', '2018-8-01', max_results=200)
print(f'2018-8-01: {len(records)}')
records = get_visits(api_key, 'TW', '2018-8-02', max_results=200)
print(f'2018-8-02: {len(records)}')
records = get_visits(api_key, 'TW', '2018-8-03', max_results=200)
print(f'2018-8-03: {len(records)}')
records = get_visits(api_key, 'TW', '2018-8-04', max_results=200)
print(f'2018-8-04: {len(records)}')
records = get_visits(api_key, 'TW', '2018-8-05', max_results=200)
print(f'2018-8-05: {len(records)}')
'''

def CompareCountySep():
    for y in [2018, 2019, 2020]:
        for m in [8, 9, 10]:
            for d in [1, 2, 3, 4, 5]:
                D = datetime.date(y, m, d)
                print('*********************')
                print(D)
                nd = 0
                for k in region_codes:
                    data = client.get_visits(k, date=D)
                    nd += len(data)
                all_nd = len(client.get_visits('TW', date=D))
                print(f'sep combine {nd} : no sep {all_nd}')

if __name__ == '__main__':
    CompareCountySep()
This documentation site has been moved. Please refer to its new location in the future. For usage and structure information on the Python interface that builds on top of ROS, check out the Python Demos page. Further documentation of the Python API's functionality can be found on this page. Note that you can check the source code methods' docstrings for information on each method.

End-effector poses are specified from /<robot_name>/ee_gripper_link (a.k.a the 'Body' frame) to /<robot_name>/base_link (a.k.a the 'Space' frame). In the code documentation, this transform is known as T_sb (i.e. the transform that specifies the 'Body' frame 'b' in terms of the 'Space' frame 's'). In the image above, you can see both of these frames. The X axes are in red, the Y axes are in green, and the Z axes are in blue.

The rotation and translation information is stored in a homogeneous transformation matrix. In a homogeneous transformation matrix, the first three rows and three columns define a 3-dimensional rotation matrix that describes the orientation of the 'Body' frame with respect to the 'Space' frame. The first three rows and the fourth column of the matrix represent the translational position (i.e. xyz) of the 'Body' frame with respect to the 'Space' frame. The fourth row of the matrix is always [0 0 0 1] for matrix multiplication purposes.

You will see two other homogeneous transformation matrices in the code: T_sd and T_sy. T_sd defines the desired end-effector pose with respect to the 'Space' frame. This transformation is used in methods like set_ee_pose_matrix, where a single desired pose is to be solved for. T_sy is a transform from the 'Body' frame to a virtual frame with the exact same x, y, z, roll, and pitch as the 'Space' frame. However, it contains the 'yaw' of the 'Body' frame.
Thus, if the end-effector is located at xyz = [0.2, 0.2, 0.2] with respect to the 'Space' frame, this converts to xyz = [0.2828, 0, 0.2] with respect to the virtual frame of the T_sy transformation. This convention helps simplify how you think about the relative movement of the end-effector. The method set_ee_cartesian_trajectory uses T_sy to command relative movement of the end-effector using the end-effector's yaw as a basis for its frame of reference.

The Python API uses four different timing parameters to shape the time profile of movements. The first two parameters are used to determine the time profile of the arm when completing moves from one pose to another. These can be set in the constructor of the object, or by using the

- moving_time - duration in seconds it should take for all joints in the arm to complete one move.
- accel_time - duration in seconds it should take for all joints in the arm to accelerate/decelerate to/from max speed.

The second two parameters are used to define the time profile of waypoints within a trajectory. These are used in functions that build trajectories consisting of a series of waypoints, such as

- wp_moving_time - duration in seconds that each waypoint in the trajectory should move.
- wp_accel_time - duration in seconds that each waypoint in the trajectory should be accelerating/decelerating (must be equal to or less than half of wp_moving_time).

set_ee_pose_matrix allows the user to specify a desired pose in the form of the homogeneous transformation matrix, T_sd. This method attempts to solve the inverse kinematics of the arm for the desired pose. If a solution is not found, the method returns False. If the IK problem is solved successfully, each joint's limits are checked against the IK solver's output. If the solution is valid, the list of joint positions is returned; otherwise, False is returned. Note that when an IK solution is found, the method always returns it, even if it exceeds joint limits; in that case it is returned alongside False.
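The [0.2, 0.2, 0.2] to [0.2828, 0, 0.2] conversion above can be reproduced with plain NumPy. The helper below is a hypothetical illustration of the T_sy convention, not part of the Interbotix API:

```python
import numpy as np

def point_in_Tsy(p_space, yaw):
    """Express a point given in the 'Space' frame in the virtual T_sy frame,
    which shares the Space frame's origin, roll, and pitch but takes the
    Body frame's yaw."""
    c, s = np.cos(yaw), np.sin(yaw)
    R_sy = np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])  # rotation of T_sy w.r.t. the Space frame
    return R_sy.T @ np.asarray(p_space)  # inverse rotation maps Space -> T_sy

p = [0.2, 0.2, 0.2]           # end-effector position in the Space frame
yaw = np.arctan2(p[1], p[0])  # Body-frame yaw (45 degrees here)
print(point_in_Tsy(p, yaw))   # approximately [0.2828, 0, 0.2], as in the text
```

Since T_sy only differs from the Space frame by a yaw rotation, the z component is unchanged and the xy components collapse onto T_sy's X-axis.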
Make sure to take this behavior into account when writing your own scripts.

Some users prefer not to think in terms of transformation or rotation matrices. That's where the set_ee_pose_components method comes in handy. In this method, you define T_sd in terms of the components it represents - specifically the x, y, z, roll, pitch, and yaw of the 'Body' frame with respect to the 'Space' frame (where x, y, and z are in meters, and roll, pitch and yaw are in radians). If using an arm with fewer than 6 degrees of freedom, the 'yaw' parameter, even if specified, will always be ignored.

When specifying a desired pose using the methods mentioned above, your arm will move its end-effector to the desired pose along a curved path. This makes it difficult to perform movements that are 'orientation-sensitive' (like carrying a small cup of water without spilling). To get around this, the set_ee_cartesian_trajectory method is provided. This method defines a trajectory using a series of waypoints that the end-effector should follow as it travels from its current pose to the desired pose such that it moves in a straight line. The number of waypoints generated depends on the duration of the trajectory (a.k.a moving_time), along with the period of time between waypoints (a.k.a wp_period). For example, if the whole trajectory should take 2 seconds and the waypoint period is 0.05 seconds, there will be a total of 2/0.05 = 40 waypoints.

Besides these method arguments, there are also wp_moving_time and wp_accel_time. Respectively, these parameters refer to the duration of time it should take for the arm joints to go from one waypoint to the next, and the time they should spend accelerating while doing so. Together, they help to perform smoothing on the trajectory. If the values are too small, the joints will do a good job following the waypoints, but the motion might be very jerky. If the values are too large, the motion will be very smooth, but the joints will not do a good job following the waypoints.
This method accepts relative values only. So if the end-effector is located at xyz = [0.2, 0, 0.2], and then the method is called with 'z=0.3' as the argument, the new pose will be xyz = [0.2, 0, 0.5]. End-effector poses are defined with respect to the virtual frame T_sy as defined above. If you want the end-effector to move 0.3 meters along the X-axis of T_sy, you can call the method with 'x=0.3' as the argument, and it will move to xyz = [0.5828, 0, 0.2] with respect to T_sy. This way, you only have to think in 1 dimension. However, if the end-effector poses were defined in the 'Space' frame, then relative poses would have to be 2-dimensional. For example, the pose equivalent to the one above with respect to the 'Space' frame would have to be defined as xyz = [0.412, 0.412, 0.2].

Tips & Best Practices

The recommended way to control an arm through a series of movements from its Sleep pose is as follows:
- Command the arm to go to its Home pose or any end-effector pose where 'y' is defined as 0 (so that the upper-arm link moves out of its cradle).
- Command the waist joint until the end-effector is pointing in the desired direction.
- Command poses to the end-effector using the set_ee_cartesian_trajectory method as many times as necessary to do a task (pick, place, etc…).
- Repeat the above two steps as necessary.
- Command the arm to its Home pose.
- Command the arm to its Sleep pose.

You can refer to the bartender script to see the above method put into action. If using a 6dof arm, it is also possible to use the set_ee_cartesian_trajectory method to move the end-effector along the 'Y-axis' of T_sy or to perform 'yaw' motion.

Some functions allow you to provide a custom_guess parameter to the IK solver. If you know roughly which joint positions the arm should end up near, providing the solver with them will allow it to find the solution faster and more robustly, and avoid joint flips.
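To double-check the relative-move arithmetic above (0.3 m along T_sy's X-axis taking the Space-frame pose from [0.2, 0.2, 0.2] to [0.412, 0.412, 0.2]), here is a hypothetical NumPy helper, again not part of the Interbotix API:

```python
import numpy as np

def relative_move_in_space(p_space, dx_body):
    """Apply a move of dx_body meters along the X-axis of the virtual T_sy
    frame (whose yaw matches the Body frame) and return the new
    Space-frame position."""
    yaw = np.arctan2(p_space[1], p_space[0])
    c, s = np.cos(yaw), np.sin(yaw)
    R_sy = np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])
    p_sy = R_sy.T @ np.asarray(p_space)  # express current pose in T_sy
    p_sy[0] += dx_body                   # the 1-D move along T_sy's X-axis
    return R_sy @ p_sy                   # convert back to the Space frame

# Matches the doc's numbers: [0.2, 0.2, 0.2] + 0.3 along T_sy's X
print(relative_move_in_space([0.2, 0.2, 0.2], 0.3))  # approx [0.412, 0.412, 0.2]
```

This is exactly why thinking in T_sy is convenient: the move is one scalar there, while the equivalent Space-frame displacement has two nonzero components.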
The end-effector should not be pitched past +/- 89 degrees as that can lead to unintended movements.
I got myself an issue assigned on Friday, but did not start working on it until today. Fret not, though: I've spent a lot of time with Kiwi's codebase today, and I think I have a preliminary understanding of how to write most of the other testcases. I'll try to remember as much of what I did as I can.

The issue I took up is writing automated testcases for one of Kiwi's submodules (issue, for reference). The issue wants us to take the testexecutionstatus submodule and write tests for it. Luckily for me, the file only has a single function. So this is what I'm supposed to do:
- Create a test file for the submodule.
- Choose one function.
- Make a test class for it.
- Initialize all the prerequisite objects.
- Check all the possible use cases of the function.

Which sounded pretty easy at first glance, as it usually does, but isn't, especially if you have no idea what's happening in the code. I used the code written for a similar issue (different module) as reference, and there were plenty of other tests for different modules in the codebase.

While I had understood what to do, I didn't know how to do it. Digging around a little more, I started seeing a common pattern when it came to initializing all the prerequisite objects: they were all being created using factories. I found the factories.py file, and sure enough, this is where the prerequisite objects were being defined. I tried looking for a class for TestExecutionStatus, but there wasn't one. Hmm, couldn't figure out what to do. Then I remembered that the project maintainers had told me to look at the individual tables and also how all of them related to each other. I started looking for something along the lines of TestExecutionStatus and found it quickly, as you can see below. The image tells me that TestExecutionStatus is a foreign key in the TestExecution model, which there exists a factory for. Bingo! Now to writing the actual tests, which I'll write about in the next post.
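The five steps above can be sketched as a plain unittest class. Everything here is a stand-in: FakeStatus and status_factory are hypothetical stubs for the real Django model and the factories.py factory, just to show the shape of the test:

```python
import unittest

class FakeStatus:
    """Hypothetical stand-in for the TestExecutionStatus model."""
    def __init__(self, name, weight):
        self.name, self.weight = name, weight

    def color(self):
        # stand-in for the submodule's single function under test
        return "green" if self.weight > 0 else "red"

def status_factory(name="PASSED", weight=1):
    """Mimics a factories.py factory: builds a prerequisite object."""
    return FakeStatus(name, weight)

class TestExecutionStatusTestCase(unittest.TestCase):
    def setUp(self):
        # step 4: initialize all the prerequisite objects
        self.status = status_factory()

    # step 5: check the possible use cases of the function
    def test_positive_weight_is_green(self):
        self.assertEqual(self.status.color(), "green")

    def test_negative_weight_is_red(self):
        self.assertEqual(status_factory(weight=-1).color(), "red")

# Run the class directly (in Kiwi you'd go through ./manage.py test instead)
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestExecutionStatusTestCase)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The real version would subclass Django's TestCase and pull the prerequisite objects from factories.py, but the structure is the same.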
Other than that, I spent a lot of time yesterday trying to figure out how to run the test file faster. Whenever I want to run my test file I do

./manage.py test --pattern="test_testexecutionstatus.py"

This does three things:
- Create a new test database
- Run all the migrations on it
- Run the test

This works, but each test run takes about 62 seconds, which is a little too long. I run whatever I write pretty often, and this impedes progress and concentration severely, so I was looking for ways to make it faster. First I used the --keepdb flag, which reuses the test database, but there was no significant difference in the total time. Then I tried the -v 3 flag to get verbose output, and it was clear that there were too many migrations, so it was the second step that was causing the delays. If I could make the migrations on the test database persist across test runs, I could significantly reduce the test time. Sadly, I haven't been able to find a solution yet. I've tried switching databases from SQLite to Postgres and turning off the [Migrate](https://docs.djangoproject.com/en/3.1/ref/settings/#std:setting-TEST_MIGRATE) flag, but have either run into problems or it hasn't helped. I'll update if and when I find a fix.

In dotfile news, I have started using ripgrep (using vim-ripgrep) to search for text in files. It has been very useful, and I'd highly recommend it for people trying to parse and understand massive codebases; it's very quick and accurate. If you're interested in why ripgrep is so fast, you can read the blog post by the author himself (haven't read it myself :| it's just too big and complicated). I've also switched out DelimitMate in favour of Auto-Pairs after reading this comparison, and I'm using vim-closetag for auto HTML tag completion.

That about sums it up! Yeah, long day :)
Does the store buffer hold physical or virtual addresses on modern x86?

Modern Intel and AMD chips have large store buffers to buffer stores before commit to the L1 cache. Conceptually, these entries hold the store data and store address. For the address part, do these buffer entries hold virtual or physical addresses, or both?

I think a store uop has to check for a legal address during execution; that means reading the TLB. It seems crazy to discard this and force the store buffer to redo virt->phys as it commits to L1d cache. So I think we can rule out storing only virtual. Also, that would make store-forwarding correctness hard for cases where the same physical page is accessed via two virtual addresses. I think x86 guarantees that your reloads see your own recent stores even in that case. I'm not sure why it would be useful to keep the virtual address once you have the physical; I don't think store-forwarding can probe only on the virtual address, although probing first on virtual and then again on physical to save latency is plausible.

@PeterCordes I think you are correct: both the PA and the VA are stored in the SB. This image seems to confirm this. The PA is filled in after the TLB lookup, which is done in parallel with, among other things, the store-forwarding lookup on the lower 12 bits of the address (the loose-net check). That's why we have 4K aliasing. I just remembered that my question about fallout has these details.

@MargaretBloom: Note that the low 12 bits of the physical address are also the low 12 of the virtual. You don't need to separately store the virtual low 12 in the SB, just check the low bits of loads against the physical addresses in the SB. But good point about 4K aliasing and the loose-net check happening in parallel with TLB access for loads.

@PeterCordes I think the VA is stored in full length and the PA is missing the lower 12 bits (probably the TLB never deals with those bits). But it's the same, they are just not stored twice.

@MargaretBloom: Any idea why the VA page bits would be stored at all?
Maybe I'm missing something, but I don't see any obvious use for them in the SB. @PeterCordes - the patent that Margaret linked is pretty clear that the entire VA is stored, but it isn't exactly clear why. One thing it mentions is that the VA is available and stored 2 cycles earlier than the PA, so maybe it's to enable fast store forwarding (i.e., if the VAs match, the store definitely forwards), falling back to a slower path to handle unusual VA aliasing cases. There is another patent where they talk about fine-net and coarse-net stuff (also the store forwarding spectre paper) which probably clarifies. That patent also mentions one mechanism for split line stores: Additionally, if a store instruction involves storing data to memory locations spanning two cache lines, the MEU signals the data cache memory, and the STD and STA operations are driven to the data cache memory twice, with the data size and the physical address being adjusted accordingly the second time. FWIW this patent is quite old (1997) and refers to an old 32-bit uarch with 12 store buffer entries so things may have changed a lot in the meantime. @PeterCordes I think it enables fast store forwarding, it is used in the check algorithm in my fallout question (where the check is made in three steps: lower 12 bits, upper VA, upper PA). @Margaret: Ok, invlpg is serializing, so yes I guess VA can be sufficient to detect store-forwarding if we require OSes to use it carefully. (Presumably x86 doesn't guarantee what happens if you modify a PTE without doing invlpg. A TLB entry could be evicted and replaced while a store was still in flight, leading to spurious store forwarding for a load, making it effectively access the old physical page, even if it runs after loads that access non-forwarded data from the new physical page.) 
I wondered if the Core 2 TLB-handling "errata" kerfuffle (https://www.realworldtech.com/forum/?threadid=78469&curpostid=78455 / https://www.zdnet.com/article/linus-contradicts-openbsd-founder-on-intel-tlb-issue/) was related to this possibly-surprising effect, but IDK the details of that.
We place strict controls over our employees' access to the data you and your users make available via the Revelation Pets services. The operation of the Revelation Pets services requires that some employees have access to the systems which store and process Customer Data. For example, in order to diagnose a problem you are having with the Revelation Pets services, we may need to access your Customer Data. All our employees and contract personnel are bound to our policies regarding Customer Data, and we treat these issues as matters of the highest importance within our company.

The following security-related audits and certifications are applicable to the Revelation Pets services:

PCI: Revelation Pets is compliant with the Payment Card Industry Data Security Standards. We use a third party (Braintree Payments) to process credit card information securely.

The environment that hosts the Revelation Pets services maintains multiple certifications for its data centers, including ISO 27001 compliance, FedRAMP authorization, PCI certification, and SOC reports. For more information about their certification and compliance, please visit the AWS Security website and the AWS Compliance website.

Deletion of Customer Data

Revelation Pets provides the option for business Owners to delete Customer Data at any time during a subscription term.

Data Encryption In Transit

The Revelation Pets services support secure cipher suites and protocols to encrypt all traffic in transit.

We understand that you rely on the Revelation Pets services to work. We're committed to making Revelation Pets a highly available service that you can count on. Our infrastructure runs on systems that are fault tolerant to failures of individual servers. Our operations team tests disaster-recovery measures regularly and staffs an around-the-clock on-call team to quickly resolve unexpected incidents. Customer Data is stored redundantly at multiple locations in our hosting provider's data centers to ensure availability.
Customer Data and our source code are automatically backed up nightly. The Operations team is alerted in case of a failure with this system.

Incident Management & Response

In the event of a security breach, Revelation Pets will promptly notify you of any unauthorized access to your Customer Data. Revelation Pets has incident management policies and procedures in place to handle such an event.

External Security Audits

We contract with a respected external security firm (Security Metrics), which performs regular audits to monitor our services for new vulnerabilities discovered by the security research community.

Revelation Pets divides its systems into separate networks to better protect more sensitive data. Systems supporting testing and development activities are hosted in a separate network from systems supporting Revelation Pets' production website. Administrative access to systems within the production network is limited to those engineers with a specific business need. Network access to Revelation Pets' production environment from open, public networks (the internet) is restricted. Only a small number of production servers are accessible from the internet. Only those network protocols essential for delivery of Revelation Pets' service to its users are open at Revelation Pets' perimeter. Revelation Pets deploys mitigations against distributed denial of service (DDoS) attacks at its network perimeter. Changes to Revelation Pets' production network configuration are restricted to authorized personnel. Revelation Pets logs, monitors, and audits system calls and has developed rules and automation for system calls that indicate a potential intrusion.
We ramped up remote work before the pandemic to expose inefficiencies in communication and improve the lives of our colleagues who commute through an unreliable public transportation system. We figured we'd have to do it eventually, so we might as well learn how to. Our hypothesis was that cumulative fatigue across the days of the week was non-linear, and removing a couple of days to work remotely would reset us and improve recovery.

We experimented with working the first day of the week remotely. We thought it would be a nice transition: people would spend the weekend at home, work remotely the first day of the week, then work at the office the following days. We noticed improvements in morale and energy levels. We also experimented with working remotely the whole week to learn what the bottlenecks were. This led us to improve our technical writing and information dissemination: we use helpful templates to write clear issues that allow everyone to understand and be on the same page asynchronously, and everyone has access to these issues and knows where we're at.

We haven't changed our stack: we use GitLab for repository management and issue tracking, and Slack for messaging. One addition is Jitsi for video calls, but we're not using it more than before the pandemic. I'll die on that hill. There's work to be done, and nobody messes with people's flow. We aggressively keep calls under control so as not to disturb colleagues. We amortize them by taking notes and dispatching them, and by recording the videos for others to view ad libitum. Ideally, I want colleagues to request a Jitsi session just to chat a bit and have a friendly conversation.

The feedback is that colleagues say they now have long stretches of time to focus. They also say that they miss the interactions. I'd rather have people say they miss the interactions on a call (i.e., demand) than have to take a low-entropy call.
One thing we've been doing from the beginning, before the pandemic, is get our colleagues' feedback: how are you feeling? How is working the first day remotely going for you? What do you like, what don't you like? How was working for one week remotely for you? After the pandemic: How are you today? Are you okay? What do you have trouble with? How can we help?

These questions were important to correct course. Everyone has a different situation. Some people have less space than others, some don't have a dedicated space for work, some struggle with interruptions, and others find it hard to focus. It was important to talk these things through, encourage people, and make it clear that this is an adaptation period and that everyone will find their rhythm.

It was also necessary to set expectations: I returned from Paris in late January and it was clear that it was only a matter of days before cases would pop up. We made the decision to start working remotely when the case count was 17. Those cases were concentrated in a city where two colleagues lived, and they took the train to come in. It was an unacceptable risk and we decided to eliminate it until we knew more about the virus.

Personally, I set my timeline in February: this whole thing would last at the very least one year. Having a "one year" timeline helps completely ignore press releases and government agencies' roller-coaster expectation management, and helps set colleagues' expectations: we're in this for the long run, make your arrangements. This allows them, for example, to relocate somewhere else and be with their family, which they cannot do if they don't know whether they're going to be called back a week later. It also helps with finances, so they can decide not to renew a lease, for example. We also make a clear commitment that the safety of our colleagues is very important, and as long as this is not sorted out, we'll continue to work as we are.
Some colleagues were worried as their friends who worked at other companies were called back after the government started pushing for "reopening", and the point that this wasn't going to happen was important to make. I recently switched to OBS Studio for screen-recording. I used Kazam before to record screens for presentations, but its audio stopped working on Ubuntu 20.04. I applied a patch I found on their bug tracker (replace time.clock() with time.perf_counter()) and it got the sound to work, but it has an issue when the sink is a Bluetooth headset. Happy to answer more questions, as it is working for us very well and we're actually working better now.
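The Kazam patch mentioned above is a one-line swap: time.clock() was removed in Python 3.8, which Ubuntu 20.04 ships by default, so older apps still calling it crash. A minimal compatibility sketch (not Kazam's actual code) looks like this:

```python
import time

# time.clock() was removed in Python 3.8; time.perf_counter() is the
# usual drop-in for performance timing. getattr keeps this shim working
# on older interpreters that still have time.clock.
clock = getattr(time, "clock", time.perf_counter)

start = clock()
total = sum(range(100_000))  # some timed work
elapsed = clock() - start
print(f"work took {elapsed:.6f}s")
```

Applying the upstream patch is just replacing every `time.clock()` call with `time.perf_counter()`.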
Spring Integration with Java CMS

This project covers the detailed design for the Back Office Services (BOS) for the Content Management System (CMS) [url removed, login to view] [url removed, login to view];view=article&id=69:sac-back-office&catid=55:sac-back-office-workspace&Itemid=70

Winning bids will be judged on the following questions (please answer as part of your response): [url removed, login to view] experience with Spring and Active MQ [url removed, login to view] of the approach or similar project to show understanding of requirements [url removed, login to view] of company and size that will be working on this project [url removed, login to view] and SOA experience and production projects in either Java or SOA

The back-office CMS services are implemented using the Spring Integration project at SpringSource. [url removed, login to view]

Deliverable 1 – Web Services – for content and documents
Create a web service with 10 operations. The first service is for Content Creation, including Create, Update, Read, Delete, and Listing (query). The second service is for Documents, with Create, Update, Read, Delete, and Listing (query).

Deliverable 2 – Document metadata grouping plugin
Ability to add metadata to documents which groups the documents together. Metadata should include attribute name/values, group ID, priority, and default. An example implementation would be to upload 5 product images and have them grouped via metadata, then through a Listing call obtain the listing of the images to be included in a website.

Deliverable 3 – Metadata folder taxonomy structure plugin
Create a new structure tree that builds dynamically based on the metadata values. For example, a metadata field named Year that has two documents with metadata filled in as 2002 and 2004 would result in a logical folder structure with Year containing 2 folders, 2002 and 2004. There will be multiple taxonomy trees with rules for children and order of children.
The taxonomy tree is based on site host, with admin privilege roles to add/remove.

Deliverable 4 – LDAP integration for Enterprise CMS
Integrate the hosted CMS with OpenLDAP for our standard security model consisting of 12 standard roles.

Deliverable 5 – Default configurations
Deliver a load script that will load our CMS with default information. This will include 4 default companies for testing and users with the 12 roles. These 4 companies will be set up as hosts and have their own logo and CSS based on login.

13 freelancers have bid on average %project_bid_stats_avg_sub_26% %project_currencyDetails_sign_sub_27% for this job

We possess extensive experience of developing numerous high-end websites and are highly organized and adept at meeting tight deadlines that are so common in this industry. Please see PMB for more details.

We are an information security company with proven experience in application development in the Spring framework, portals, content management, and middleware (Sonic MQ, Active MQ). Please check PMB for more details. -Hemal

I am an expert in website designing, graphic designing, dynamic web development, e-commerce development, web promotion and SEO, with more than 9 years' experience.
So I pretty much lost a day and a half trying to sort out certificates for one of my vCenter clusters. I've read through many VMware KBs and different blogs and articles on how to have a valid certificate for my vCenter servers, but didn't quite find a definitive one. So I'm hoping that what I did earlier in the week will be helpful. It will also serve as a reminder for me of how I did this.

As of March 7, 2019 I am running VCSA 6.0 Build 9291058, and I have a Microsoft CA to generate and issue certificates.

From the VCSA, I logged in as root and ran the certificate manager command. This brings up the vSphere 6.0 Certificate Manager. Select Option 1 to replace the machine SSL certificate with a custom certificate. Follow the prompts to enter credentials for the SSO and vCenter server. After a successful login, select option 1 to generate certificate signing request(s) and key(s) for the machine SSL certificate. You'll then be prompted for a location to save the CSR(s) and private key(s). Enter a desired location and select whether or not to reconfigure certool.cfg. If you already have one configured, you can select no to save some time; otherwise select Y and enter the appropriate values. After the CSR is generated, select Option 2 to exit the certificate manager.

Next up, you'll need to go to your Microsoft CA to request a certificate. Open the CSR in a text editor and copy it into the field, adding any attributes if necessary. After you submit the request, open the .cer in a text editor and copy that into a text editor on your VCSA. Save the file with a meaningful name, e.g. hostname.cer.

Next, you'll need to create a signed chained cert. You'll need the .cer that was just created, as well as any intermediate CA certs and the root cert. For my purposes, I exported the certs from the built-in Windows certificate management console: I selected the intermediate and root certs and exported them.
Be sure to download the cert in Base64 format.

After you have all the certs, copy them to your vCenter, and then you'll need to create a signed chained cert in this order:

<generated CA cert>
<intermediate cert>
<root cert>

cat certnew.cer intermediateCA.cer RootCA.cer > name_of_signing_chain.cer

After creating the chained cert, run the certificate manager again. Select Option 1, enter credentials for SSO, and select Option 2 to import certs. Provide the location of the generated cert and the private key, as well as the chain cert. This will take a minute or two. After the certs are successfully imported, you'll have a valid SSL cert for your vCenter.
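To double-check the concatenation step, here is a throwaway sketch with placeholder files standing in for the real Base64 certs; it only demonstrates that cat preserves the leaf, intermediate, root order:

```shell
# Placeholder contents; in practice these are the Base64 .cer files.
printf 'LEAF\n'         > certnew.cer
printf 'INTERMEDIATE\n' > intermediateCA.cer
printf 'ROOT\n'         > RootCA.cer

# The chain must read leaf, then intermediate(s), then root, top to bottom.
cat certnew.cer intermediateCA.cer RootCA.cer > name_of_signing_chain.cer
cat name_of_signing_chain.cer
```

If you have several intermediates, list each one between the leaf and the root in order of issuance.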
Terraform Import with index

Hi,

Have you tried using the terraform plugin with the import command with an index on the resource like this:

task: TerraformTaskV3@3
  displayName: 'Terraform import'
  inputs:
    provider: 'azurerm'
    command: 'custom'
    workingDirectory: '$(System.DefaultWorkingDirectory)'
    commandOptions: '--var-file myvarfile.tfvars azurerm_route_table.tr-tblroute["index"] /subscriptions/id/resourceGroups/rg/providers/Microsoft.Network/routeTables/tr-routetable'
    outputTo: 'console'
    customCommand: 'import'
    environmentServiceNameAzureRM: serviceName

Looks like the plugin doesn't allow putting an index in the options line? Is it a bug? Thanks.

More details about my tests:

commandOptions: '--var-file NPR.AZ.COMMUN.01.tfvars azurerm_route.tr-route["pr-svcpartage-commun-001"] ${{ parameters.resourceId }}'

output:
/usr/bin/terraform import --var-file NPR.AZ.COMMUN.01.tfvars azurerm_route.tr-route[pr-svcpartage-commun-001] ...
Index brackets must contain either a literal number or a literal string.

If I add quotes like this:

commandOptions: '--var-file NPR.AZ.COMMUN.01.tfvars azurerm_route.tr-route["pr-svcpartage-commun-001"] ${{ parameters.resourceId }}'

output:
/usr/bin/terraform import --var-file NPR.AZ.COMMUN.01.tfvars azurerm_route.tr-route[\pr-svcpartage-commun-001"] ...

The first " of the index has disappeared and I don't know why.

Thanks for reporting, we will look into this and get back to you soon.

Hi. Have you tried to escape the quotes? Like:

commandOptions: '--var-file myvarfile.tfvars azurerm_route_table.tr-tblroute["index"]

Yes, I have tried like this:

commandOptions: '--var-file NPR.AZ.COMMUN.01.tfvars azurerm_route.tr-route["pr-svcpartage-commun-001"] ${{ parameters.resourceId }}'

and the output is:

/usr/bin/terraform import --var-file NPR.AZ.COMMUN.01.tfvars azurerm_route.tr-route[\pr-svcpartage-commun-001"] ...

The first " of the index has disappeared.

Ok, that is strange. Let me look around and see what I can come up with.
I think the issue here is that the extension takes a string input and uses another tool lib, azure-pipelines-task-lib/task, to parse the quotes etc.

Ok, found another reported issue on azure-pipelines-task-lib that stated if you want a literal "b" you should escape it like this: "\"b\"". That would mean:

commandOptions: '--var-file myvarfile.tfvars azurerm_route_table.tr-tblroute["\"index\""]

Nice! It works with the way you wrote the index: ["\"index\""]. Do you plan to correct this problem, or should I plan to write the quotes the way you indicated to me? Thanks!

As this is not specific to the extension but rather behavior of the azure-pipelines-task-lib, we will not fix this. I will however update the documentation on escaping quotes.
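The stripped-quote behaviour reported in this thread is what ordinary shell-style tokenising does to an options string. Python's shlex is only an analogue (not the actual azure-pipelines-task-lib parser), but it reproduces both observations:

```python
import shlex

# A bare-quoted index: the tokenizer consumes the quotes, so Terraform
# sees tr-route[pr-svc] and rejects it ("Index brackets must contain
# either a literal number or a literal string").
plain = shlex.split('azurerm_route.tr-route["pr-svc"]')

# Escaped quotes ("\"...\"") survive tokenizing, giving Terraform the
# literal-string index it requires.
escaped = shlex.split(r'azurerm_route.tr-route["\"pr-svc\""]')

print(plain)    # ['azurerm_route.tr-route[pr-svc]']
print(escaped)  # ['azurerm_route.tr-route["pr-svc"]']
```

This is why the `["\"index\""]` workaround suggested below in the thread works: the outer quotes feed the tokenizer, and the escaped inner quotes reach Terraform intact.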
How to pass parameters to triton functions?

I am very confused about how to pass arguments to the triton kernels. Please see the example below. Could you please let me know: 1) why I cannot pass integers by positional arguments, such as "a"? 2) why the reshape function does not take external arguments, such as "c" and "d"?

    import triton
    import triton.language as tl
    import torch

    @triton.jit
    def test(A, a, **meta):
        b = meta['b']
        c = meta['c']
        d = meta['d']
        n1 = tl.arange(0, a)            # this leads to an error
        n2 = tl.arange(0, b)            # this works fine
        n3 = tl.reshape(n2, [c, d])     # this leads to an error

    x = torch.zeros(10, 10, device='cuda')
    test[(1,)](x, 6, b=6, c=2, d=3)

Hey,

First of all, block sizes must always be powers of two. Seems like it doesn't error where it should now. Second, they must be compile-time constants. The way things work in Triton right now is that positional arguments are variables, and keyword arguments are metaparameters, i.e., compile-time constants.

In addition, there is also a bug in Triton right now which makes compile-time constants lose their constant quality upon assignment inside kernels :( so although n3 = tl.reshape(n2, [c, d]) doesn't work, doing n3 = tl.reshape(n2, [meta['c'], meta['d']]) should. I hope this bug can be resolved soon, but it's actually pretty hard to do. I'm currently thinking about the best way to resolve ambiguity with compile-time constants. Maybe through a triton.meta module.

Thanks! You mentioned that "block sizes must always be powers of two", which however I am not absolutely sure about. Do you mean I should make a power of two in n1 = tl.arange(0, a) (for example a=8 instead of 6)?

By the way, I think it would be very helpful for the users if the tutorial could explain the concept of "compile-time constants" and its difference from a variable.
Yep.

By the way, I think it would be very helpful for the users if the tutorial could explain the concept of "compile-time constants" and its difference from a variable.

That's true, I'll add it :) Thanks for the feedback.

Here is another thing: I want to perform a correlation operation on a 4D tensor X with shape [B, C, H, W], where B is the batch size, C is the channel, H and W are height and width, respectively. The output Y is another 4D tensor with shape [B, H, W, k*k], where k is the correlation window size. For example, Y_{b, h, w, k_h, k_w}

Sorry if I made the question too complex to reply to. Please let me know if I can provide more details.

I'll be creating a separate issue that specifically seeks to address the legitimate confusion between variables and compile-time constants. For your correlation problem, can you open a separate issue? Thanks!
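Since tl.arange bounds must be powers of two, the usual workaround for a size like 6 is to round up to 8 and mask the extra lanes on load/store. A plain-Python helper can be sketched as follows (recent Triton versions ship triton.next_power_of_2 for the same job):

```python
def next_power_of_2(n: int) -> int:
    """Smallest power of two >= n, e.g. 6 -> 8; used to pick a legal
    Triton block size for a dimension that isn't a power of two."""
    return 1 << max(0, n - 1).bit_length()

# a=6 is rejected by tl.arange(0, a); round up and mask instead:
block = next_power_of_2(6)
print(block)  # 8
```

Inside the kernel one would then use tl.arange(0, 8) together with a mask like `offs < 6` on the memory accesses.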
Microsoft is in the middle of a migration, too. The service and software providers who migrate 3000 sites -- or just support homesteaders with a lot of Windows -- can roll their eyes at all the changes. But the shift from XP to Windows 7 is a much bigger deal than everyday security patches and product updates. Right? Well, not so much. Over and over we've found that the 3000 site which has embraced Windows as a replacement doesn't perceive XP as a lame duck. At True Value Hardware Canada, for example, IT Director Tim Boychuk said the Microsoft announcements of the end of XP's life haven't changed his strategy. "The majority of our production systems are XP," he said. "We're in the prototype stages of testing Windows 7 with [installed ERP solution] Microsoft Dynamics. If [Microsoft] does an announcement of end of support, they have extended it." The latest extension was announced last August; XP now has a 2014 end date. This is practical and cost-effective IT management, the execution of a "not broke, don't change it" strategy. Microsoft's latest announcement puts the third extension onto ending the life of XP Service Pack 2, with a new date of July 13. Online support is available after that, but extended support via Microsoft ends this summer. The simplest way to stick with Microsoft support is to upgrade clients to SP3. This extension strategy from Microsoft doesn't change the fact that the desktop OS that links with server apps like Dynamics is well beyond Redmond's plan for retirement. (If that reminds homesteading 3000 managers of the string of HP support extensions for MPE/iX, perhaps it says something about keeping to your own schedule, rather than following the vendor's plans.) XP still has hundreds of thousands of support experts available for hire, given that it shipped with millions of PCs over the last nine years. That's a different picture than seeing the 3000's ecosystem pared down over the same period.
Clearly, XP use can be a lower risk than MPE/iX deployment, until you look at browser support in XP. Internet Explorer 6 has a long list of security deficiencies that require patches, and IE 6 has been an essential part of the XP experience. Boychuk's operations are moving to Windows 7 to address this, all the while keeping XP in mission-critical use. He's among the many 3000 sites that moved away from HP completely in their migration, following his shop's expertise. "We didn't have much Unix experience here," he explained. Today, IBM's servers power a Microsoft Windows Server 2003 environment. Add Microsoft's app and subtract the HP support contracts, and this becomes a customer whom HP lost in its migration push. With 700 dealers and independent hardware suppliers to serve, this is not a small site, either. One of the key elements of True Value's 3000 installation made the transition to Windows. Hillary Software's byRequest is still in use by those dealers, serving up 400,000 reports via email, fax or Web interface. Just like the Dynamics 2003 server app, byRequest doesn't care if a PC runs the on-its-way-out XP. It doesn't require Windows 7, but the more current Dynamics version will need 7. Risk always lies in the eye of the IT manager, beholding his choices independently.
UI and business logic, or the chicken-and-egg problem

Recently I faced the challenge of wiring up a newly implemented (by another developer) UI layer of a web application to the application itself. The UI solution was very complex, with lots of cross-event dependencies and a fancy labyrinth of nested template references (those who know Angular can definitely relate). The reason why the author hadn't finished the task himself was apparent: it required connecting this UI to an already existing feature (outside his pure-UI area of responsibility) and refactoring the underlying business logic layer without any domain awareness. The situation was complicated by the feature's use of dynamic reactive forms.

So for some reason, the idea of implementing the bare UI solution (HTML+CSS basically, through a custom shared UI library) and passing it on for further revitalization was considered a good one by our Product Owner. That's how I ended up in front of my IDE in complete frustration. Long story short, I finished this tedious, mind-blowing chunk of work, and it started to look and function as intended again, but with a new shiny layout and styles. The predictable aftermath was:

- extra time spent by the first developer to mock functionality for the newly implemented UI layer
- additional time spent by the second developer (me) to wrap his head around the new contraptions and their mock data flow
- complication of business logic to adjust it to UI scaffolding

Needless to say, I was angry with all these consequences. This case made me think a lot about the correct and appropriate flow of developing application features, both in general and with Angular in particular. In my humble opinion, the following summary should be adopted as a de-facto standard for software development processes, probably even written down in the corresponding guidelines (documentation) of each specific project.
Okay, to not be like this, let me just suggest it as a good practice for working on existing front-end applications (i.e., refactoring). So here they are, three simple yet powerful rules of approaching a feature refactoring: - Always compile detailed business requirements ahead of development to avoid deep rework - Don't separate work on interconnected layers of a single feature between different developers (unless it's pair programming) - Start from mechanics, finish with UI, not vice versa These three rules will allow a project manager (or a team lead, a product owner, or whoever it might be) to shine and their team to be productive, efficient, and fast. Moreover, they prevent burnout and lower blood pressure, it's verified. Software development should be a mindful process not only as a whole but also in its parts. That's how we put all these separated concerns together. Cover photo by Daniel Tuttle on Unsplash
Thanks for chiming in. Replies inline below:

On Mon, May 23, 2011 at 17:02, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> On Mon, May 23, 2011 at 02:39:39PM -0700, Nuno Subtil wrote:
>> I have an MD RAID-1 array with two SATA drives, formatted as XFS.
> Hi Nuno. It is probably best to say this at the start, too:
>> This is on an ARM system running kernel 2.6.39.
> So we know what platform this is occurring on.

Will keep that in mind. Thanks.

>> Occasionally, doing an umount followed by a mount causes the mount to
>> fail with errors that strongly suggest some sort of filesystem
>> corruption (usually 'bad clientid' with a seemingly arbitrary ID, but
>> occasionally invalid log errors as well).
> So reading back the journal is getting bad data?

I'm not sure. XFS claims it found a bad clientid. I'm not versed enough in filesystems to be able to tell for myself :)

>> The one thing in common among all these failures is that they require
>> xfs_repair -L to recover from. This has already caused a few
>> lost+found entries (and data loss on recently written files). I
>> originally noticed this bug because of mount failures at boot, but
>> I've managed to repro it reliably with this script:
> Yup, that's normal with recovery errors.

>> while true; do
>>   mount /store
>>   (cd /store && tar xf test.tar)
>>   umount /store
>>   mount /store
>>   rm -rf /store/test-data
>>   umount /store

> Ok, so there's nothing here that actually says it's an unmount
> error. More likely it is a vmap problem in log recovery resulting in
> aliasing or some other stale data appearing in the buffer pages.
> Can you add a 'xfs_logprint -t <device>' after the umount?
> You should always see something like this telling you the log is clean:

Well, I just ran into this again even without using the script:

root@howl:/# umount /dev/md5
root@howl:/# xfs_logprint -t /dev/md5
data device: 0x905
log device: 0x905 daddr: 488382880 length: 476936
log tail: 731 head: 859 state: <DIRTY>
LOG REC AT LSN cycle 1 block 731 (0x1, 0x2db)
LOG REC AT LSN cycle 1 block 795 (0x1, 0x31b)

I see nothing in dmesg at umount time. Attempting to mount the device at this point, I got:

[  764.516319] XFS (md5): Mounting Filesystem
[  764.601082] XFS (md5): Starting recovery (logdev: internal)
[  764.626294] XFS (md5): xlog_recover_process_data: bad clientid 0x0
[  764.632559] XFS (md5): log mount/recovery failed: error 5
[  764.638151] XFS (md5): log mount failed

Based on your description, this would be an unmount problem rather than a vmap problem? I've tried adding a sync before each umount, as well as testing on a plain old disk partition (i.e., without going through MD), but the problem persists either way.

> $ xfs_logprint -t /dev/vdb
> data device: 0xfd10
> log device: 0xfd10 daddr: 11534368 length: 20480
> log tail: 51 head: 51 state: <CLEAN>

> If the log is not clean on an unmount, then you may have an unmount
> problem. If it is clean when the recovery error occurs, then it's
> almost certainly a problem with your platform not implementing vmap
> cache flushing correctly, not an XFS problem.

>> I'm not entirely sure that this is XFS-specific, but the same script
>> does run successfully overnight on the same MD array with ext3 on it.
> ext3 doesn't use vmapped buffers at all, so won't show such a problem.

>> Has something like this been seen before?
> Every so often on ARM, MIPS, etc. platforms that have virtually
> indexed caches.

> Dave Chinner
Here you can find the submissions to the challenge, along with links to the code hosted on GitHub. Not all submissions received are listed here. If a submission does not appear to include any Equihash code, then it is not published here and was not considered for judgment unless there was code added before the deadline. Submissions are listed in no particular order.

The PIMP team has created your favorite mining software, such as BAMT 2, PiMP, PoolManager, SeedManager, FarmWatcher, and Miner.farm. With world-class support, business dev, and server experts. By miners, for miners.

Install your GPU drivers as you normally would and reboot. You'll know your GPU has been recognized correctly if, when you go to Device Manager (search for it in the Windows search bar), you don't see any warning marks on your GPUs. They should look like this:

Block: A block is a unit of code that comprises the blockchain. It is the record of transactions that have occurred since the last block was created and a confirmation of previous transactions. Each block links to the block before it, thus creating a full chain back to the original or "genesis" block.

The other thing you need to complete the mining process is a wallet. You require a wallet to receive, send and store your funds. There are multiple GPU wallets available and you can choose the right one by comparing features and reviews.

Buying gear and mining cryptocurrency with it allows you to own an income-producing asset in the gear itself, with aftermarket resale value holding up very well and even appreciating. GPUs purchased for mining in 2015 and 2016 were often sold at a profit in 2017 due to high demand in the market!

Run with the '-h' parameter to learn the possible commands. By default, the miner uses all CPU cores and no CUDA devices. You need to explicitly enable CUDA devices (setting the '-cd' parameter). If you wish to mine only with CUDA and no CPU, set '-t 0'.

There's no payout. Reached 700, took a screenshot, but no income.
If I could give lower than one star I'd do it. It also forces you to open a Coinbase account specific to this app even if you already have one.

Hey Charlie, OK, I expect you're checking that out on the South African Bitmart.co.za site which sells miners and other crypto gear? It's their name for 6x 1080 GPUs in a pre-assembled rig… So sure, it'll be good for mining Zcash, as that's its intended purpose after all and the 1080 GPU is a good miner. That said, electricity in SA has become pretty expensive and Bitmart's prices are fairly steep. Their setup goes for R88k but I see that a single GTX 1080 (basic model, not the Ti) goes for about R10k on Evetech: https://www.evetech.co.za/PC-Components/buy-nvidia-geforce-gtx-graphics-cards-47.aspx You should check which…

Building a large ZEC position through mining now may allow you to take advantage of price appreciation in the future with less risk than you'd face by just buying ZEC. Let's break this idea down a bit further.

Sure, it's easy. Set the Trezor up as per its documentation, making sure to accurately record and securely store the seed phrase as your top priority. Of course, you must also remember / record your PIN.

It is no secret that AMD holds a solid leadership position in the world of cryptocurrency mining. For many of our readers, NVIDIA GPUs have a good mix of gaming and deep learning training capabilities. After our extremely popular Monero mining series, which works well on AMD GPUs and Intel/AMD CPUs, we started looking for a cryptocurrency mining application for NVIDIA GPUs. We re-discovered Zcash mining.

FPGA: Field Programmable Gate Array. An FPGA is the former king of the Bitcoin mining world. It is an integrated circuit whose function can be changed, as it can be reprogrammed. This makes it more versatile than an ASIC, but far less efficient. FPGAs enjoyed a short time between GPUs and ASICs as the most efficient way to mine.
For further details, please see our hardware page. Choose and download an appropriate mining program that supports: a) your desired coin (Zcash), b) your GPU (GTX 1080), c) your operating system (Windows 10), and sometimes d) your pool (rarely, but it does occasionally happen).

Before we begin, it is important to note that the responsibility of mining rests only with the miners themselves. Caution is required when running a computer at high power for a long period of time. We have never heard of a case where a miner burned a house down, but computers can certainly smoke. In addition, no one is guaranteed to see profits, due to fluctuations in exchange rates and changes in mining difficulty and performance.
The object of this page is to start using GRASS and to get familiar with some general GIS operations. We are going to use a command line approach. This is to enable carrying out stand-alone processes in the future, and to understand step by step each function and the options available per function.

Let's start using GRASS. Every GRASS project has a predefined data structure:

GISDBASE - GRASS data are stored in a directory referred to as a DATABASE, "GISDBASE". This directory has to be created with mkdir or a file manager before starting to work with GRASS. Within this DATABASE, the projects are organized by project areas stored in subdirectories called LOCATIONs.

LOCATION - A LOCATION is defined by its coordinate system, map projection and geographical boundaries.
The subdirectories and files defining a LOCATION are created automatically when GRASS is started for the first time with a new LOCATION.

MAPSET - LOCATIONs can have many MAPSETs. Each MAPSET is a subdirectory of its LOCATION. A new MAPSET can be added at GRASS startup.

A common problem for beginners is not really using GRASS but merely starting a GRASS session. The main reason is the GRASS data structure and the way a session has to be set up before starting!

To remove a MAPSET, remove its directory using your file manager or with:

rm -rf /path/./mapsettodelete

The wxGUI graphical user interface provides options to rename/remove LOCATIONs and MAPSETs.

There are several ways to use and open GRASS. The simplest way is to open a terminal and type:

A second option is to use the Python GUI by typing in the terminal:

The graphical user interface (GUI) will ask you to define the GISDBASE, LOCATION and MAPSET to use. If you want, you can select the Location wizard to create a new location with the newest projection parameters, or the Create mapset button to create a new mapset inside a pre-existing location.
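The GISDBASE > LOCATION > MAPSET hierarchy described above is just a directory tree on disk. A sketch with example names (grassdb as GISDBASE, europe as LOCATION) shows the shape GRASS expects; normally GRASS itself creates PERMANENT when the LOCATION is first made, so this is for illustration only:

```shell
# GISDBASE/LOCATION/MAPSET layout, created by hand only to illustrate.
mkdir -p grassdb/europe/PERMANENT   # PERMANENT mapset (shared base maps)
mkdir -p grassdb/europe/user1       # a per-user working mapset
ls grassdb/europe
```

Deleting a MAPSET is then nothing more than removing its subdirectory, which is why rm -rf works for that purpose.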
To enter GRASS in command line: grass74 -text ~/ost4sem/grassdb/europe/PERMANENT Using the above command line we have already entered the GRASS environment with the GISDBASE, LOCATION and MAPSET defined by the ~/ost4sem/grassdb/europe/PERMANENT path. Once you are running GRASS through the bash shell terminal you can always start the graphical user interface with: g.gui wxpython &
GRASS COMMAND STRUCTURE (class - type of command - examples):
- d.* display: d.rast views a raster map; d.vect views a vector map
- db.* database: db.select selects value(s) from a table
- g.* general file operations: g.rename renames a map
- i.* imagery: i.smap image classifier
- ps.* map creation in PostScript format: ps.map
- r.* raster data processing: r.buffer buffer around raster features; r.mapcalc map algebra
- r3.* raster voxel data processing: r3.mapcalc volume map algebra
- v.* vector data processing: v.overlay vector map intersections
For detailed instructions on GRASS command syntax and use, go to the GRASS online manual or type man followed by the function name. GRASS working environment: the g.gisenv command informs you of your current GRASS environment settings. If you started GRASS correctly you should see the following lines on your terminal. Running GRASS through the bash shell terminal allows you to use all command line functionality of both GRASS and the shell. As an example, you can type and visualize all files available in your shell's current working directory. This means that all output files produced by bash command line functionality will be saved in the current working directory (if not specified differently). Many non-geographical GRASS output features, such as text file reports or images, will be saved in the current working directory as well. The GRASS 7.4.0 (europe):~ > prompt informs you that you are currently working in the home folder. You can double-check the same information using a bash command instead of the GRASS g.gisenv command. It is a good working habit within GRASS to set the bash shell working directory to the same directory as your GRASS LOCATION folder.
On the terminal you will no longer see GRASS 7.4.0 (europe):~ > but rather GRASS 7.4.0 (europe):~/ost4sem/grassdb/europe >, and you will be aware of whether your current bash shell working directory matches your GRASS location directory. We have explained that GRASS projects can be organised in MAPSETs by users, by themes, by extent or locations, and grouped within the same GRASS LOCATION. This LOCATION groups several MAPSETs defined by a common projection, all able to access a common set of maps in the PERMANENT MAPSET folder. When we work in a specific MAPSET we have no rights to write or delete maps in a different MAPSET. To do so you have to change your working MAPSET directory and then delete or produce new maps. The g.mapset command allows you to change the GRASS working directory and successively generate, delete or modify maps as you wish. From GRASS 6.3 onward, the -c flag creates a new mapset if it doesn't exist; the -l flag lists available mapsets. To list your available maps: g.list type=vect -p g.list vect -p g.list rast -p The user can add, modify, and delete data layers that exist under his current mapset. Although the user can also access (i.e., use) data that are stored under other mapsets in the same GRASS location, the user can only make permanent changes (create or modify data) in the current mapset.
Now we can access the fnfpc_crop mapset and eventually copy a map from the PCEMstat mapset to our current mapset directory using the g.copy function: g.list rast -p To delete a map, use the g.remove command: g.remove -f type=rast name=fnfpc_crop You can access, but not delete or modify, a map in a different mapset from your current one: g.remove -f type=rast name=fnfpc_crop@PCEMstat We have to be careful in GRASS to understand the possible differences existing in the same MAPSET between the whole extent and resolution of the MAPSET itself, the extent, resolution and geographic location of our working region, and the extent and location of what we are visualising. In GRASS, a region refers to a geographic area with defined boundaries, based on a specific map coordinate system and map projection. This crucial GRASS setting allows us to define, within the MAPSET settings, a particular region to work in. Once the GRASS region is defined, GRASS modules (or programs) will operate within this region. The user can create, modify, and store as many geographic region definitions as desired for any given mapset. However, only one of these geographic region definitions will be current at any given moment for a specified mapset. GRASS programs that respect the geographic region settings will use the current geographic region settings. To query your current region settings, type g.region -p, and to reset region settings to their default extent, type g.region -d (-p stands for "print"; -d stands for "default"). You can modify your default g.region with the -s flag within the PERMANENT directory. Now we will define a new study area for the Scandinavia region. You will visualize the current and newest region settings saved as scandinavia. If you open the GRASS GUI you can visualize the Computational Region in the Display menu by clicking "Show computation Extent", and you will see the Scandinavia area within a red frame.
Back in GRASS, reset the default GRASS region. Clipping maps and changing resolution: g.region allows us to perform two basic GIS functions, resampling and clipping. We would like to have 3 new maps of forest/non-forest percentage clipped and resampled at different resolutions from a forest/non-forest map. The original forest/non-forest map has the European extent and 1 km resolution. We need to generate: an Italian extent map at 20 km resolution, an Alpine extent map at 10 km resolution, and an Alpine and Carpathians extent map at 5 km resolution. In the europe LOCATION different g.region definitions exist. The g.region definitions are saved in the following folders and named as follows: We now resample the g.region to 20 km using the res=new_res option, which will set 75 x 57 pixels of 20 km resolution.
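The arithmetic g.region performs when you change the resolution can be checked with a short script. Note that the extent values below are illustrative placeholders (chosen so they reproduce the 75 x 57 grid mentioned above); they are not the real bounds of the Italian region.

```python
# Sketch of the arithmetic behind g.region res=...:
# rows = (north - south) / res, cols = (east - west) / res.
# The extents here are hypothetical values in metres, NOT the
# actual bounds used in the tutorial's europe LOCATION.
def region_size(north, south, east, west, res):
    rows = round((north - south) / res)
    cols = round((east - west) / res)
    return rows, cols

# 20 km resolution over a hypothetical 1500 km x 1140 km window
rows, cols = region_size(north=1_500_000, south=0,
                         east=1_140_000, west=0, res=20_000)
print(rows, cols)  # 75 57
```

Halving the resolution value quadruples the number of cells, which is worth keeping in mind before resampling large rasters.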
This past weekend, Chloe McAteer, Eamon Compston and I took part in the AINI hackathon - the largest that Northern Ireland has ever seen! The hackathon was run across two locations, Belfast (QUB) and Derry (UU) - with over 200 people taking part in total. The theme was utilising AI for good and building innovative solutions that would benefit the world or humanity. It all kicked off on Saturday; we arrived in the morning and wasted no time - we quickly grabbed a project room as we thought this would be the best environment to allow us to brainstorm, whiteboard and get the creative juices flowing. Post-its and whiteboard markers at the ready, we proceeded to the opening address, which noted some important resources including the judging criteria that would be used the next day. It was completely transparent and everyone had access, giving them the best chance of competing in the competition! After a quick boost of caffeine, we were good to go. The brainstorming commenced. We wanted to align our idea as closely with the theme as possible, building a solution that would have a positive impact on humanity. After some initial research, we discovered an ongoing problem that is widespread throughout every industry: human bias (intentional and unintentional) plays a significant part in the initial screening of job applicants - specifically, bias in the shortlisting of CVs. We found some staggering statistics that really hammered home that this is truly a major issue. How could we use our skills to tackle this? Could we create a viable solution that could positively impact humanity and potentially remove this bias? How feasible is a solution for this problem? The event was just over one day long, but we had ambitious plans.
Enter DiverseCV 📄 We wanted to craft an experience that would level the playing field for both employers and potential candidates, ensuring that candidates could trust that they were being treated fairly and would get the opportunity to progress in the recruitment process based on their skills and those alone. For a team of three, we had quite a range of different skills, from front-end development to machine learning & data science. We decided to utilise natural language processing to build an engine that would process CVs and remove anything that could act as a potential flag for human bias, for example: gender, race and ethnicity. We then wanted the engine to be wrapped up and exposed as a simple API for different clients to interact with. As with most of the ML/AI world, there is heavy usage of Python - so we decided this would definitely be the path of least resistance. Settling on Django as our web application framework and PostgreSQL as our persistence layer, we were ready to get cracking. However, when it came to pitching we didn't want to just showcase a Postman request or cURL command - we wanted something a bit more substantial, so we decided on building a simple React application to interact with our API. Let's get engineering 🛠 For the front-end, we wanted to showcase the perspective of both types of users using the service - the applicants uploading their CVs and the company viewing the applicants' anonymous CVs. We built a simple interface that allowed the applicant to upload their CV before it was sent off to the backend service. The flip side of the UI was a simple accordion that displayed each of the anonymous CVs with only the relevant UUID indicator that linked to the submission stored in the database. For the backend, we simply exposed a few endpoints within Django that interacted with the database but, crucially, invoked the natural language processing functionality before storing the documents in an S3 bucket.
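To give a flavour of the kind of redaction engine described above, here is a toy sketch. The regexes and the term list are illustrative placeholders only - the actual DiverseCV engine used proper NLP models, not a hand-written word list.

```python
import re

# Toy sketch of bias-flag redaction: strip email addresses (which leak
# names) and mask gendered pronouns/titles. Illustrative only - not the
# real DiverseCV implementation.
GENDERED_TERMS = {"he", "she", "him", "her", "his", "hers", "mr", "mrs", "ms"}

def redact_cv(text: str) -> str:
    # Replace email addresses wholesale
    text = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[REDACTED]", text)
    # Mask gendered terms, ignoring trailing punctuation and case
    return " ".join(
        "[REDACTED]" if word.strip(".,").lower() in GENDERED_TERMS else word
        for word in text.split()
    )

print(redact_cv("Mr John Smith, jane@mail.com. He led a team."))
```

A production version would also need to handle names, photos, dates of birth and similar signals, which is where the NLP models come in.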
By Saturday evening, we were all exhausted - it had been incredibly productive but we needed rest. Closing the laptops and grabbing a beer, we called it a night. Having managed to get both services up and running, we were happy with our progress. Sunday morning. Time to pitch our idea and progress. Since the judging criteria were completely transparent, we wanted to ensure that we were meeting each of them - we gathered in our project room and started progressing through each of the criteria while putting some slides together for our idea. We really wanted to hammer home how much of an issue this was, along with how viable our solution was. Critically, at the initial opening of the hackathon, it was stressed that the judges wanted to see the solution that had been built - not just a problem pitched with a fleshed-out idea for a solution. After fuelling up with some coffee, we quickly merged the two services together and began practicing our pitch. Given a strict five minute slot, we highlighted the different aspects of our solution, covering both the technical side and general feasibility. We also highlighted that the solution was viable to integrate into different existing platforms that deal with recruitment. Whilst we pitched the idea with the different use cases and technical complexities, we wanted to highlight that our goal, ultimately, was helping to level the playing field - giving everyone the fair opportunity to be judged for their skills and their skills alone. At the closing address of the hackathon, we were absolutely ecstatic to be awarded 3rd prize! There were some incredibly innovative uses of AI for good throughout the weekend, ranging from in-depth data science and analysis to real-time translation of sign language. Huge thanks to the organisers, @AINI community - overall, it was a brilliant weekend and thoroughly enjoyable. Looking forward to the next one!
+ 0 Votes So have you looked in the WiFi Access Point OH Smeg February 21, 2013 at 9:24am PST And bridged the Wireless and Wired LANs together? By default these are separated to prevent cross talk. Col + 0 Votes Our network in short... codyd56 February 22, 2013 at 4:16am PST I have the fiber coming into the building which is then connected to my wireless router (which acts as my DHCP server), which is connected to the Cisco Small Business unmanaged switch. Everything in the building that is ethernet connected is connected to that switch, and anything wireless connects to the router. + 0 Votes Can you ping the workstations? kmthom February 21, 2013 at 11:19pm PST Try pinging each workstation and device from one of the machines. Also, are you using an Active Directory or workgroups type setup? If you are not sure what Active Directory or a domain controller is, you are probably using workgroups. Depending on your version of Windows, you can do a simple Google search to find out how to change the computer names/workgroups. Try putting them all within the same group and test it. Make sure the individual computer names are different, however! Let us know how it goes, -=K=- + 0 Votes Try entering the full path to a known host in the network wellington.it February 22, 2013 at 12:10am PST This comment builds on kmthom's comment. You didn't say what version of Windows you are working on and whether the other members of the network are the same version. Try pinging a host name where you are sure of the exact host name and the shared drive/directory. If the ping test is successful, try putting the exact address of the share into an Explorer address bar (\\hostname\DriveOrShare). I have experienced this exact problem in the past, and when I put in the full path to the share, Windows all of a sudden saw the host AND all of the other hosts on the network. + 0 Votes No luck so far...
codyd56 February 22, 2013 at 12:11am PST I am able to ping everything on my network from my laptop EXCEPT for that computer. I believe I am using a workgroup setup and yes, the new computer is in the same workgroup. There are only about a dozen devices in this network so I'm fairly confident there are no computers with repeated names. My father is a network admin and he has suggested setting up a domain several times, but I still have no idea what that is or if it will help, and I haven't really had time to research it all that much. As for the bridge between the wireless and LAN, I am looking in the internet-based control panel and I do not see anything that jumps out at me as what you're talking about. Could it be called something different and I'm missing it? + 0 Votes Subnet issue?? Sysadmin/Babysitter February 22, 2013 at 2:22am PST 1- Since you are on a wireless network, everybody else connected to the SAME wireless router should be seen. (This assumes that the wireless connection is their/your ONLY connection.) 2- If you are connected to the same router that the other users are connecting to (wired or wireless), can you see the others? 3- Try using a hub/switch to connect to the other computers' network (rather than the router you are using). 4- Also, ensure that NONE or ALL of the computers you are trying to see are Apple OS X computers. + 0 Votes These are all Windows 7 Professional 64-bit computers codyd56 February 22, 2013 at 2:50am PST All computers are also receiving IP/subnet info via DHCP. I am not able to ping or direct the computer to any other computers on the network. I'm now apparently having issues with my laptop not being able to see other computers on the network also. I'll keep trying to work with what I've got, but does anyone think a domain controller or something else could fix this situation?
Run softprops/action-gh-release@v2
Error: ⚠️ GitHub Releases requires a tag

    - name: Create tag
      id: create_tag
      run: |
        tag_name="v1.0.${{ github.run_number }}"
        git tag $tag_name
        git push origin $tag_name

    - name: Create GitHub Release
      uses: softprops/action-gh-release@v2
      with:
        files: |
          release/salam-linux
          release/salam-mac
          release/salam-windows.exe
      env:
        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

I fixed it by adding a tag_name, but the confusing part was that it showed the same error even when I was passing tag_name. My mistake was that I needed to keep the "v" at the beginning of the tag_name. I think it would be better to improve the error message of this action. Thanks again. Thank you for showing how to create a release from a non-tagged commit. Are you required to push the tag? I would prefer to not litter the repo with garbage tags. @VioletGiraffe Are you required to push the tag? I would prefer to not litter the repo with garbage tags. It's not really a tag name but rather a "proposed tag name", which will become an actual tag when the draft release is published. Here's how it's supposed to work:
1. Create a draft release with build artifacts and the proposed tag of the upcoming release (no tag is created at this point)
2. Test artifacts
3. If the tests fail, discard the draft release
4. Fix the problem
5. Run the pipeline again, so it creates another draft release with a new set of artifacts and the same proposed tag
6. Test artifacts
7. If the tests are successful, then publish the release, at which point the proposed tag becomes the real tag
Many people publish their release right away, without the draft release, which is what creates those garbage tags that need to be deleted if tests fail. I fully handled this process in one of my projects with the help of a GitHub Action and some other packages.
Here is the full source of my showcase: https://github.com/SalamLang/Salam/blob/main/.github/workflows/build-release.yml In short: I automatically build and release my compiler/programming language for 3 different OSes (Windows, Linux, macOS). This GitHub Action makes the process automatic: we read the VERSION number from a file that sits in the repository root, and we create the tag and the release on GitHub automatically using that VERSION number. Best, Max @BaseMax There are a couple of points you may want to consider with this setup. The most important one is that build artifacts in this workflow are being released without any testing. So, for the amount of time it takes you to test these artifacts and issue a fix, a buggy version will be tagged and made available through a non-draft release, and perhaps on whatever website that POST request is made to. You may also want to avoid using latest build images. This will eventually cause compiler compatibility issues when GitHub deploys new build images with new compilers, which may be incompatible with the current source. Typically you'd want to pick a specific GitHub image, like ubuntu-24.04 or windows-2022, so you always know what toolset will be available. When you're ready to move to the next compiler version, only at that time would you pick the new build image in the workflow file.
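The draft-first flow described in the thread can be sketched as a single step; `tag_name` and `draft` are documented inputs of softprops/action-gh-release, while the step name and version value here are illustrative:

```yaml
# Sketch of the draft-first flow: with draft: true, the proposed tag
# only becomes a real git tag when the draft release is published,
# so no "git tag && git push" step (and no garbage tags) is needed.
- name: Create draft release
  uses: softprops/action-gh-release@v2
  with:
    tag_name: v1.0.${{ github.run_number }}  # "v" prefix kept
    draft: true                              # tag created only on publish
    files: |
      release/salam-linux
      release/salam-mac
      release/salam-windows.exe
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```

If artifact tests fail, you discard the draft and re-run the pipeline; the same proposed tag can be reused because it was never pushed.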
To design and build a Multi-Function Gate using Xilinx's FPGA tools and to document the design. Xilinx's FPGA tools will be used to design, simulate and implement this multi-function gate on the BASYS Board FPGA. The Multi-Function Gate in this experiment is a double-input, single-output gate that can be instructed to perform four different logic operations by placing a control value on the inputs X and Y. The instruction to this Multi-Function Gate is provided by the operation select bits, which thus determine how the gate will act. Figure 3-1 shows the block diagram of such a gate. A and B form the data inputs and F the single output. X and Y are the operation select lines. Figure 3-1: Block Diagram of Multi-Function Gate The circuit should be synthesized such that for a given X and Y, F is a certain function of A and B. A typical multi-function gate specification may be as follows: a NOR to be implemented when X=0 and Y=0, and an XOR to be implemented when X=0 and Y=1, and so on. The multi-function gate that you are to design is as follows: the operations for XY = 00, 01, 11, 10 are AND, OR, NAND, and NOR respectively. 1. Read this experiment carefully to become familiar with the experiment. 2. Represent the output F as a function of X, Y, A and B on a truth table. 3. Write the minimum logic expression, as a sum-of-products, for the function F. 4. Draw logic diagrams for the above expressions using ANDs, ORs, and Inverters. 1. Design and simulate this circuit using Xilinx's ISE using the schematic capture tool and the simulation tool. Generate printouts of the schematic circuit, timing diagram and test bench inputs. Be sure that all sixteen input conditions have been met for A, B, X, and Y. You will need to set the test bench end time to 4000 nanoseconds and the simulation end time also to 4000 nanoseconds to have enough time to cycle through all possible inputs.
See Experiment #2 (pg 3) on how to set the test bench end time and the simulation end time. 2. Now implement the design in Step 1 of this procedure and configure the FPGA so that A is on SW0, B is on SW1, X is on SW6, and Y is on SW7. Also use LED7 for the output F. Appendix D gives the pin details for the switches and the LEDs. Download the FPGA configuration file to the BASYS board using the EXPORT program as described in Experiment #1 of the laboratory. Verify that for all 16 inputs the output F matches the truth table in Step 1. 3. Repeat Step 1, but this time design and simulate the Boolean circuit using the VERILOG language in the ISE. Generate printouts of the VERILOG file, timing diagram and test bench inputs. Be sure that all sixteen input conditions have been met for A, B, X, and Y. Set the test bench end time to 4000 nanoseconds and the simulation end time also to 4000 nanoseconds. You may need additional wires beyond the input and output wires (e.g. wire a, b, c, d, e, etc.). 4. Implement the design in Step 3 of this procedure on the BASYS board using the same configuration given in Step 2. Verify that for all 16 inputs the output F matches the truth table in Step 1. (To be incorporated within the Conclusion section of your lab report.) 1. Can this Multi-Function Gate be operated as an Inverter? If yes, explain how. 2. Will a change in the number of inputs or outputs affect the number of operation select lines? Explain. 3. Will a change in the number of functions alter the number of operation select lines? 4. Have you met all the requirements of this lab (Design Specification Plan)? 5. How should your design be tested (Test Plan)?
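The 16-row truth table from prep step 2 can be sanity-checked with a short script before you simulate. Python is used here purely as a checking aid; the lab itself uses schematic capture and Verilog.

```python
# Multi-function gate from the lab spec:
# XY = 00 -> AND, 01 -> OR, 11 -> NAND, 10 -> NOR
def F(x, y, a, b):
    ops = {
        (0, 0): a & b,        # AND
        (0, 1): a | b,        # OR
        (1, 1): 1 - (a & b),  # NAND
        (1, 0): 1 - (a | b),  # NOR
    }
    return ops[(x, y)]

# Print the full 16-row truth table (prep step 2)
for x in (0, 1):
    for y in (0, 1):
        for a in (0, 1):
            for b in (0, 1):
                print(x, y, a, b, F(x, y, a, b))
```

Comparing the simulator's timing diagram against this printout is a quick way to catch a mis-wired select line.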
Somewhat related to my other question. At this point, the players in my game have travelled through a Kobold Empire. They have killed Ogres for them. They have slain a Dragon and given them back control of an abandoned temple. They have been told that they were only allowed in because the Kobolds were tricking them into slaying the Dragon. They have negotiated a reward (in gold and trade rights) with the Kobold Ranger that was tracking them. Now, they'll be heading to a nearby Duke, who will try to trick them into doing dangerous jobs for the Kobolds without paying them for it. At the end of the day: - They will probably not get the gold they were promised - There will most likely be no trading happening - The players might end up risking their lives more, at no real reward - The players might end up in the gladiator pits if they play it badly - They may have accidentally made the Kobolds realise that there is a human city across the river that's trying to grow in importance - The Kobolds don't care if they or their friends get killed, and will not really help them unless it serves their own empire Obviously, a lot of this is pretty bad. The players are pretty much outgunned and outwitted. They could cause the total destruction of the city they've been spending time building up. They could get locked in the gladiator pits and face all their dangers. They will not get any of the stuff they were promised, and they might even lose out on the reason they went to the temple in the first place if the Kobolds kick them out before they retrieve it. But I also feel like this is something they're getting themselves into. The entire campaign is a sandbox placed on a frontier. There are no set goals, I don't really plan out the plot or world far in advance, and the players are entirely free to take the story where they want. I never once suggested it would be a good idea to trade with this huge empire.
I did tell them the Kobolds are xenophobic (to the point where the Gnome has been going around in a disguise for weeks) and manipulative. I even drove the point home with the Kobold Ranger telling them they were allowed access because the Kobolds were trying to get them to kill the Dragon for them. If the next session ends badly for the player characters, how can I reinforce that I'm just trying to portray the world fairly and this was all their own idea? I don't want them to feel bad (losing an RPG can also be fun) but I also don't want them to blame me for sticking them in an impossible situation. And ideally, I'd like to make them realise this without having to resort to an out-of-game "Well, I did warn you here and here and here", because that tends to leave a bitter taste. Having the DM tell you after the game that you missed the cues is never fun, imho. Some information on the players: these are people I've known for around 10 years, because we do volunteer work together. However, most of them I only really see when we're working together or playing together, and only rarely outside of these situations. This is the first game we've played together, although all have previous experience. We've been playing about once every 6 weeks for about a year. - Aftermath - Since we've played the previous session, I figured I'd share the conclusion. I took Wibbs' (and others') advice about expectations and signposting and started the new session by playing up the Ranger's lack of trustworthiness a bit more, which caused them to change plans and instead kidnap the Ranger. This turned into a very thrilling battle as the Kobolds (successfully) managed to bust out their friend before scattering and leaving them alone from there on out. The players successfully left the Kobold Empire and are currently plotting revenge (which I intend to let them have). Everyone had a great time and nobody felt bad, even though there were few rewards earned from this adventure.
Does DuplicateHandle() do any interprocess communication (IPC), and if not, why the target params? I am finding DuplicateHandle() very confusing. The third and fourth params, hTargetProcessHandle and lpTargetHandle, seem to imply that this API function does some form of interprocess communication, but what I have been reading online seems to imply (without saying directly) that in fact this function cannot communicate with anything outside of the address space of its own process, and that if you really do want to, say, copy the local process handle to another process, you have to do that yourself manually. So can someone please take pity on me and tell me definitively whether or not this function does any IPC itself? Also, if it doesn't do any IPC, then what is the point of those two parameters? How can there be a 'target' if no data is sent and the output of this function is not visible to other processes? At first I thought I could call GetCurrentProcess() and then use DuplicateHandle() to copy the local process handle to another process, but then I started to realize that it probably isn't that easy. For kernel objects, it's straightforward for the NtDuplicateObject system call in the Object Manager to attach to the source and target processes to access and modify their kernel handle tables. Neither process is necessarily the calling process. The caller just needs PROCESS_DUP_HANDLE access on its handle for each process. Some additional IPC is required to communicate the new handle value to the target process. The third parameter hTargetProcessHandle is documented as "A handle to the process that is to receive the duplicated handle." That means that the handle (which is just a numeric value underneath) will become usable within the target process. However, how you get this handle into the target process, and in what context it is to be used there, is out of the scope of that function.
Also note that "is to receive" points to the future and refers to the result of the call, so it must be after the call has finished. As an analogy, you want to allow a friend into your house. For that, you are creating a second key to your door. That alone doesn't mean that your friend can now unlock your door, because you first have to give it to them, but it's a first step. Ok, but a guy cutting me a duplicate key would not need to know my friend's address. I would find it very suspicious and strange if he asked me for it. I would want to know why he wanted that information. Is this just like yet another security step? To continue the analogy, is it like the guy making the key with a fingerprint reader so that it won't open the door unless my friend is using it? If so, it seems like a ridiculous level of paranoia to me. Yes, the key comes with a fingerprint reader so that only the right person can use it. This is not so much a security measure but exists for practical reasons. Every process has a list of resources that are owned (or co-owned) by it but handled by the OS. A handle is often just an index into that process's list. The same index specifies a different resource for a different process, hence the need to know the process for which to duplicate the handle. This is just regular process separation in a multitasking environment.
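The "handle is just an index into a per-process table" point can be illustrated with a toy model. This is purely conceptual, to show why the duplicating function must be told which process's table receives the new entry; it is not how the Windows kernel actually stores handle tables.

```python
# Toy model of per-process handle tables. A handle value only means
# something relative to the owning process's table, which is why a
# DuplicateHandle-style call needs to know the TARGET process.
# Conceptual sketch only - not the real kernel data structures.
class Process:
    def __init__(self):
        self.handle_table = {}   # handle value -> kernel object
        self.next_handle = 4     # handles are opaque small numbers

def duplicate_handle(src, src_handle, target):
    obj = src.handle_table[src_handle]   # resolve in the SOURCE table
    h = target.next_handle               # allocate in the TARGET table
    target.next_handle += 4
    target.handle_table[h] = obj
    return h  # this value is only meaningful inside `target`

a, b = Process(), Process()
a.handle_table[4] = "some kernel object"
h = duplicate_handle(a, 4, b)
print(h, b.handle_table[h])
```

Note that no data moved between "address spaces": the kernel-side tables were updated, but the numeric result still has to be communicated to the target process by some separate IPC, exactly as the answer says.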
- The OneNote app keeps crashing due to corrupted file settings or a corrupted Windows profile. - OneNote is a great tool, but if it is not working, try running a certain PowerShell command. - To fix various issues with this software, make sure to reinstall the app from the official website. - If OneNote is not responding, resetting the application can help you fix the problem. OneNote is a very useful part of the Microsoft Office package, especially for students. But some people have reported having issues with this tool. So, we've created this article in order to help you solve OneNote problems in Windows 10. OneNote is a useful note-taking application that many Windows 10 users use. Even though OneNote is incredibly useful, sometimes certain issues can appear while using this app. Speaking of issues, many users reported the following problems: - OneNote keeps crashing, not responding: These are some of the problems that can appear with OneNote, but you should be able to fix most of them by using one of our solutions - OneNote won't open Windows 10: Many users reported that OneNote won't open at all on their PC. If this happens, you need to delete the settings file and see if that helps - OneNote not opening, working: Several users reported that OneNote won't work or open at all. This can be a big problem, but you should be able to fix it simply by reinstalling the application - OneNote problems Windows 10 something went wrong: This is another common problem with OneNote. If you encounter this issue, you should be able to fix it by using one of our solutions - OneNote won't sync: Syncing is an important part of OneNote since it allows you to view your notes on different devices. If your device won't sync, you might be able to fix the problem by resetting the application to default - OneNote error 0x803d0013: This is one of many error codes that can appear while using OneNote.
This problem can be caused by a corrupted user profile, and you can fix it by creating a new user profile in Windows - OneNote you are not connected to the Internet: Sometimes you might get this error message while trying to use OneNote. In most cases, this issue is related to your security software, so be sure that your antivirus or firewall isn't interfering with OneNote How do I fix OneNote not working on Windows 10? 1. Use PowerShell - Open the Windows search bar and type cmd. - Select the Command Prompt Run as administrator option. - Enter the PowerShell command and press Enter: - After that, enter this command and press Enter: - After that, enter one more command and press Enter: remove-appxprovisionedpackage -Online -PackageName Microsoft.Office.OneNote_2014.919.2035.737_neutral_~_8wekyb3d8bbwe - Restart your computer. After you perform the above steps, try to open OneNote again. Now, everything should work fine again. If these steps did not help with your issue, go to the next solutions. 2. Re-install Microsoft Office If the PowerShell solution didn't solve the problem, you can try reinstalling the complete Microsoft Office package. If you used Office before you upgraded your system to Windows 10, there's a chance that something went wrong while the system was upgrading. So, go to Programs and Features, uninstall the complete Office suite, and then download it or install it from the installation disc again. If you have downloaded Microsoft Office from a not-so-secure website you might have a problem. Get it from official sources: 3. Delete the settings.dat file - Press Windows Key + R and enter %localappdata%. - Now press Enter or click OK. - Now navigate to the Packages\Microsoft.Office.OneNote_8wekyb3d8bbwe\Settings directory and delete the settings.dat file. If you're having OneNote problems, the issue might be related to the settings.dat file. This is a settings file for OneNote, and if this file gets corrupted, you won't be able to start OneNote properly.
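The settings.dat removal above can also be scripted. The package folder name comes from the steps just described; the `base` parameter is an assumption added here so the sketch can be tried safely against a test directory instead of the real %LOCALAPPDATA%.

```python
import os

# Sketch of solution 3: remove OneNote's settings.dat so the app
# rebuilds it on next launch. Package folder name taken from the
# manual steps above; `base` defaults to %LOCALAPPDATA% on Windows.
SETTINGS_REL = os.path.join(
    "Packages", "Microsoft.Office.OneNote_8wekyb3d8bbwe",
    "Settings", "settings.dat",
)

def delete_onenote_settings(base=None):
    base = base or os.environ.get("LOCALAPPDATA", "")
    path = os.path.join(base, SETTINGS_REL)
    if os.path.exists(path):
        os.remove(path)
        return True   # file found and deleted
    return False      # nothing to delete
```

Run it with OneNote closed; the file is recreated with defaults the next time OneNote starts.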
To fix the problem, delete the settings.dat file and restart OneNote. After doing that, check whether the problem still persists.

4. Switch to a different page

Several users reported syncing problems with OneNote: document changes aren't being synced across devices. This can be a big problem, since you won't be able to access your documents from a different device. However, users have found a useful workaround. After you're done editing your document, switch to a different page in OneNote. Doing so forces OneNote to sync your changes. By default, OneNote should sync any changes as soon as you make them, but if that doesn't happen, try this workaround. It isn't a permanent solution, but you should at least be able to sync your documents with it.

5. Click the + button

Several users reported that OneNote is unable to sync: they are stuck waiting for their notebooks to load. This can be a big problem, since you won't be able to access your notes at all. To fix the issue, simply click the + tab while your notebooks load. This will allow you to sign in to your account. After doing that, the loading should finish and you'll be able to access your notes again. This seems like a strange bug, but this workaround should get you past it.

6. Delete OneNote cache
- Press Windows Key + R to open the Run dialog.
- Now enter OneNote /safeboot.
- After doing that, select Delete cache and Delete settings.

If you're having OneNote problems, you might be able to fix them by removing the OneNote cache. Once you delete the cache and settings, you should be able to start OneNote without any problems.

7. Create a new user account
- Press Windows Key + I to open the Settings app.
- When the Settings app opens, go to the Accounts section.
- In the left pane, choose Family & other people.
- In the right pane, choose Add someone else to this PC.
- Select I don't have this person's sign-in information.
- Now choose Add a user without a Microsoft account.
- Enter the desired user name and click Next.

If you're having OneNote problems, you might be able to fix the issue simply by creating a new user account. OneNote is a built-in application in Windows 10, closely tied to your user account, and if your user account gets corrupted, you won't be able to access OneNote anymore. You can easily check whether your user account is the problem by creating a new one. After creating a new user account, switch to it and check if the problem is still present. If not, it means that your old user account is corrupted. You can't repair a corrupted account, but you can move all your personal files from the corrupted account to your new one.

8. Reset OneNote app
- Open the Settings app and go to the Apps section.
- A list of installed applications will appear.
- Select OneNote from the list and click Advanced options.
- Now click the Reset button.
- A confirmation dialog will appear. Click the Reset button again to confirm.

If you're having OneNote problems on your PC, you might be able to solve the issue simply by resetting the app to its defaults.

9. Install the missing updates
- Open the Settings app and go to the Update & security section.
- Now click the Check for updates button in the right pane.

OneNote issues might be due to missing updates. Windows 10 is a solid operating system, but bugs can appear, and since OneNote is a built-in application, these issues can affect it. It's advised to keep Windows up to date. By default, Windows 10 installs missing updates automatically, but sometimes you might miss an important update due to certain bugs or errors. If any updates are available, Windows will download them in the background and install them once you restart your PC.
If your PC is already up to date, try some of the other solutions. By following these steps you should be able to solve the most common OneNote issues on Windows 10. If you have any comments, head to the comment section below, and we'll try to clear up any uncertainties.
Re: Uncoordinated upload of the rustified librsvg

On Wed, Nov 07, 2018 at 11:53:06AM +0100, John Paul Adrian Glaubitz wrote:
> > librsvg has rewritten substantial fractions of its code upstream in
> > Rust. It won't be the last such library or package to do so.

> Well, I wouldn't bet on that. I know that a lot of people have the
> feeling that rewriting everything in Rust will solve all problems
> in software we have nowadays but that's not the case. Rewriting large
> projects is associated with a high cost and not many companies are
> willing to pay for that. Also, there have already been several
> vulnerabilities in Rust and Cargo as well, so the safety is not
> really an argument.

I really don't feel the need to recreate extensive language arguments
here. I think it safe to say that Rust's small handful of documented
issues in the standard library pales in comparison to the history of
whole classes of bugs in C programs.

But the point of this thread is not advocacy, it's simple observation.
I'm not suggesting the world will get rewritten in Rust overnight. It
seems a rather safe bet, however, that a non-zero number of additional
Rust libraries and binaries will show up in the core ecosystem.

> > Running old versions of a library is not a viable long-term strategy.
> > Attempting to find alternatives written in C is not a viable long-term
> > strategy either; that's running in the wrong direction. Ultimately, the
> > new version will need uploading to Debian, and an architecture that
> > wants to run a full desktop, or for that matter a server or embedded
> > environment, will need to have LLVM support and Rust support.

> I know that. That's why I also criticized the upstream developer of
> librsvg, who happens to be a colleague of mine at SUSE and who was
> responsible for that change.

For attempting to improve beyond C? Hardly a criticism.
> Will be interesting to see what will happen in the future
> when the rustified version of librsvg will try to move into the enterprise

Seems far less likely to encounter issues, given that enterprise
distributions target mainstream architectures only.

> > I think it's reasonable for the *first* such library being uploaded to
> > wait a little while to coordinate, which it did.

> It didn't even wait for Rust to stabilize on the architectures it was
> recently bootstrapped for. There was no guarantee the Rust compiler will
> work on arm32 or mips32 in the foreseeable future.

Define "stabilize". And in particular, how were people to know this from

> Given the fact that Rust upstream is always introducing a significant number
> of changes with each release, there is quite a chance of regressions of
> the compiler on these architectures.

This does not relate. The language has active development, like any
package that isn't dead upstream. What makes it any *more* likely to
have regressions?

What makes it likely to have regressions is a lack of direct support for
such architectures upstream. As a random example: where are the bots
that run testsuites on other architectures for PRs?

> > I don't, however, think it's reasonable to wait indefinitely.

> No one was saying that. But I think it's more reasonable to wait for
> the Rust compiler to stabilize

Rust is stable. Thank you for your contributions helping it work on more
architectures, but "does not have first-tier support for every
architecture ever" is not a component of "stabilize".

> There is still no Rust-stable branch in sight which is
> most certainly a requirement for Rust to be part of enterprise distributions.

This has certainly been discussed upstream, but in general, it's not
obvious what this would gain over simply taking any stable release of
Rust and packaging it.
> I know the QA processes associated with SLES to update packages in a release
> version and I could imagine that it's not anything less involved for
> RHEL or other enterprise distributions. It seems that Rust upstream has
> not had any of the enterprise and long-term support distributions in
> mind yet. They seem to assume that distributions can just always use the
> latest upstream versions.

No, we assume that distributions can package Rust alongside Rust
software and that the packaged software will work with the packaged
Rust. There's no need to use "the latest upstream version"; you only
need to update to a new upstream version of Rust if you update to a new
upstream version of software written in Rust.

> > If even more coordination had taken place than what already did,
> > what would have been the expected outcome?

> A Rust compiler that doesn't regress every six weeks, maybe?

It's not reasonable to block the introduction of software written in
Rust on some developer (not yet identified) taking the time to
contribute the necessary infrastructure upstream to continually test
multiple additional uncommon architectures. And that's what would be
required.

> > precisely because if non-release architectures need to
> > keep an outdated version while working on porting efforts, they'll
> > automatically do so, and that shouldn't impede those architectures too
> > much as long as other packages don't start depending specifically on
> > functionality from the new librsvg. (And if packages do start having
> > such dependencies, they'll get held back too.)

> Debian Ports doesn't support the cruft mechanism that DAK supports. We're
> lucky that the librsvg-common package is of arch any, otherwise librsvg
> would already have been uninstallable in Debian Ports. So, this is just
> pure luck. Please don't make such statements when you're not aware of
> the differences between Debian's release and ports architectures.

Good to know, and sorry to hear that.
Another reason why it doesn't seem particularly unreasonable to focus on
release architectures, and treat others as "best effort".

> > Approaching the problem from a different angle: what help is needed
> > getting a viable LLVM and Rust development environment for more
> > architectures?

> There are open reviews for LLVM, for example:
>
> https://reviews.llvm.org/D50784
> https://reviews.llvm.org/D50856
> https://reviews.llvm.org/D50858
>
> And a bug in Rust:
>
> https://github.com/rust-lang/rust/issues/49773

Thanks for calling attention to these. Hopefully appropriate folks with
both port and LLVM expertise will be able to look at them.

> > Speaking with an upstream Rust hat on in addition to a
> > Debian hat: what could Rust do to make life easier for porters?

> Please provide an actual stable version of the Rust compiler that
> is supported in the long term and can be shipped by enterprise
> distributions.

There's a stable version of the compiler every six weeks. Pick one and
package it.

If, instead of "stable", you mean "supported on other architectures",
that's going to require upstream infrastructure to *test* those
architectures on a regular basis.

> > And what could Debian's *considerable* expertise in porting do to make
> > that more sustainable upstream? (As an example, it might help if
> > upstream Rust folks had access to machines for more architectures,
> > though that's a side issue for having an LLVM port in the first place.)

> Debian Ports has worked closely with QEMU upstream to help make significant
> improvements to that emulator. So, in most cases, Rust developers can just
> use QEMU for the first porting efforts. But there are also porterboxes
> available from gcc, to which we from Debian Ports have also provided
> hardware, for example:

I'm more suggesting that if people want to see an architecture better
supported, it needs to end up in at least tier 2 upstream.
best language for 3D manipulation over web ?

amardeep at tct.hut.fi
Tue Jun 5 13:07:17 EDT 2001

On Tue, 5 Jun 2001, Attila Feher wrote:
> Creating a perfectly secured Unix system is equally extremely hard.

not if you know what you are doing

> Yep, w/o the latest SP (was it 5??) for NT4 it was possible. _If_ you
> were sitting next to it. Still, it is _very_ rare that a simple user
> can sit next to an NT Domain Controller and start whatever he wants.
> And if you can crack into an NT workstation... It is still possible
> that the IT guys on the wire get alarm about every admin login :-)))

and winnt is the only platform where you can get an alarm

> > So the quality of this "code viewing" depends a lot on "who MS chooses" and how
> > many. I still say that more people see UNIX code than MS code (I won't even
> > mention LINUX here, what is seen even by normal users). And the people who are
> > really interesting (hackers and crackers) will not be able to inspect the code
> > of Micro$oft and point out possible problems.

> Maybe, maybe not good that many can see the source. You _never_ know
> that a guy seeing it and finding something (which has about 1E-10 chance
> w/o the sources) will turn to you or start dialing and make some

i cannot convince you otherwise, but perhaps you may rethink. open
source is not a guarantee of security, but the security-through-obscurity
idea is dumb in my regard.

"...The reason the closed source model doesn't work is that
security-breakers are a lot more motivated and persistent than good guys
(who have lots of other things to worry about). The bad guys will find
the holes whether source is open or closed. Closed sources do three bad
things. One: they create a false sense of security. Two: they mean that
the good guys will not find holes and fix them. Three: they make it
harder to distribute trustworthy fixes when a hole is revealed."
- Eric Raymond of opensource.org

and quoting from "Unix System Security Tools":

"At best, security through obscurity can provide temporary protection.
But never be lulled by it -- with modest effort and time, secrets can be
discovered. As Deep Throat points out on X-Files: "There's always

> Blue Screen cannot come from user SW. That NT runs on a faulty or
> non-supported HW or uses a badly written driver. Do any of this with a
> UNIX and will get the same, but called kernel panic.

and also when some part of the kernel code is badly written. and you
seem to be so sure that the nt kernel is so good that it cannot cause
these problems. and i wonder what happened in between, that microsoft
promised that win2000 is going to be much more stable, and it actually
turned out so (for my case, and several others whom i know). hardware
and drivers shipped with them were the same. but i admit drivers shipped
by microsoft with their os did change. and strangely enough, i always
thought windows has much better hardware support than linux. but those
machines which used to crash a lot with nt did not give any problems
with linux and 2000. (i know 2000 is nt, but here i am referring to nt4)

> > One of my friends is working in a computer company, which are offering and they
> > run a WinNT web server and he told me that it crashes at least once a month,
> > usually more often. The Solaris server at our university is now running for
> > years and it never crashed even once. It only was rebooted to add new hardware.

> Than you must have a real good luck. I use Solaris here and I know what
> I am talking about :-))) Reboot is once per day on a test machine where
> "badly behaving" SW can run.

i would say that you have real bad luck

> Yep. First I gave up trying to use Java when I have installed the
> JRE and it crashed my whole Windows 95. I had to reinstall.

interesting. but so sad that such software only exists for windows which
could bring the whole machine down with it.
> > China, one billion people. Computer shops in China sell Linux 200 times more
> > often than Windows. The Chinese government plans to increase the usage of Linux
> > even more (they don't trust Micro$oft, open source rules, as they can make sure
> > there's no spyware inside). BTW downloaded distributions aren't counted here.

> Why don't they trust MS? :-))) I cannot imagine...

Why should i trust M$ :-))) I cannot imagine...

> > Why should 3d access "over web" to a database be limited to x86 or Windows
> > users? Why can't it be for everyone? Why aren't people in China allowed to use
> > it? Because you believe that you can save 5 minutes through a win-only solution
> > (what is not even true)?

> Windows NT is not limited to x86...

and which other platforms are supported?

and finally, is this python-list or what

More information about the Python-list mailing list
CRUX 1.1 Released
Eugenia Loli 2003-03-24 Linux 8 Comments

The hobby Linux distro CRUX 1.1 was released. Changelog.

About The Author
Eugenia Loli
Ex-programmer, ex-editor in chief at OSNews.com, now a visual artist/filmmaker. Follow me on Twitter @EugeniaLoli

8 Comments

2003-03-24 5:32 pm Anonymous
Nice, simple, clean, modern and optimized distribution; much like Slackware but with even fewer packages. Definitely worth a look if you want to set up a compile-(almost)-everything box (GARNOME is your friend) or a highly specialized box (LinVDR).
dev0

2003-03-24 10:49 pm Anonymous
I had never compiled a kernel before. Downloaded CRUX, followed their instructions. Everything went smoothly, I felt great. But if you don't want to bother with that, don't bother with CRUX. I also really liked the people on their mailing lists/chat rooms. Very friendly and helpful, even to a n00b like me, and even though they say in their docs that it's not a n00b distro.

2003-03-25 12:12 am Anonymous
If you enjoy CRUX then you will (most likely) like Arch Linux <www.archlinux.org>. I have used both but like Arch better. You should use whatever works best for you.
jon

2003-03-25 1:24 am Anonymous
I don't understand why anyone would create a distro that has little to no package management. It is completely useless. These distros can't do a single thing Debian cannot. Want to compile your apps? Fine, just don't use apt. 99% of the time you aren't going to want to compile everything anyway.

2003-03-25 1:43 am Anonymous
@yerma
Well, if someone wants to compile everything by hand, it's his problem. CRUX or Arch Linux are not going to be competition for Debian, but they are good distros for finding out how things work. It's just a hobby. Sure, it's nice to use a big, comfortable car to go to work, but some will use a VW Beetle instead. And I don't understand the "These distros can't do a single thing Debian cannot" point. I can say the same about Windows and Debian.
It's a free market, and people will use what they want.
ps, note: source-based distros CAN be faster.

2003-03-25 7:00 am Anonymous
What I am saying is these distros offer nothing Debian doesn't have. If you want to compile all your apps like you would in this distro, you can do that in Debian too.

2003-03-25 8:56 am Anonymous
These distros offer exactly what Debian does not offer: up-to-dateness of packages (in the STABLE versions of the distro), and a non-complex packaging system. Both CRUX and Arch are kept up-to-date continuously. You will find the latest versions of software in them. As for the non-complex packaging system, it means creating a package yourself is a _breeze_. It's literally mostly creating a small textfile of about 10 lines of text. Of course, there are other differences too, but I use CRUX for these…

2003-03-25 11:48 am Anonymous
Why make things complex when they can be made simple? With CRUX things are simple and functional. Yes, there is no fancy package management with dependency checking. To me a good package system is one whose main function is simply to keep track of installed files so that I can remove them easily if needed. Dependencies are listed in most CRUX packages (from http://crux.lugs.ch) but the simple package system (pkgutils) takes no notice of this. Simply check the dependency lists and install missing packages yourself – it's that simple. Most applications don't have a lot of dependencies, so it's quite easy to install things. Of course this can be a bit complicated with large projects with huge dependencies such as GNOME and KDE, but it is possible. There are unofficial scripts for these projects. Have you ever looked in a .ebuild file? Oh my god – it's not that easy to maintain. I would recommend taking a look at a CRUX Pkgfile instead – it's simple and clean! The unofficial ports (package build files) collection at crux.lugs.ch is pushing 700+ ports now. It includes the most popular ports.
If there is a package missing, make it yourself or request it – it's that simple. As mentioned, CRUX is fast and very much up to date compared with most other distros. There is a release roughly every 3 months. I would recommend this distribution to anyone (newbie or experienced). It's really that nice – try it out.
At present Git is the most used "version control system" in the world, and at the same time a misunderstood, mechanically used tool for many. There are hundreds of articles and videos about it, but not many talk about its building blocks; they tend to dwell on usage and commands. Even if you go through the official documentation, you only get the hang of usage, not the internals. Internals are fun to know, and knowing them will boost your confidence while you use the product. So in PART 1, I explain the internals in simple terms, as usual. Let me see if I can do it.

Git is a Distributed Version Control System — what does that mean?

The creator of Git is none other than Linus Torvalds. Hope you know his first product, the Linux kernel, which is ruling the world. What else would you expect of him? Just one more product to rule the world. And first things first: Git is just a "stupid content tracker", as expressed by the creator himself. How it was made into a "Distributed Version Control System" is sheer brilliance. It means it does the following:

Centralized vs Distributed

In a centralized repository, multiple versions of files are maintained on a central server. When people sync for the recent changes at some point, they download the most recent snapshot from the server. Later, if you want to go to any previous version, you fetch it from the central server and replace your local copy with it. That means, at any point in time, only one snapshot is present on your local machine.

In contrast, once you clone or pull, Git stores everything in its entirety in your local storage. Let's assume you have 20 released versions of your repository: Git stores all those 20 snapshots on your local system, and you can go back and forth without connecting to any server. I know you might be asking me — "Wait!
Let's say my repo size is 5 GB and it stores 20 versions of it in my local repository, so it sweeps off 5 × 20 = 100 GB of my disk space?" Actually no! That's the beauty of Git. How Git stores and visualizes files really matters, and that unique mechanism gives it its distributed power.

Hash & DAG — keep these in mind

Before getting into the Git object model, you must know the following about hashes:

Another concept to remember is the DAG. A DAG ("Directed Acyclic Graph") is a graph representation without cycles. Tracking evolution is one of its use cases.

Git's object model — a quick glance

In your Git repository, you will obviously see the current working snapshot of your files. All the other files of the multiple snapshots of your repository are consolidated and represented in a hidden directory called ".git". Let me give a very quick overview of the ".git" storage structure straight away. All those mentioned are consolidated in that hidden ".git" directory present in your repository, as I mentioned. That's it! That's the whole product. As I promised, I will not go further; this level of understanding is good enough to work with the product.

Some important "reference pointers" of Git

Now we have learned that if we just track the "commit" index, we can easily reconstruct a sub-tree of the DAG, and that is your revision. How would you track it? It is again simple: just with reference pointers. A reference pointer tracking a commit index is called a "branch". You can create as many branches as you like: "JIRA-123", "stupidIdea", "RTC-567", etc. Regardless of the reference pointers you create, the 2 reference pointers that you must know are HEAD and master. You are free to rename "master" to "main", "development", "virgin" or whatever. But "HEAD" remains "HEAD".

master — A reference to the latest commit in the repository. This is also the first branch of a repository, created by Git itself. Thus, it keeps tracking the latest snapshot in the repository.
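The two ideas introduced above — content addressed by its hash, and branches as mere reference pointers into the commit DAG — can be sketched in a few lines of Python. This is my own toy model for illustration, not Git's actual code; only the blob-header format (`blob <size>\0` + content, hashed with SHA-1) is Git's documented scheme:

```python
import hashlib

def git_blob_hash(content: bytes) -> str:
    """SHA-1 of 'blob <size>\\0' + content: how Git names a blob object."""
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# matches `echo hello | git hash-object --stdin` ("echo" appends a newline)
print(git_blob_hash(b"hello\n"))  # ce013625030ba8dba906f756967f9e9ca394464a

class Commit:
    """A node in the history DAG; the edge points back to the parent commit."""
    def __init__(self, message, parent=None):
        self.message = message
        self.parent = parent  # no cycles: history only points backwards

# a tiny linear history: c1 <- c2 <- c3
c1 = Commit("initial commit")
c2 = Commit("add feature", parent=c1)
c3 = Commit("fix bug", parent=c2)

branches = {"master": c3}  # a branch is just a name bound to one commit

def history(branch_name):
    """Reconstruct a revision by walking parent pointers from the branch tip."""
    commit, messages = branches[branch_name], []
    while commit is not None:
        messages.append(commit.message)
        commit = commit.parent
    return messages

print(history("master"))  # ['fix bug', 'add feature', 'initial commit']
```

Rebinding `branches["master"]` to a new `Commit` is all it takes to "move" the branch, which is why creating and moving branches is so cheap in Git.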
HEAD — A "movable" reference pointer to a reference pointer. Mostly it references the "master" branch, until switched: when switch/checkout is called, it will reference that particular branch. In the diagram below, I have abstracted a commit as a single entity (internally it points to multiple blobs and trees) to illustrate how master and HEAD move. Just by using this simple idea, you can track all the versions of your Git repository.

Some Git plumbing command examples

Actually, there is no need for you to know the plumbing commands. They are just fun to dig into, and they let you re-verify what I spoke about just now. I am giving them here; feel free to skip this section.

1. git hash-object

It produces the SHA-1 hash for a given piece of content; the same is used by Git internally. You can't feed it raw text directly, but you can pipe text to it through the "echo" command as below:

The "objects" folder of .git will get a new entry. Git uses the first 2 hex characters of the hash as a folder name; the remaining 38 hex characters become the file name. This makes indexing easy. It will appear as below:

2. git cat-file

If you open that object file in an editor, the content will look compressed (it is zlib-deflated). Don't worry: you can inspect it with git cat-file as below:

Now I believe you have understood the basic building blocks of Git. It is nothing but hash-named blobs, trees, and commits connected as a DAG, with reference pointers at multiple places to reconstruct a snapshot. Thus, Git visualizes the file system in its own way so that it can reconstruct any version quickly. This magic happens locally, in a distributed way.

That's pretty much it for this article. PART 2 is cooking and on the way. In that, I promise, I will talk only from the user perspective, and you can use Git confidently thereon. Catch you, until then. C'ya!
The IP address (IP stands for Internet Protocol) is a binary number, from 32 bits (IP version 4) to 128 bits (IP version 6) long, generally used to identify the address of each host computer connected to the Internet. It indicates the address of every notebook, netbook, laptop, computer, or other piece of hardware on a network based on the Transmission Control Protocol (TCP)/Internet Protocol (IP).

An IP address is unique, not duplicated, so one address identifies only one host; for example, Google's IP 126.96.36.199 is used by Google alone. If the computers in your home are connected to the Internet, the IP address is what identifies the network card on your computer so it can interact with the network and other devices (so they can distinguish and communicate with each other).

For those of us who love surfing the internet, it's good to know the IP address of your computer, because it might one day be needed to track the whereabouts of your computer (for identity data), especially if you work in Information Technology (IT); then, of course, it becomes mandatory for you to master this.

How to Know the IP Address of the Computer/Laptop That We Use

Actually there are a variety of ways to look at the IP address of your computer: manually, through software, or online. Nowadays many websites on the internet provide services to check your IP address, which is certainly more practical than typing a command into the Run application; on this occasion I will try to discuss both.

Well, for those of you who are interested to know the IP address of your computer, please follow along.

Using the Manual Method

1. Click Start on the Windows logo.
2. Open the Run application.
3.
Then, type cmd in the box next to the word 'Open'.
4. The Command Prompt window will appear as shown below. Type ipconfig and press Enter.

After that, the results will come out, and you can see the IP address of the network card installed in your computer or laptop. As can be seen in the figure below, the IP address is located above the subnet mask and the default gateway, and the IP of your device is shown next to the words "IP Address"; for example, in the image below my computer's IP is 192.168.1.2.

Using the Online Way

1. Open this website: http://www.ip-adress.com.
2. The IP address of your device comes straight out, along with your ISP and the location of your IP.

To check detailed information about the identity of your computer, including the DNS server and so on, type ipconfig /all at the command prompt mentioned earlier.

The IP address is the network address of the host that you use to connect to the internet, so when your device is not connected to the Internet, your IP address will not appear at the command prompt.
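If you prefer to examine an address programmatically instead of reading `ipconfig` output, Python's standard `ipaddress` module can classify it. Here is a small sketch using the article's sample address 192.168.1.2 (any address would do):

```python
import ipaddress

addr = ipaddress.ip_address("192.168.1.2")
print(addr.version)     # 4 -> a 32-bit IPv4 address
print(addr.is_private)  # True: 192.168.0.0/16 is a private LAN range,
                        # so this is not your public (internet-facing) IP
print(int(addr))        # the same 32 bits as a single integer

v6 = ipaddress.ip_address("2001:db8::1")
print(v6.version)       # 6 -> a 128-bit IPv6 address
```

The `is_private` check also explains why the address `ipconfig` shows (192.168.x.x) usually differs from the one an online checker reports: the online site sees your router's public address instead.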
[WIP] Implementing atomic many updates in CollectionPersister.

TODO: Add test for ReferenceMany. The same code handles both embed and reference, so it should behave the same.

Updates to embed-many documents using the atomic strategy should also be atomic and should not update based on the positional key. This cannot be fully supported until we have https://jira.mongodb.org/browse/SERVER-831

Given the following test document:

> db.test.insert({"test":[{"_id":ObjectId()}]})
> db.test.find().pretty()
{
    "_id" : ObjectId("5516e2f249bf305dfce8ca7c"),
    "test" : [
        {
            "_id" : ObjectId("5516e2f249bf305dfce8ca7b")
        }
    ]
}

With the atomic strategy, the queries to remove a many document change to this:

> db.test.update({"_id" : ObjectId("5516e2f249bf305dfce8ca7c")}, {$pull:{"test":{"_id":ObjectId("5516e2f249bf305dfce8ca7b")}}})

From this:

> db.test.update({"_id" : ObjectId("5516d39235447e556557c694")}, {$unset:{"test.0":true}})
> db.test.update({"_id" : ObjectId("5516d39235447e556557c694")}, {$pull:{"test":null}})

Just for the record, as discussed on IRC yesterday: I'd suggest using an additional atomic=true flag for the many mapping (which will for now throw an exception if used along with the set/setArray strategies) or at least renaming the strategy (atomicPull?), as atomic itself is too generic and can be misleading in the future :)

@malarzm I'm in a bit of a jam and I need the solution you are proposing to solve for the PR I filed today https://github.com/doctrine/mongodb-odm/pull/1094 Do you have any sense of the time frame for releasing this fix?

@blockjon The best bet is for you to implement what you need and submit it upstream. I wouldn't rely on someone here doing what you need in the timeframe you need it in.

@blockjon I'll try to look into it tomorrow or on the weekend at the latest and let you know

Or maybe @malarzm can do it :)

I just opened a pull request against this branch/PR and I see the flaw I've exposed is still a problem even in this branch.
That being said, I still need to hack a solution in the near term. My codebase is currently using the BETA11 tag of the ODM. From which branch or tag should I try to code my solution? I'm assuming the master line.

@blockjon the master branch; the transition from BETA11 to dev-master should be smooth as far as I remember.

@malarzm is this still relevant now that we have the atomicSet and atomic strategies?

IIRC the idea behind this was to assign some kind of identifiers to embedded documents and use only that for the pull. Personally I think that what we have now is sufficient, especially given this can't be fully implemented without a change in Mongo itself. And for that there is an open ticket since 2010, and now people are even taking bets there:

I am taking bets. I put $20 that this will be here in 2019, open. Takers?

All in all I'll close this
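Purely as an illustration of the two strategies quoted earlier in the thread (this is not the ODM's code, and the helper names are mine), the resulting array state can be simulated on plain Python dicts:

```python
def pull_by_id(doc, field, _id):
    # atomic strategy: a single $pull filtered on the embedded document's _id
    doc[field] = [e for e in doc[field] if e["_id"] != _id]

def unset_then_pull(doc, field, index):
    # positional strategy: $unset leaves a null hole at the given index,
    # then a second $pull removes the nulls
    doc[field][index] = None
    doc[field] = [e for e in doc[field] if e is not None]

a = {"test": [{"_id": "a"}, {"_id": "b"}]}
pull_by_id(a, "test", "a")

b = {"test": [{"_id": "a"}, {"_id": "b"}]}
unset_then_pull(b, "test", 0)

print(a)       # {'test': [{'_id': 'b'}]}
print(a == b)  # True: same end state, but the first variant is one operation
               # and does not depend on the positional key
```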
The fastest way to get a handle on deep learning and become productive at developing models for your own machine learning problems is to practice.

Taken together, the second two observations show that any number x that is a multiple of all maximal prime powers below n must be a common multiple of all numbers below n. By (2), if x is a multiple of all maximal prime powers below n, it is also a multiple of all

Toptal is a matching service, initially created with only tech talent in mind. Though it has expanded its pool of talent to include designers and finance experts, the company's bread and butter is its developer vertical.

There are several compilers to high-level object languages, with either unrestricted Python, a restricted subset of Python, or a language similar to Python as the source language.

You can create a company profile, search for candidates using their search algorithm (which can remove gender and racial identifiers for fairer hiring), and request interviews with candidates.

The three end-to-end projects that show you how to use Multilayer Perceptron networks for predictive modeling problems. The application of convolutional neural networks to text data and how to use them to predict the sentiment of movie reviews from the text itself.

In this guide all commands are given in code boxes, where the R code is printed in black, the comment text in blue and the output generated by R in green. All comments/explanations start with the standard comment sign '#' to prevent them from being interpreted by R as commands.

Take away the integrated circuit and the world would stop dead in its tracks, a stark reminder of just how essential computers are to each and every one of us. We do not realize how dependent we have become on them.
It is important, therefore, to keep up with the latest developments, and IEEE journals are a great way to do this.

Keras - a high-level neural networks library, capable of running on top of either TensorFlow or Theano.

Businesses looking for full-time developers may benefit from using Stack Overflow and GitHub's job boards, which can provide great exposure to the Python developer community.

If you would prefer to run tox outside the Travis-created virtualenv, it might be a better idea to use language: generic rather than language: python.

Don't waste your time perusing huge job boards like Monster and Indeed. You'll have far better luck with job boards geared toward tech talent. GitHub has an enormous front-end developer community because it's among the largest open-source online repositories for coders.

A key goal of Python's developers is keeping it fun to use. This is reflected in the language's name (a tribute to the British comedy group Monty Python) and in occasionally playful approaches to tutorials and reference materials, such as examples that refer to spam and eggs (from the famous Monty Python sketch) instead of the standard foo and bar.[51][52]
java.lang.NoClassDefFoundError in cmd

I am facing this problem while trying to run my java file by writing java filename .... I have read on many pages the possible ways this could be corrected but unfortunately I have been unable to correct my problem... First of all I looked at my environment variables and observed that there was no CLASSPATH set and I had pointed PATH correctly to my jre as well as jdk bin in C:\ Second, I am able to run javac filename.java and observe that a .class file gets built in the local directory. While writing javac -classpath . filename works, writing java -classpath . filename (without .class) results in the same error. I just don't know how to run my program in command prompt!!!! Please do not give me links to the pages which have given the same answers that I have mentioned above as they do not work in my case..... Please help....

Is your class in a package (i.e. uses the "package x.y.z" decl at the start of the file)? I suggest you provide a SSCCE together with the exact output of your commands. "Short, Self Contained, Correct Example"

Note that if your class resides in some package mypackage, you need to make sure the class file is inside mypackage/ and do java -classpath . mypackage.YourClass

The default package is discouraged. Place it in some package.

There is a bit too little information in your post ... as said, a complete example would really help in solving this. Some ideas:
- Is your class named filename? If not, you should make sure to call the right class name in the java command.
- If you have strange characters in your class name (basically anything apart from A-Z, a-z, 0-9, _), it could be mangled by your file system and thus lead to java not being able to find it.
- Make sure that the class name is the same as the file name.
- If you use packages, make sure the package names are congruent to your file structure (see latest questions with tag package for some examples).
- Is your class public? If not, make it so.
(This should give another error, though.)

Stating javac filename.java and java filename is not enough for this, really. We also need to know the contents of filename.java, or at least the first few lines.

This is the expected way to declare a class:
public class Filename { }

This is the way it would have to be to work given your example (java filename):
public class filename { }

This could exist too, but you are probably just messing with us :)
public class SomethingElse { }

Overall, no matter what the filename is on the filesystem, the class has a name and that is the name that the java command is expecting. I would recommend using the upper-case-first-letter form as it is clearer imo.
«The real value in changing the status quo and getting more women to the forefront is by having women share the important work that they do, and not just talk about their gender as a topic.»

This episode was recorded right before Christmas, when I had the pleasure to chat with Alexandra Gunderson and Sheri Shamlou. Alexandra, inspired by a Women in Data dinner in New York, took it upon herself to find like-minded people in Norway. That is how she came across Women in Data Science, the conference that was brought to Norway by Heidi Dahl in 2017. The first meetup as a community was June 2018, and this year's WiDS event «Crossing the AI Chasm» is coming to Oslo (and digitally) on May 24th, 2023.

Here are my key takeaways:

Women in Data Science (WiDS)
- «Creating a meeting place, a place for people to connect and get inspired»
- Creating a platform and stage for outstanding women.
- Here are some of the events WiDS organizes:
  - «Champagne Coding» - hands-on event
  - «Data after Dark» - after-work event: 1-2 quick high-level presentations
  - «Data for Good» - get together and solve difficult challenges for greater causes
- An important mission is to increase the number of role models in the community to look up to.
- The goal is to provide arenas to learn together, so it is as important to share stories about failure and collaborate around the learnings from those.
- WiDS is looking for sponsors, and one benefit can be that through events real-life use cases can be solved.
- The focus for 2023 is «scalability»: how to get unstuck from ML and AI pilots and bring your work to production?

The Quest for Diversity
- Diversity is a complex topic with several perspectives: gender, nationality, background, knowledge, expertise, and experience.
- Why is diversity important?
  - Leads to more innovative and effective solutions
  - Leads to more fair and just outcomes
- The starting point when working with diversity on a daily basis is awareness.
Diversity in the workplace
- Diversity doesn’t magically happen. You have to work for it.
- Awareness is a first step, but you also need to collaborate in broader groups.
- The value is gained when you are able to include everyone in your events and talks.
- For people to work together against biases of any kind, you need an inclusive culture from the beginning.
- Be open in your communication and foster a culture of collaboration.
- The «3rd shift» is an important requisite for women to be able to spend the same amount of time and intellectual capacity at work.
- The work for an inclusive work environment is never over. We have to continuously work on it and talk about it.

Diversity in recruitment
- You have to actively seek out and hire people with different backgrounds.
- In recruitment, be aware of how you write a job announcement.
- Use gender-neutral language (avoid stuff like «Data Science Ninja» or «Data Rock Star»).
- There are online applications to check if your language is gender-neutral, with suggestions for replacements of biased words.
- You need to highlight more possibilities with a job, like growth and learning opportunities.
- Minimize the list of requirements in a job posting.
- Be aware of your own biases and work with diverse teams also in recruiting.
- When screening CVs, be aware that different people write in different styles.
- In an interview process, e.g., women don’t like to do coding tests with someone watching them.

Get involved: LinkedIn group, Meetup or https://www.widsoslo.com/.
About the meeting:
The OTO’18 (OCEANS’18 MTS/IEEE Kobe / Techno-Ocean 2018) will be held May 28~31, 2018 in Kobe, Japan. The event is hosted by three joint organizers: the IEEE Oceanic Engineering Society (IEEE/OES), the Marine Technology Society (MTS) and the Japanese Organization of the Consortium for Techno-Ocean 2018 (CJO). The venue will be the Kobe Convention Center, a state-of-the-art facility located on Kobe’s Port Island (Japan’s first man-made island). Kobe itself is an international port city facing the tranquil waters of the Seto Inland Sea, cradled below the surrounding Rokko mountain range. As a tourism city, Kobe is also conveniently close to the ancient cities of Kyoto, Nara, Osaka and Himeji. The OTO’18 convention will be an excellent opportunity to focus on the topics that interest you, in every field related to Marine Technology and Ocean Engineering. We look forward to your participation at OTO’18.

Abstract submission due December 15, 2017

Local and Core Topics
Submissions related to OCEANS2018 LOCAL and MTS IEEE Core Topics will be considered. A list of topics is available here.

Abstracts may be submitted in one of three categories:
- Regular Technical Program: Abstract submitted for review, technical paper presentation in a technical session at the conference, and publication in IEEE Xplore
- Student Poster Competition: Abstract, paper, poster presentation, and publication in IEEE Xplore. Open to any full-time student in an accredited program. Selected applicants, based on abstract reviews, will have travel and registration expenses subsidized
- Special Sessions (Workshops and Panels): Abstract and presentation, no publication. Participation is at the discretion of the Technical Program Committee. For more information, please contact the TPC Chair with requests.

Abstracts should be no more than 2 pages including all text, pictures, and references.
At the top, the abstract must also list the corresponding author and his or her institute or affiliation (e.g. John Smith: University of Someplace). The originally submitted abstract cannot be used as a substitute for the conference paper. Please note that abstract PDFs do not need to be converted through the IEEE Xplore PDF eXpress service. Authors of accepted papers are expected to prepare and submit the conference paper (4 to 10 pages) for the conference proceedings at the conference. The author instructions can be consulted for details on paper requirements. For more information click here: http://www.oceans18mtsieeekobe.org/call-for-abstracts/
ESLint: quote globs to avoid expanding to invalid patterns When I try to run 'npm test' I see the following error: <EMAIL_ADDRESS>lint /home/ahunt/git/gitgitgadget eslint -c .eslintrc.js --ext .ts,.js {lib,script,tests}/**/*.{ts,tsx,js} Oops! Something went wrong! :( ESLint: 7.21.0 No files matching the pattern "lib/**/*.tsx" were found. Please check for typing mistakes in the pattern. This is because my shell appears to be performing brace expansion before passing the globs to eslint. Quoting the globs avoids this, and lets me run 'npm test' locally. An alternative is to use --no-error-on-unmatched-pattern - however that's noisier than just quoting the globs. What I don't understand is why no one else who is contributing to gitgitgadget has ever hit this. (It seems to be a common enough issue elsewhere though.) I was able to reproduce both on Linux and on Mac, and in both cases I'm using bash. Perhaps everyone else is using a different shell though? What I don't understand is why no one else who is contributing to gitgitgadget has ever hit this. (It seems to be a common enough issue elsewhere though.) I was able to reproduce both on Linux and on Mac, and in both cases I'm using bash. Perhaps everyone else is using a different shell though? The github actions run using bash and are not having this issue. Do you have script-shell set in your npm configuration? hmmm, running on Ubuntu WSL, the globbing is not getting an error or building a full list: echo {lib,script,tests}/**/*.{ts,tsx,js} lib/**/*.ts lib/**/*.tsx lib/**/*.js script/**/*.ts script/**/*.tsx script/**/*.js tests/**/*.ts tests/**/*.tsx tests/**/*.js Might be something environmental? Not that I disagree with the change but should we just remove .tsx from the list? Might be something environmental? Not that I disagree with the change but should we just remove .tsx from the list? 
This error is easily explained if @ahunt doesn't have any .tsx files in his tree, and also explains why it works correctly when he quotes the pattern, letting eslint expand it rather than the shell. I'm not familiar with this code. Are the .tsx files generated? If so, perhaps he's missing the generation step, thus explaining why he's seeing this error. This error is easily explained if @ahunt doesn't have any .tsx files in his tree, and also explains why it works correctly when he quotes the pattern, letting eslint expand it rather than the shell. I'm not familiar with this code. Are the .tsx files generated? If so, perhaps he's missing the generation step, thus explaining why he's seeing this error. There are no .tsx files (and none generated). Removing .tsx from the glob pattern will resolve the problem but not explain it. My very simple test suggests globbing will create an array of patterns but not any actual names for this particular input. The man page indicates there should be no errors for unmatched patterns so I am not clear why @ahunt is getting an error. An alternative is to use --no-error-on-unmatched-pattern - however that's noisier than just quoting the globs. I am confused. If the problem occurs before eslint runs, how will this help? I am confused. If the problem occurs before eslint runs, how will this help? Sorry. This sounded like a shell globbing issue but a search suggests it might be an eslint issue. See SO. Sorry for the noise but the error message was not very helpful (not your fault - not what would be expected from eslint). Can you run with echo instead of eslint for the lint command to see what is being passed by the shell? Sorry for confusion. Without reading the error message closely enough, it sounded like a shell globbing issue, but indeed it appears to be reported by eslint. 
If there are indeed no .tsx files anywhere, then removing *.tsx from the pattern seems like a good idea in addition to quoting the pattern to ensure it gets passed literally to eslint without the shell doing any sort of brace or wildcard expansion on it. It's still curious, though, why this works in CI but not for @ahunt. Further investigation as suggested: Using echo instead of eslint prints all permutations: lib/**/*.ts lib/**/*.tsx lib/**/*.js script/**/*.ts script/**/*.tsx script/**/*.js tests/**/*.ts tests/**/*.tsx tests/**/*.js Next I tried using sh as npm's shell using npm config set script-shell /bin/sh Same result as in 1. Then I tried dash: npm config set script-shell /bin/dash {lib,script,tests}/**/*.{ts,tsx,js} csh: npm config set script-shell /bin/csh echo: No match. ksh: npm config set script-shell /bin/ksh Same result as in 1. zsh: npm config set script-shell /bin/zsh zsh:1: no matches found: lib/**/*.tsx I don't know how to figure out what is being used in CI though - it looks like Github actions used to use bash back in 2019: https://github.com/actions/runner/blob/main/docs/adrs/0277-run-action-shell-options.md But then I realised maybe it's my bash version (I'm guessing npm was using bash by default): I was using bash 4.4.23(1)-release. I've upgraded to 5.0.18(1)-release and don't see any difference in behaviour. @ahunt it might have something to do with Ubuntu using dash as default shell, I think. Though I have to admit that I also develop using Ubuntu locally (inside WSL), and haven't encountered the problem. Using echo instead of eslint prints all permutations: lib/**/*.ts lib/**/*.tsx lib/**/*.js script/**/*.ts script/**/*.tsx script/**/*.js tests/**/*.ts tests/**/*.tsx tests/**/*.js Next I tried using sh as npm's shell using npm config set script-shell /bin/sh Same result as in 1. 
Then I tried dash: npm config set script-shell /bin/dash {lib,script,tests}/**/*.{ts,tsx,js} The dash shell does not support braceexpand, as shown by your results in 3. Based on 2, it appears your /bin/sh is pointing at bash. It looks like the default shell for macs is now zsh. man sh should tell you what is being used. Always nice to know what was causing the problem. Something to keep in mind for the future. Thanks for merging! And yes those observations explain it - I took another look and noticed: npm defaults to using the system default shell for running "scripts", including eslint: https://docs.npmjs.com/cli/v7/commands/npm-run-script#script-shell . I think the relevant code lives here. (And even if npm is invoked from bash, it will still use /bin/sh to invoke eslint.) As already discussed upthread, on Ubuntu, /bin/sh == dash, and dash doesn't perform the brace expansion -> Ubuntu or WSL users won't see this issue. On my system (openSuse), /bin/sh is a symlink to bash -> I hit this issue. On my mac, /bin/sh is actually the bash binary (this machine is still on Catalina - it seems like the switch to zsh happens with Big Sur)? Aargh! This breaks all the "PR" tests. @webstech you added pr-test.yml, but it is not actually testing PRs. Instead, it is testing on push, and only if the branch is neither master nor maint. The idea was to prevent redundant workflows from running: when a user pushes to their fork, the workflow is triggered there, and when opening a Pull Request, it would be triggered in the target repository, too. However, this idea only holds water if the contributor did not disable Actions in their fork. And @ahunt disabled Actions, so there was no test failure because there was no test in the first place! https://github.com/ahunt/gitgitgadget/actions And @ahunt disabled Actions, so there was no test failure because there was no test in the first place! https://github.com/ahunt/gitgitgadget/actions Sorry about that 🤦 . 
I don't remember disabling actions... perhaps they require being explicitly enabled first (I've done that now)? I had also been assuming that my PR would be automatically checked (but failed to actually verify that that happened). Thanks for fixing the breakage. Now that I have actions enabled I can hopefully produce and verify a better fix for the brace expansion issues (unless you're feeling too burned by my first attempt, in which case I can just override the eslint config in local development). @ahunt none of this is your fault. It was my responsibility as a reviewer, not yours as a contributor, to verify that the tests run on Windows, still. @ahunt none of this is your fault. It was my responsibility as a reviewer, not yours as a contributor, to verify that the tests run on Windows, still. I did run on Windows without problems. I had seen that the ESLint doc mentioned double quotes and wanted to verify there was no problem. Sorry I failed at that. Not sure if npm is using cmd or powershell in the actions environment. Locally I am using cmd. AFAIR npm uses CMD on Windows, for historical reasons.
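The shell-dependent behavior discussed above can be reproduced with echo alone (the patterns here are illustrative); whichever shell npm's script-shell resolves to decides whether the braces ever reach eslint:

```shell
# bash performs brace expansion itself, so the tool receives each
# alternative as a separate argument (brace expansion happens regardless
# of whether any matching files exist):
bash -c 'echo {lib,tests}/file.{ts,js}'
# -> lib/file.ts lib/file.js tests/file.ts tests/file.js

# Quoting suppresses the expansion, so the literal pattern is passed
# through for eslint to glob on its own, where unmatched alternatives
# are simply skipped:
bash -c 'echo "{lib,tests}/file.{ts,js}"'
# -> {lib,tests}/file.{ts,js}
```

dash, by contrast, has no brace expansion at all, which is why the unquoted pattern happened to work on systems where /bin/sh is dash.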
I’m posting this script because I’ve recently had to create a script so Gimp performs the same action on a group of files. As I haven’t found any easy (for dummies) tutorial about script-fu, I thought this little example would be of interest to beginners, so they could start to understand the structure and internals of a script using the script-fu engine. I’m a newbie too, so maybe I will not be completely right while explaining each function and parameter, but on my computer it is running fine (either Linux Mint x64 or Windows 8.1 x64 with Gimp 2.9.5-CCE). I have not created the script from scratch. Instead, I have joined parts of other scripts, and then I have added the function I needed.

This example could be used as-is if you need to batch process images, sharpening them with the Wavelet sharpen plugin (every file sharpened the same). In case you don’t have that plugin, you can download its source code at Wavelet sharpen plugin (Gimp registry). If you’re not confident about your compiling skills, you can download the plugin with these links:

Windows (32 & 64 bits): gimp - wavelet-sharpen-0.1.2_32bits-64bits_gimp-2.8_Win.zip (152.4 KB)

Linux (compiled in Linux Mint x64, working in Gimp 2.9.5-CCE Appimage, not tested in other versions or environments). If you launch it from the menu, Gimp will show a “message” warning about an error, but the plugin still works. I think it’s important to compile it with the version of Gimp you are using. wavelet-sharpen.zip (13.2 KB)

To know where to find/place your scripts and plugins, see

The explanations are intended to stay in the script, so you will always have an explanation right next to the code. I don’t think the increased size of the script (because of the comments) will make it much slower.

And now the script (you just have to copy the text, create an empty text file, paste the text inside, and save the file as “yourdesiredname.scm” (the “scm” extension is needed)).
(script-fu-register
  ;Here we start registering our script in Gimp's procedural database
  ;(so it will appear in Help>>Procedure Browser) and any other script
  ;will be able to run it.
  ;Next you have to set a few required parameters:

  "script-fu-wavelets-batch"
  ;you give a name to your script, anything you want, but better if it
  ;begins with "script-fu". It will be used along the script

  "Batch wavelet sharpen"
  ;the name of the script shown in the submenu (this script is about
  ;applying a sharpen plugin to a group of images, hence the name)

  "Batch process to sharpen images\
 using Wavelet sharpen plugin."
  ;a description of the script shown as a tooltip (when you mouse over
  ;the submenu name)

  "Javier Bartol"    ;author
  "CC-BY-SA-3.0"     ;copyright
  "April 02, 2018"   ;creation date

  "*"
  ;types of images the script is intended to work with (in this case,
  ;any type, but it could be RGB, RGBA, GRAY, GRAYA, INDEXED or INDEXEDA)

  ;Now you need to give the kind of data each function parameter accepts:
  ;look at the parameters defined in the main function, and set them
  ;in THE SAME ORDER

  SF-STRING "Images" "~/Images/*.tif"
  ;the first parameter has to be a string of characters (SF-STRING).
  ;When running the script a dialog will show, and this first parameter
  ;will be labeled as "Images" and it will have the default content
  ;"~/Images/*.tif" (this will be the path to the files, and the type of
  ;images processed)

  SF-VALUE "Amount" "0"
  ;the second parameter relates to the information needed by the wavelet
  ;sharpen plugin. It has to be a number (SF-VALUE), and the default
  ;value is "0" (this will be the amount of sharpening applied)

  SF-VALUE "Radius" "0.6"
  ;the third parameter has to be a number, "0.6" being the default value
  ;(it will be the radius of sharpening)

  SF-TOGGLE "Luminance" 1
  ;the last parameter must answer a YES/NO question: it will ask if you
  ;will sharpen the luminance channel only (well, in fact, you will only
  ;see a label saying "Luminance", and you will have to guess it is
  ;asking if you want to sharpen luminance only). The default value is
  ;YES, or "1"
)

(script-fu-menu-register "script-fu-wavelets-batch" "<Image>/Filters/Enhance")
;Now you are registering the script in the Gimp menus, using the name you
;gave it, so you will find your script in Filters>>Enhance>>Batch wavelet
;sharpen (apparently <Image> is the root of every menu)

(define (script-fu-wavelets-batch ask-fileglob ask-amount ask-radius ask-luminance)
  ;Declaration of the main function (the part of the script that does the
  ;work): you tell Gimp the name of your script (script-fu-wavelets-batch)
  ;and which parameters it has to ask the user for when launching it.
  ;Those parameters are just variables, and it's better if they have
  ;meaningful names to you. Here I add "ask-" to each variable to remind
  ;me that Gimp will ask for their values.
  (let* ;Here it is: the engine of the script
    ((thefiles (cadr (file-glob ask-fileglob 0))))
    ;Let's go from the inner to the outer part. Remember that
    ;"ask-fileglob" is the first parameter of your function, and is
    ;defined as a string of characters, with the default value
    ;"~/Images/*.tif". "file-glob" will search the path given by
    ;"ask-fileglob" and will return a number followed by a list of
    ;strings of characters. Each string of characters will be the full
    ;path and filename of each image, so we will get: a number, the full
    ;path/filename of the first image, the full path/filename of the
    ;second image, and so on. With "cadr" we will remove the number from
    ;the list (read https://www.gimp.org/tutorials/Basic_Scheme/,
    ;chapter 3.1), although I'm not quite sure why one has to use "cadr"
    ;instead of "cdr". So, the variable "thefiles" will hold the list
    ;returned by "file-glob", without the first member of the list (the
    ;number). The trailing "0" means that the filenames will be coded in
    ;UTF-8.
    (while (not (null? thefiles))
      ;While the list inside "thefiles" is not empty, the script will
      ;perform the actions inside this loop
      (let* ((thefilename (car thefiles))
             ;the variable "thefilename" will hold the first item (car)
             ;from the "thefiles" list
             (image (car (gimp-file-load RUN-NONINTERACTIVE thefilename thefilename)))
             ;then the image with the path present in "thefilename" is loaded
             (drawable (car (gimp-image-get-active-layer image))))
             ;as a last step, the active layer is made the modifiable
             ;(drawable) part of the image (this has to do with Gimp
             ;internals, and may be used in advanced scripts, but here it
             ;is the same as the full image)
        (gimp-image-undo-disable image)
        ;the undo cache is disabled (this is not really necessary in this
        ;simple script)
        (plug-in-wavelet-sharpen RUN-NONINTERACTIVE image drawable ask-amount ask-radius ask-luminance)
        ;the plugin Wavelet sharpen is launched. To know which parameters
        ;the plugin needs, open the Procedure Database and it will be
        ;listed as "plug-in-wavelet-sharpen".
        ;"RUN-NONINTERACTIVE" is set so the plugin doesn't ask for user
        ;input on each image.
        ;"image" is the image to work with.
        ;"drawable" is the editable part of the image to work with (in
        ;our case, it's the full image).
        ;"ask-amount" is the second parameter the script asks for in the
        ;dialog. It's a number, and it will be the amount of sharpening
        ;the plugin will apply.
        ;"ask-radius" is the third parameter of the function. It's a
        ;number (as defined at the beginning, when registering the
        ;script), and will be the radius used by the plugin.
        ;"ask-luminance" is the last parameter of the function. It will
        ;be a tick box, and will be ticked by default, meaning the plugin
        ;will sharpen only the luminance channel.
        (gimp-image-undo-enable image)
        ;the undo cache is enabled again (this is not really necessary
        ;and you can remove both disabling and enabling of the undo cache)
        (gimp-file-save RUN-NONINTERACTIVE image drawable thefilename thefilename)
        ;saves/overwrites the image stored in "thefilename".
        ;BE CAREFUL WITH THIS! You may lose your original images!
        ;If you want to save files with different names, maybe you will
        ;find a solution in
        ;http://www.gimptalk.com/index.php?/topic/34672-script-fu-how-to-increment-filenames
        ;and http://it-nonwhizzos.blogspot.com.es/2014/10/gimp-script-scheme-to-scale-multiple.html
        (gimp-image-delete image)
        ;close the image
      )
      (set! thefiles (cdr thefiles))
      ;modifies the content of "thefiles", giving it the same list as
      ;before, but removing the first item (the previously first image of
      ;the list)
    )
    ;here the loop is closed and returns back to checking if "thefiles"
    ;is not an empty list
  )
)
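As a side note, once the .scm file is installed, the script does not have to be run from the menu: GIMP's batch mode can call any registered script-fu procedure directly from the command line. The glob and the parameter values below are only placeholders; the call signature is the one registered above (fileglob, amount, radius, luminance):

```shell
# -i runs without a GUI; each -b runs one batch command; the last -b quits GIMP
gimp -i -b '(script-fu-wavelets-batch "~/Images/*.tif" 0.2 0.6 1)' -b '(gimp-quit 0)'
```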