Questions regarding multimodal inversion

Hi, first of all, thank you for sharing your code. I have run the examples, gone a bit through the code, and came up with a few questions: Is the multi-mode inversion implemented yet? The "inv_param" class gets a "mode" argument whose default is 0 and which is apparently never used in "cpu_iter_inversion". Or do I have to invert the modes separately while specifying the argument (if I have multiple modes) and then somehow merge the respective velocity profiles afterwards? I have attached a figure showing exemplary modes that I need to invert. They largely overlap in period. Will this be a problem for the inversion, as I have to specify a "fundamental range" for the initialization of the model? Is it possible to retrieve the predicted dispersion curve that is used to compute the loss? This is probably a bug: when I use none of the initialization methods and just define my own starting model parameters (thickness, vp, vs, rho), I end up with a NameError ("The initialize method can only select from [Brocher,Constant]") in cpu_iter_inversion. Lastly, is there a paper with a more detailed description of this method, and which paper do I cite when I use your code? Thank you!

Thanks for your question. (1) The misfit function we use is the "determinant misfit", which does not need to separate the different modes of the dispersion curve (more information about this misfit function can be found at doi: 10.1111/j.1365-246X.2010.04703.x). The "mode" option is only used to initialize your velocity model (the empirical formulas are usually only suitable for fundamental-mode dispersion curves). (2) I apologize for this problem; I've been working on it recently. The correct input to the program should contain three columns (frequency, phase velocity, and mode order), but right now it doesn't do a very good job of initializing multi-order inputs. I'll be working on an updated version for easier initialization of the model.
(3) It is possible to retrieve the dispersion curves corresponding to the inversion results, just by adding a bisection-search module. One of my recent works also combines a traditional loss function with the determinant misfit, which likewise requires extracting the dispersion curve of the inversion result. (4) Once you use your own defined model parameters, you need to specify the initialization method as "Constant" to determine how vp/rho are converted during the update process (these two parameters are shown to be insensitive to changes in the dispersion curve). (5) Our article is under review at Geophysical Journal International and I will be submitting an arXiv version soon, thanks for your comments!

Thank you for the quick response and the reference. I bypassed the initialization problem by feeding a second "tvs_obs" to the function containing only the fundamental mode. The multimodal inversion seems to work fine. Regarding (3): when I compute the determinant matrix for the inversion result I retrieve the following image: I assume the predicted phase velocities for each frequency are at the zero-crossings, such that I can extract them by looking for the index where the amplitude is (close to) zero? I would also like to use the Monte Carlo sampling approach. Can I use "cpu_multi_inversion" here (with zero horizontal damping?) to invert from many starting models simultaneously for a single location? Could you explain how the input data for this case should look in terms of dimensions? It says "# observed dispersion data [[t,v],[t,v]] [station, pvs_num, [t,v]]", but I guess it's not a list of lists that is requested here. Maybe you could post an example. Also, it appears that the init_model_MonteCarlo instance cannot be passed to the inversion directly, as it does not contain an init_model instance with all the Monte Carlo samples; not sure if this is a bug or if I am doing something wrong here. Thanks!
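The zero-crossing reading described above can be sketched as a simple picker. This is my own illustration, not this package's API; the function and variable names are made up, and a determinant matrix of shape (n_freq, n_vel) evaluated on a velocity grid is assumed:

```python
import numpy as np

# Hypothetical sketch: pick, per frequency, the lowest velocity at which
# the determinant changes sign, refining the crossing with linear
# interpolation between the two bracketing grid points.
def extract_dispersion(det_matrix, v_grid):
    """det_matrix: (n_freq, n_vel) determinant values on velocity grid v_grid."""
    picks = np.full(det_matrix.shape[0], np.nan)
    for i, row in enumerate(det_matrix):
        crossings = np.where(np.diff(np.sign(row)) != 0)[0]
        if crossings.size:
            j = crossings[0]
            x0, x1, y0, y1 = v_grid[j], v_grid[j + 1], row[j], row[j + 1]
            picks[i] = x0 - y0 * (x1 - x0) / (y1 - y0)
    return picks
```

A bisection refinement, as the author suggests, would replace the linear interpolation step with repeated forward evaluations between the bracketing velocities.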
Thanks for your question, I have recently tweaked the program: (1) the input data supports three dimensions, [tlist, phase velocity, dispersion order], but note that the empirical method of initializing our model only supports 0-order dispersion curves. (2) The CPU and GPU runs of the iterative inversion process are fused together; you choose between them by simply indicating the DEVICE. (3) I added a forward module, which can help you directly compute the surface-wave dispersion curves for a given model. Some examples were added to the folder. (4) Zero horizontal damping is mainly used for 2D linear-array inversion to constrain the velocity model at different locations in the lateral direction; only vertical damping is used for 1D inversion. (5) The initial model of the Monte Carlo method is 2D data: the model is [initial number, Vp/Vs/rho/h], and the observation data is 3D data [initial number, disp_t, disp_v].

Hi, thanks for the update and congrats on the nice paper! I am back working on the project where I would like to use your code and have more questions: the data I am using is offshore, so I need to include a water layer (80 m). I tried to just add a water layer as the uppermost layer of the initial models (thickness=0.08, vs=0.0, vp=1.5 and rho=1.0). When I run the inversion now, the llw flag is activated and I get dimension mismatches in "_cps/_surf96_vectorAll_gpu.py" in the dltar4_vector function at line 386, "xka = omega / alpha[0]". If I debug that one, I get another at line 392, "p = ra * dpth", and afterwards one in the "var_vector" function at lines 393/394. Hence there seems to be a systematic dimension mismatch; it looks like those functions are not adapted to handle multiple starting models. Without adding the water layer, and hence with llw=-1, the inversion works (multi-inversion on GPU), however the results seem poorly determined.
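The Monte Carlo input dimensions described in (5) can be sketched with NumPy. Every variable name below is mine for illustration, not the package's API; the layer count and dispersion-curve values are made up:

```python
import numpy as np

# Sketch of the array shapes described in (5).
n_init, n_layer, n_disp = 50, 10, 40

# Initial models are 2D: [initial number, Vp/Vs/rho/h per layer].
models = np.zeros((n_init, 4 * n_layer))

# Observations are 3D: [initial number, disp_t, disp_v] -- here the same
# observed curve is repeated for every starting model.
t = np.linspace(0.1, 4.0, n_disp)   # periods (s)
v = np.full(n_disp, 2.5)            # phase velocities (km/s)
obs = np.broadcast_to(np.column_stack([t, v]), (n_init, n_disp, 2)).copy()
```

The key point is that the first axis of both arrays indexes the starting model, so a single observed curve must be tiled across that axis.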
The answer is Python. The question is: what language should we use to get an introduction to DevOps development? I believe that's a pretty safe answer, but if you don't agree, let me see if I can win you over with the following points.

DevOps practices run better on Linux. I'm not sure if I have to back that up, or if the conjecture alone is enough. The development landscape today, for quite a while now, and probably for quite a while to come, has felt better to me on Linux. And what language comes in the box with Linux? That's right, bash.

Let's consider bash for a sec. You can do just about anything you want with it. You can even write a web framework, such as the industry-standard go-to: Bash on Balls. Here's an excerpt from the readme: "Because, you know, we can." A ringing endorsement.

Bash, in actual seriousness, is a good language. It has a massive body of use, extensive support, and it certainly is going to be in use for generations to come. Bash is also very composable. Composability, the quality of a language that tells you how easy it is to assemble its parts, is very important for DevOps. There are many moving parts in a typical enterprise-level infrastructure, and the easier it is to move them around at will, the fewer headaches you're likely to experience. Bash also carries the advantage of knowledge and muscle memory. Every sysadmin that I've encountered knows their way around bash.

From a developer-first perspective, which I admittedly have, bash doesn't lend itself well to learning the fundamentals of modern-day software practices. Inputs and outputs, data structures, and the treatment of 'entities' are where the bash argument falls flat for me. Python is almost as baked into Linux as bash is, but has the feature set of a productivity-friendly programming language. I believe we should use the lesson in composability as a reminder: our component pieces of DevOps glue should be small and abstract, if not generic.
We may have a tough time convincing our respective sysadmins to learn Python. Learning Python as a programming language is different from learning bash as a shell language, and can require a paradigm shift. It's worth it, though, for all 'Ops' techs to make that leap, just as it is worth it for the typical sheltered developer to get their hands dirty with load balancers and virtual machine configuration.

AWS, Azure, and Google Cloud all have Python SDKs available on pip, the Python package repository. For those not in the clouds, VMware even has a Python SDK. The other frameworks usually featured are .NET, Java, PHP, and Ruby. Like I laid out in the bash example, we want a language that's easy to learn, usable in small doses, and very compositional. .NET and Java, in my opinion, fall short because they are both a little too strict. They will help keep you in line when it comes to objects, but infrastructure representations can be very flexible: if you can rapidly gain or lose Configuration Items (CIs), you need an implementation that's equally flexible. PHP and Ruby, for me, are just okay, even as developer languages. I tend to group them into the realm of esoterica for the uses that DevOps requires. I admittedly have less experience with them, but this use case doesn't seem to be their strength.

Python, on the other hand, has some very interoperable tendencies, particularly with data structures. I mentioned that they must be flexible, and I therefore recommend sticking to dicts and lists for all of your models, and JSON for the notation. JSON and Python work very fluidly together, as nearly any JSON data structure can be pasted verbatim into Python code (the exceptions being the literals true, false, and null). The data that gets assigned becomes a nest of Python dicts and lists. Additionally, in the event that you need to step out of SDK functionality and make use of a REST service, you are already writing in JSON notation.
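As a quick illustration of how fluidly the two interoperate, a JSON document becomes nested dicts and lists after a single json.loads call (the payload below is made up for the example):

```python
import json

# JSON object literals map directly onto Python dicts and lists; the only
# translation happening here is true/false/null -> True/False/None.
payload = json.loads("""
{
  "service": "web-01",
  "healthy": true,
  "backends": ["10.0.0.5", "10.0.0.6"],
  "weight": null
}
""")

print(payload["backends"][0])  # nested access mirrors the JSON shape
```

Going the other way, json.dumps turns the same nest of dicts and lists back into a body you can hand straight to a REST service.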
Python structures and JSON are declarative, but also very simple, only making use of squigglies, squares, commas, colons, and quotes. An extra point in the easy-to-learn column.

Ansible is written in Python on the Linux platform. Ansible is a wonderful tool to learn for resource provisioning. It's agentless, thereby using a more flow-friendly implementation, and as such makes use of ssh and Python on the client (which is actually a server) side. So you're running Python to run Python, and you can program in Python that gets run by Python. All learning done in this space is therefore cumulative.

Jinja2 is a robust and widely used template language. Heck, it's even used by Ansible. Templates are useful for a wide range of things, from your first configuration settings to, well, your other three hundred configuration settings.

Python comes with opinions on how to do things. It has its own Zen. Python even has its own nomenclature for practitioners (Pythoneers) and quality (Pythonic). Never mind, ignore the last point. Pythonista kool-aid makes my eyes roll. The Zen's good though... read the Zen.

Python 2.x is typically executed with python, and 3.x with python3 (or the py launcher on Windows). This turns out to be a useful distinction in practice, as you can maintain tools in old versions without accidentally executing them under the wrong interpreter.

But after all this, I encourage you to have an opinion as well. Please try it out. Write some deployment glue for a product, no matter how small, and let me know in the comments if Python was right for you. This blog is being deployed with a combination of Docker and Python. The conversion from the old deployment pipeline was very simple, and the process is fully automated with Travis CI. I wish you the same levels of success.
from rest_framework import serializers
from rest_framework.validators import UniqueValidator
from rest_framework import exceptions as rest_exception

from ..models import StudentClass, Student


class AddStudentSerializer(serializers.ModelSerializer):
    firstName = serializers.CharField(required=True)
    lastName = serializers.CharField(required=True)
    age = serializers.IntegerField(required=True)
    # Validators should be a list, not a set literal.
    regNumber = serializers.CharField(required=True, validators=[
        UniqueValidator(
            queryset=Student.objects.all(),
            message="Student's registration number should be unique",
        )
    ])
    className = serializers.PrimaryKeyRelatedField(
        queryset=StudentClass.objects.all())

    class Meta:
        model = Student
        fields = ['firstName', 'lastName', 'age', 'regNumber', 'className']

    @staticmethod
    def check_if_student_exists(regNum):
        try:
            searched_student = Student.objects.get(regNumber=regNum)
        except Student.DoesNotExist:
            raise rest_exception.NotFound({
                "message": "Student with RegNo:{regNumber} was not found".format(
                    regNumber=regNum)
            })
        return searched_student

    def create(self, data):
        new_student = Student(**data)
        new_student.save()
        return new_student

    def update(self, instance, edited_student_data):
        for key, value in edited_student_data.items():
            setattr(instance, key, value)
        instance.save()
        return instance


class ViewStudentsSerializer(serializers.ModelSerializer):
    class Meta:
        model = Student
        fields = '__all__'

    @staticmethod
    def students_of_class(class_name):
        class_id = StudentClass.objects.get(className=class_name)
        all_students = Student.objects.filter(className=class_id.id)
        return all_students.values('firstName', 'lastName', 'age', 'regNumber')
Are you getting ready for the Microsoft MB-820 exam and want some top tips for success? You're in the right place! We'll give you practical strategies to help you ace the exam with confidence. Whether you're experienced or new to Microsoft certifications, these tips will help you prepare for the exam and feel confident on test day. Let's get started on the path to success!

The MB-820 exam focuses on Microsoft Dynamics 365 Business Central. It covers skills in designing solutions, implementing applications, and managing finance and operations. Candidates are also tested on integrating Business Central with other applications and services, as well as automation, reporting, and security. Understanding these skills is important for exam preparation and career success. To develop them, candidates can access study materials including official Microsoft training courses, practice tests, and online resources like forums and user communities. By using these resources, candidates can confidently approach the MB-820 exam and demonstrate their expertise in Microsoft Dynamics 365 Business Central.

The MB-820 exam contains several types of questions. To prepare for them, candidates can work through official training materials and practice tests, and approach each question methodically on exam day. Doing so helps candidates feel more confident and prepared for the exam.

The Microsoft MB-820 exam lasts 120 minutes, and the number of questions can vary. Your score depends on how many questions you answer correctly, and each question has a specific weight. The exam has sections of varying difficulty, and each section contributes differently to your score. It's important to allocate enough time to each section because some carry more weight than others. There are also rules about the exam duration and scoring system; for example, once you move to the next section, you can't go back. Understanding these rules helps you manage your time well and get the highest possible score on the MB-820 exam.
Microsoft Dynamics 365 Business Central Essentials includes financial management, sales, service management, human resources, and project management. These components help businesses become more efficient by streamlining processes and providing real-time visibility into various operations. Becoming certified in Microsoft Dynamics 365 Business Central Essentials can benefit professionals by increasing their credibility and demonstrating expertise in addressing business challenges using the platform. This certification can open up opportunities for career advancement and higher salaries, and provide professionals with the knowledge and skills needed to implement and support Microsoft Dynamics 365 Business Central Essentials.

When preparing for the Microsoft MB-820 exam, it's important to understand the different certification pathways. These include fundamentals, associate, and expert-level certifications, each catering to different skill sets and career goals. Candidates can also choose specializations within these levels to focus on specific areas of interest. To learn about these pathways, individuals can visit the official Microsoft certification website, which provides detailed information about each certification, including required exams, skills measured, and learning resources. Engaging with certified professionals in online forums and communities can also offer valuable insights, and study guides and practice tests can help assess one's readiness for the exam. By using these resources, candidates can gain a comprehensive understanding of the certification pathways for the MB-820 exam, helping them make informed decisions about their certification journey.

As for structured training: Readynez offers a 5-day MB-820 Microsoft Dynamics 365 Business Central Developer Course and Certification Program, providing you with all the learning and support you need to successfully prepare for the exam and certification.
The MB-820 Microsoft Dynamics 365 Business Central Developer course, and all our other Microsoft courses, are also included in our unique Unlimited Microsoft Training offer, where you can attend the Microsoft Dynamics 365 Business Central Developer course and 60+ other Microsoft courses for just €199 per month, the most flexible and affordable way to get your Microsoft certifications. Please reach out to us with any questions, or if you would like a chat about your opportunities with the Microsoft Dynamics 365 Business Central Developer certification and how best to achieve it.

Top tips for success in Microsoft MB-820 exam prep include practicing with real-world scenarios, utilizing study materials like official Microsoft documentation, and joining online study groups for collaboration and knowledge sharing. To effectively prepare, you can utilize online resources such as practice tests, study guides, and virtual labs; joining study groups and seeking guidance from experienced professionals can further enhance your preparation. Recommended study materials include Microsoft Learn modules, official practice tests, and exam guides, with additional resources like online courses and study groups also being beneficial. Common pitfalls to avoid include memorizing answers instead of understanding concepts, neglecting hands-on practice with Microsoft Dynamics 365 Business Central, and not utilizing available resources like study guides and practice exams. Specific strategies for time management during exam prep include creating a study schedule, setting specific time limits for practice exams, and focusing on high-priority topics first; for example, allocate two hours to practice exams and prioritize the difficult concepts. Get unlimited access to all the live, instructor-led Microsoft courses you want, all for the price of less than one course.
I'm interested in working on the GPGPU project. I've done GPGPU work before, which you can see here: <https://github.com/vazgriz/VkColors>. That program is written using the Vulkan API, though I have some other experience as well. My questions are: Can the example programs have some kind of graphical output? Will I be allowed to use the Beagle Board's video capabilities?

Can you clarify what you mean by this? GLES uses the video capability? Are you referring to sample programs that show the output on a screen? When I posted the idea, what I had in mind was an example to show how the GLES hardware can be used in a headless case for computation. Depending on the version of the GLES drivers, integration with X11 may not be working, so the lowest common denominator is to use direct-output EGL and render to an "off-screen" texture. The off-screen texture part is nice to have but can be worked around in HW (i.e. it can be made invisible if the video hw is muxed out). Consider a possible flow:

- 2D data is captured (say from a USB camera)
- The 2D data is loaded as a texture
- Convolve it with a kernel
- The 2D data is retrieved and written out

Perhaps another variant is 1D data going through a similar flow. It would be nice to be able to repeat the load-data-to-texture, convolve, and retrieve steps in a loop for benchmarking.

Do Beagle Boards support Vulkan? No. The drivers are limited to GLES 2.

Additionally, I don't think the suggested tasks would take an entire summer to write. Should I also include other GPGPU tasks in my proposal? I suggest addressing these points and possibly others:

- The value of being able to leverage the GPU even with the limited proprietary drivers (GLES 2 support only).
- How do you plan to deal with getting the GLES drivers? Please search the Beagle mailing list for reports of various issues.
- Perhaps identifying the threshold of processing needed to make the GPU useful for computation.
Put another way, the transfers to and from the GPU are not cheap, so the GPU needs to do enough work to make offloading worthwhile compared to other accelerators on the BeagleBone. The threshold may be expressed in terms of any combination of data size, computation, and perhaps other factors. At the end of the summer, this shouldn't just be a dump of various shader codes; it should also show where this approach is useful. I monitor the #beagle-gsoc channel. Discussions there may be helpful.
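The break-even point described above can be sketched with a toy cost model. All rates and numbers below are made up for illustration; they are not BeagleBone measurements, and a real threshold would come from benchmarking the actual upload/convolve/readback loop:

```python
# Toy break-even model: the GPU must amortize a fixed setup cost and the
# round-trip transfer before its higher throughput pays off.
def gpu_time(n_bytes, flops, bandwidth=100e6, gpu_rate=5e9, setup=2e-3):
    """Seconds to upload, compute, and read back (hypothetical rates)."""
    return setup + 2 * n_bytes / bandwidth + flops / gpu_rate

def cpu_time(flops, cpu_rate=1e9):
    """Seconds for the same work on the CPU (hypothetical rate)."""
    return flops / cpu_rate

def gpu_wins(n_bytes, flops):
    """True when offloading beats staying on the CPU under this model."""
    return gpu_time(n_bytes, flops) < cpu_time(flops)

# A light convolution loses to transfer overhead; a heavy one wins.
print(gpu_wins(1e6, 1e6), gpu_wins(1e6, 1e9))  # False True
```

The useful output of such a model is the crossover curve in (data size, computation) space, which is exactly the "threshold" the mentors ask the proposal to identify.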
Final Activity Report Summary - DEFINABLE FORCING (Theory and application of definable forcing)

Background: Set theory is a mathematical discipline that investigates infinite sets in a rather general way. A typical set-theoretical question is CH, Hilbert's first problem: does every infinite set of reals have either the size of the reals or of the integers? Mathematical logic is the part of mathematics that investigates mathematical reasoning itself as its object, so in a way it is meta-mathematics. Concepts such as proof or algorithm are investigated, and it turns out that there are very satisfying mathematical definitions for these a priori rather intuitive concepts. This allows formulating questions such as: Is a certain theorem decidable? Is a function computable? It turns out that set theory is very close to logic (and in fact it is usually considered to be one of the main disciplines of mathematical logic), for several reasons, among them: (1) Several natural questions in set theory turn out to be undecidable, for example CH. (2) Set theory actually provides the methods to prove that a wide array of questions from many mathematical fields are undecidable. (3) Set theory is one (and currently the most common) universal foundation for mathematics: a theorem is generally considered to be proven if it is (or at least could theoretically be) proven in a specific axiomatisation of set theory, ZFC.

Forcing is a central method of set theory: starting with a mathematical universe (i.e. a structure containing the natural numbers, the reals, to each set its powerset, etc.; a bit more formally, a wellfounded ZFC model) we can add a new "generic" object for a particular partial order to get a new mathematical universe, and we force this new universe to satisfy certain properties (e.g. the negation of CH). This way we can prove that the negation of CH is possible in a mathematical universe, i.e. that CH is not provable.

The Marie Curie project contributed to the development of the theory of forcing, in particular of definable proper forcing. We proved various preservation theorems for countable support iterations (nep forcings that do not make old positive Borel sets null do not make any old positive set null, which is iterable); we showed examples of limitations of such theorems (a Sacks real can appear at stage omega in a proper countable support iteration that does not add any reals at finite stages); we developed new constructions for creature forcings to get large continuum, and applied them to show that one can simultaneously distinguish several well-known cardinal characteristics (and also get a perfect set of different simple characteristics defined with a real parameter). We worked on the theory of non-elementary proper forcing, and also investigated the pressing-down game (related to precipitous ideals and large cardinals).
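For reference, the continuum hypothesis discussed above can be written formally as the statement that no set of reals has cardinality strictly between that of the integers and that of the reals:

```latex
\mathrm{CH}:\qquad
\neg\,\exists S \subseteq \mathbb{R}\;
  \bigl(\aleph_0 < |S| < 2^{\aleph_0}\bigr)
\qquad\text{equivalently}\qquad
2^{\aleph_0} = \aleph_1
```

Gödel and Cohen together showed that neither CH nor its negation is provable from ZFC; Cohen's half of the result is the forcing argument sketched above.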
The POST method can be used to send ASCII as well as binary data. The data sent by the POST method goes through the HTTP message body, so its security depends on the HTTP protocol (use HTTPS to keep it confidential).

What are $_GET and $_POST in PHP? $_GET and $_POST are array variables of PHP which are used to read data submitted by an HTML form using the get and post method, respectively.

What are the GET and POST methods in PHP? Get and Post are the HTTP request methods used inside the <form> tag to send form data to the server. The HTTP protocol enables communication between the client and the server, where a browser can be the client, and an application running on a computer system that hosts your website can be the server.

Which of the following is used to get information sent via the GET method in PHP? The PHP $_GET associative array is used to access all the information sent by the GET method.

What is the use of the GET and POST methods?
- GET is used to request data from a specified resource, and is one of the most common HTTP methods.
- POST is used to send data to a server to create/update a resource, and is also one of the most common HTTP methods.
- PUT likewise sends data to a server to create/update a resource, replacing the target resource with the request payload.

How do you send data with the GET method in PHP? GET can't be used to send binary data, like images or Word documents, to the server. The data sent by the GET method can be accessed using the QUERY_STRING environment variable, and PHP provides the $_GET associative array to access all the information sent using GET.

What are PHP methods? Methods are used to perform actions. In object-oriented programming in PHP, methods are functions inside classes. Their declaration and behavior are almost the same as for normal functions, except for their special uses inside the class. Let's recall the role of a function.

How is data sent in the POST method? In the POST method, the data is sent to the server as a package in a separate communication with the processing script.
Data sent through the POST method will not be visible in the URL; the query string (name/weight) is sent in the HTTP message body of a POST request.

Can we use the POST method to get data? Use POST when you need the server to control URL generation for your resources. POST parameters do not remain in the browser history, and you can effortlessly transmit a large amount of data using POST.

How do you send data with the GET method? To use the GET method to send data in jQuery Ajax:
- url - a string containing the URL to which the request is sent.
- data - an optional parameter representing key/value pairs that will be sent to the server.
- callback - an optional parameter representing a function to be executed whenever the data is loaded successfully.

Which variable is used to collect form data sent with both the GET and POST methods? In PHP, the global variable $_REQUEST is used to collect data after submitting an HTML form.

What are HTTP methods? The primary, most commonly used HTTP methods are POST, GET, PUT, PATCH, and DELETE. These correspond to create, read, update (PUT/PATCH), and delete, the CRUD operations.

What is method POST in HTML? The method attribute specifies how to send form data (the form data is sent to the page specified in the action attribute). The form data can be sent as URL variables (with method="get") or as an HTTP POST transaction (with method="post"). Note on GET: it appends the form data to the URL in name/value pairs.

Can we use the POST method to update data? Can I use POST instead of the PUT method? Yes, you can.

Which method is safer, GET or POST? GET is less secure than POST because the sent data is part of the URL. POST is a little safer because its parameters are stored neither in the browser history nor in the web server logs.

What are the GET and POST methods in C#? Get and Post are methods used to send data to the server.
Both methods are used in form-data handling, but they differ in how they work. It's important to know which method you are using.
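The core difference, where the form data travels, can be demonstrated with a few lines of standard-library Python (illustrative rather than PHP, but the HTTP mechanics are identical; the URL and field names are made up):

```python
from urllib.parse import urlencode
from urllib.request import Request

# GET carries the pairs in the URL's query string; POST carries the same
# encoding in the message body instead.
fields = {"name": "widget", "weight": "3"}

get_url = "http://example.com/form?" + urlencode(fields)

post_req = Request(
    "http://example.com/form",
    data=urlencode(fields).encode(),  # goes in the body, not the URL
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)

print(get_url)        # data visible in the URL
print(post_req.data)  # data hidden in the message body
```

On the PHP side, the first request would populate $_GET and the second $_POST, with $_REQUEST seeing both.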
Cross-compile the firmware for your router from an Ubuntu machine

Basic knowledge of the procedure for compiling an OpenWrt firmware and flashing your router is a prerequisite for this task. Make sure that you have the needed software installed on your system:

sudo apt-get update
sudo apt-get install build-essential subversion git-core libncurses5-dev gawk wget gettext

First, we download OpenWrt. The version we tested was svn revision 39211.

mkdir ~/openwrt
cd ~/openwrt
svn -r 39211 co svn://svn.openwrt.org/openwrt/trunk workdir
cd workdir
# Revision 39211 needs patches:
# http://patchwork.openwrt.org/patch/4588/
wget -O - http://patchwork.openwrt.org/patch/4588/raw | patch -p1
./scripts/feeds update -a
./scripts/feeds install -a

Now download and apply a patch to the OpenWrt configuration files in order to make its build system download and build netsukuku.

wget -O - http://download.savannah.gnu.org/releases/netsukuku/openwrt-39211-netsukuku-1.0.patch 2>/dev/null | patch -p0

We prepare to compile OpenWrt with Netsukuku. Select your Target, Subtarget and Profile (e.g. Atheros AR7xxx/AR9xxx - Generic - TP-LINK TL-WR1043N/ND). We have to produce the binaries with eglibc instead of uClibc: select "Advanced configuration options", then "Toolchain Options", and as C Library implementation choose eglibc 2.15. Inside the category Network, select the package netsukuku. Optionally, activate the OpenWrt web interface; you find it inside LuCI, Collections, luci.

ionice -c 3 nice -n 20 make

After approx. 90 minutes (plus download time of various packages) you will find your new firmware in ./bin/ar71xx-eglibc. Flash your router, then connect to it and configure the usual bits as you like: the password for root, the IP address of your LAN, and the wireless settings (SSID, channel, security, ...). For these steps refer to the OpenWrt documentation that you can find online. Connect to the router as root.
Edit the configuration file /etc/nsswitch.conf and place "andna [NOTFOUND=return]" before "dns" in the line for the database "hosts". Configure dnsmasq so that it listens on port 53, but only for its real IP, e.g. 192.168.2.1. The requests are to be forwarded to 127.0.0.1 on port 53, where we will listen with dns-to-andna. The file has to include these lines:

listen-address=192.168.2.1
bind-interfaces
no-resolv
server=127.0.0.1
option rebind_protection '0'

Configure /etc/resolv.conf so that it points to the real DNS server, e.g. 188.8.131.52. On OpenWrt, /etc/resolv.conf is usually a symlink. Make it persistent and modify it:

cd /etc
cp resolv.conf real.resolv.conf
rm resolv.conf
mv real.resolv.conf resolv.conf
vi resolv.conf

Configure /etc/ntkresolv/ntkresolv.ini so that DNS_TO_ANDNA listens on 127.0.0.1. The file has to include the line:

Restart the services dnsmasq and dns-to-andna:

/etc/init.d/dnsmasq restart
/etc/init.d/dns-to-andna restart

Start / stop the daemon

Open a terminal and connect to the router as root. You have to know which network cards (NICs) you want to handle with netsukuku; for OpenWrt this is usually br-lan. Find the name of the NICs, then launch the daemon with the ones you want to use:

ntkd -i br-lan

The daemon will not exit, so you have to leave the terminal open. When you want to stop the daemon and exit netsukuku, press CTRL-C and it will remove all the configurations that it had set up. Now you should be able to connect even an unsupported device (Android, Mac, Windows) and reach netsukuku nodes.
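The nsswitch.conf edit described above can be rehearsed safely on a throwaway copy first; the file path and the exact sed pattern below are illustrative, so adapt them to the real hosts line before touching /etc/nsswitch.conf (and keep a backup):

```shell
# Rehearse the edit on a sample file, not the live config.
printf 'hosts: files dns\n' > /tmp/nsswitch.demo
sed -i 's/dns/andna [NOTFOUND=return] dns/' /tmp/nsswitch.demo
cat /tmp/nsswitch.demo
# hosts: files andna [NOTFOUND=return] dns
```

Once the pattern produces the expected line on the copy, apply the same substitution to /etc/nsswitch.conf on the router.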
A small, composable Philips Hue interface for Node and the browser. Under development, unstable API.

Hue's API is rather unintuitive, and has more baggage than probably needed. For example: there are three ways to set a lamp color:

- XY - Coordinates in CIE color space.
- Temperature - The degrees in Mired color space.
- HSB - Hue, saturation, and brightness.

The most intuitive of the three is HSB (aka HSL, which you've probably used in CSS), but the API implements it differently. Brightness goes from 1-254, saturation from 0-254, and hue from 0-65535. That's a far cry from hsl(240, 40%, 45%).

This library handles some of the rougher edges, using TinyColor to collapse any CSS color expression into an API-compliant HSB expression.

```js
light.state.color('blue')
light.state.color('#00F')
light.state.color('rgb(0, 0, 255)')
// Supports alpha channels, too
light.state.color('#0000FFFF')
```

Note: due to hardware limitations, colors may seem a bit off. Try yellow and you'll see what I mean.

Colors are the most obvious, but not the only way Hue's API can be improved...

- Transition time, measured in seconds/10 (not milliseconds)
- Groups (which require an entirely new API)
- Scenes (same as groups)

Illumination is created to address these problems, simplifying the hue interface. It introduces four simpler concepts:

- States (such as color, alerts/effects, transitions...)
- Lights (hold state and light information)
- Presets (groupings of lights and states)
- Bridges (for applying presets)

Illumination is under development, and APIs are subject to change at little more than a whim. Because of that, I haven't invested much into documenting what they do. If you want to try it out now, I suggest poking around in the source code - I use JSDoc liberally.

You can install illumination with npm:

$ npm install illumination --save

Then import it (with ES6 modules FTW!)

State is an interface over hue light state, but it doesn't need to be tied to a light.
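As an aside on those API ranges: mapping CSS-style HSL numbers onto them is simple arithmetic. This is only a sketch - `cssToHueRanges` is a made-up name, and a faithful HSL-to-HSB conversion (which TinyColor performs for the library) involves more than range scaling:

```js
// Rescale CSS-style HSL inputs onto the Hue API's ranges:
// hue 0-65535, sat 0-254, bri 1-254.
// Note: this only rescales numbers; it does not convert
// lightness (HSL) into brightness (HSB).
function cssToHueRanges(hDeg, sPct, lPct) {
  return {
    hue: Math.round((hDeg / 360) * 65535),
    sat: Math.round((sPct / 100) * 254),
    bri: Math.max(1, Math.round((lPct / 100) * 254)),
  };
}

cssToHueRanges(240, 40, 45);
// → { hue: 43690, sat: 102, bri: 114 }
```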
```js
const state = new State()
```

From there, you can start editing the light state. Once you're ready, you add it to a Preset (more on this soon) and submit it to the bridge.

If you need to import an existing hue state, you can pass it directly to the constructor.

```js
const state = new State({
  hue: 30000,
  sat: 251,
  bri: 101
})
```

Since the API is changing often, I'm just gonna run through these really quick. Look in the source code for more details. State has methods to blink the light, turn it on and off, start and stop a colorloop, set the hue in degrees (0-360), and set the saturation and brightness as percentages. Changing the color looks like this:

```js
state.color('blue')
state.color('#00f')
state.color('rgb(0, 0, 255)')
```

This class has no methods. It could be implemented as a function, but, you know, symmetry. Give Light an object (like one returned from GET /api/lights/1) and it copies the metadata onto itself, but upgrades .state into a State instance. You'll be able to access the state object from the Light instance like so:

```js
const light = new Light({
  state: { hue: 10000 },
  uniqueid: 'bob'
})
light.state // a State instance
```

Like I said, no methods though. Maybe in the future.

This is where things get fun! Presets are collections of light states, most comparable to a hue scene (but lightweight). A preset might be what lights are on at a time, or what colors a group of lights are set to. You can manipulate entire groups at once using some convenience methods. For example: you might want to create a general preset containing your living room lights, then five more presets that build off it and change the colors and transition times.

```js
// Either add them all at once...
const preset = new Preset({
  1: { name: 'Living Room Lamp' },
  2: { name: 'Office Lamp' }
})

// Or add them later.
const light = new Light({ name: 'Porch Light' })
```

Presets also have convenience methods to change all colors at once (preset.color('blue')), set the transition time, list the light IDs (like Object.keys), add a new light to the preset, and iterate over each light.

Creates a bridge interface that understands presets and has a reasonable api.
No bridge discovery mechanism is included, as I think that would be feature bloat and belongs in a separate module. Also, it would prevent illumination from being browser friendly.

```js
// `ip` is the IP address,
// `key` is the API key.
const bridge = new Bridge(ip, key)
```

The Bridge class is fairly new and doesn't have many methods. As needed I'll add some more. It can format an absolute url (which also works with arrays), send requests (resolving to the http response data), and send preset state to the bridge - that last one is probably what you're looking for.

Installing from GitHub

I love all the cool toys ES6/7 brings. For now, that means compiling the project through Babel.

$ git clone https://github.com/PsychoLlama/illumination.git
$ cd illumination
$ npm install
# `npm run dev` if you're editing.
$ npm run babel

To run mocha, do

If you find a problem, lemme know by filing a GitHub issue. This is a hobby project, but I'll try to follow up asap.

You can support this project and my narcissism by starring it 😀
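For illustration, URL formatting of the kind the Bridge section above mentions might look like the sketch below. The function name and shape are hypothetical - the real method lives in the illumination source:

```js
// Format an absolute Hue REST URL from path segments.
// Accepts individual segments or arrays of segments,
// echoing the "also works with arrays" behavior noted above.
function formatUrl(ip, key, ...parts) {
  const segments = parts.flat().map(String);
  return ['http://' + ip, 'api', key, ...segments].join('/');
}

formatUrl('192.168.1.10', 'devkey', 'lights', 1, 'state');
// → 'http://192.168.1.10/api/devkey/lights/1/state'
```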
- Hourly Rate: $24 / Hr
- Total Earned: $16719
- Experience: 7 Years

HTML, CSS, Angular, JSF
Spring, Spring Boot, Hibernate, JPA
Eclipse, NetBeans, Visual Studio, StarUML
LaTeX, Git, Bitbucket, Apache Tomcat

Proposed an approach based on Formal Concept Analysis for placing tenants in a multi-tenant cloud computing environment. Maximized the utilization of physical resources and the number of unused servers in order to reduce energy consumption. Implemented the algorithm in Java.

- Hourly Rate: $22 / Hr
- Total Earned: $21802
- Experience: 8 Years

Welcome to my profile! With 8 years of experience as a Full Stack Developer, I specialize in a wide range of technologies and offer comprehensive solutions for Web, Android, iOS, and Windows platforms. Here's what sets me apart:

⚡️ Web Development Services ⚡️

React: Harness the power of React to build fast, scalable, and interactive web applications. I excel in creating robust front-end interfaces using React's component-based architecture.

Angular TS: Leverage the Angular framework to create dynamic and feature-rich web applications. I possess expertise in Angular TypeScript, delivering seamless user experiences.

Next.js: Enhance your website's performance and SEO capabilities with Next.js. I can create server-side rendered (SSR) applications that load quickly and rank higher in search engines.

Vue.js: Embrace Vue.js for efficient front-end development. I can create responsive and interactive user interfaces using Vue's intuitive syntax and component-based architecture.

Tailwind CSS: Create visually stunning and responsive designs with Tailwind CSS. I am adept at utilizing Tailwind's utility-first approach to build custom and scalable UIs.

⚡️ Mobile Development Services ⚡️

React Native: Build cross-platform mobile applications using React Native. I have expertise in creating native-like experiences for both iOS and Android platforms.
Ionic & Cordova: Leverage the power of Ionic and Cordova to develop hybrid mobile applications. I can create apps that run seamlessly across multiple platforms.

⚡️ Backend Development Services ⚡️

Node.js & Express.js: Develop efficient and scalable backend systems using Node.js and Express.js. I excel in creating RESTful APIs, handling data operations, and implementing authentication mechanisms.

Nest.js: Benefit from the robustness of Nest.js for server-side development. I can create scalable and modular applications using Nest.js's powerful dependency injection and module-based architecture.

Firebase: Leverage Firebase to build real-time and cloud-based applications. I have hands-on experience in implementing Firebase's authentication, database, storage, and cloud messaging features.

⚡️ DevOps & Testing Services ⚡️

AWS & MS Azure: Utilize the capabilities of cloud platforms like AWS and MS Azure to deploy, scale, and manage your applications. I am well-versed in configuring cloud infrastructure and optimizing application performance.

Netlify: Maximize your web application's deployment efficiency with Netlify. I can seamlessly deploy static websites, serverless functions, and continuous integration workflows using Netlify.

Docker: Employ Docker to create lightweight and portable containers for your applications. I possess expertise in containerization, enabling efficient deployment and scalability.

QA & Testing: Ensure the quality and reliability of your software through comprehensive testing. I am proficient in using industry-standard testing tools like Jest, Selenium, and Storybook to conduct thorough quality assurance.

With a solid background as a technical lead and experience in managing complex databases, I am equipped to handle projects of any scale. I am committed to delivering exceptional results, meeting deadlines, and exceeding client expectations.
- Hourly Rate: $22 / Hr
- Total Earned: $15842
- Experience: 7 Years

I have created wireframes, layouts, graphics, and other design elements.

My skills are: Python, Django, CSS, HTML, React, Redux, Mongo, Deployment; Databases - MySQL, MongoDB, PostgreSQL

I was involved in designing and coding user interfaces using C# and VB.Net. I developed all the middleware components, which consisted of all the business logic, using C++. I designed the master page and used themes, skins and a sitemap to make a consistent look throughout the website. I was involved in the development of the web pages according to the specifications using HTML, XML, and XSLT.
Robert Corwin, CFA
Senior Python Developer/Architect | Austin, TX

EVA Capital Management LP - Co-Founder/Portfolio Manager/Senior Data Scientist
Director and Head of Quantitative Research
The Rohatyn Group - Co-Founder/Portfolio Manager/Senior Data Analyst

EVA Capital Management LP (January 2016 - Present)
Served as the Co-CIO of a data-based, quantitative asset management firm and laid the groundwork for company operations, hiring and trading.

- Pitched the company business plan and trading strategy to a large number of potential clients and business partners in order to secure investments in the company and its products.
- Maintained terabytes of data in a SQL database, pulling from it into a C++ trading algorithm which optimized the portfolio, a C# desktop app for data manipulation, and Python, MATLAB and R scripts for statistical analyses.
- Coded the company web site, including an interactive performance visualization tool, using Node.js / Angular / Highcharts.
- Lead researcher and portfolio manager responsible for daily trading and R&D on a systematic EVA (Economic Value Added)-based stock selection strategy, portfolio construction, and statistical/big data/machine learning analyses.
- Managed one analyst

Head of Quantitative Research, EVA Dimensions (January 2007 - December 2015)
Robert was the fifth hire at this financial company startup based around the Economic Value Added (EVA) valuation framework. He was instrumental in growing revenues to over $7M and headcount to over 20.

- Created and marketed equity research reports, data visualizations, and interactive web tools for buy- and sell-side clients.
- Mined proprietary databases and tied statistical observations to non-technical, actionable investment recommendations.
- Used techniques such as regression, cointegration, optimization, clustering, decision trees, etc.
- Research focus included finding areas of under/over-valuation, commenting on trends in market or factor behavior, and custom client projects.
Samples below.

- Lead researcher on many quantitative financial models, including an EVA-based global stock selection system; simulations of ETFs designed to capture premia (beta) on various factors; aggregate DCF analysis; a custom-built portfolio analysis, risk, and attribution system; cost of capital models; and thematic models tying factor returns and exposures to the business and company life-cycles.
- Read academic literature and found practical applications in our products, research, and client work.

The Rohatyn Group (May 2003 - June 2006)
An analyst at a multi-billion dollar emerging markets hedge fund.

- Provided support for an earnings-estimates-based trading model.
- Created a 100GB store of data; researched the underlying data quality and effectiveness of various trading signals/indicators; backtested. Implemented in the master fund at $50M.
- Created a cointegration-based pairs trading model and presented results to the trading desk weekly.

The University of California, Berkeley, Haas School of Business
MFE - Financial Engineering (2002 - 2003)

BSE Chemical Engineering (1994 - 1998)
The Song Surfing Podcast features the best in independent music!

On this episode of Song Surfing we’ll hear music from Brighton in the UK; Bloomington, Indiana; Japan; Munich; Slane, Ireland; and Tallahassee, Tennessee. John also talks about hearing a home-recording legend for the first time. This episode of Song Surfing features the music of Dereck d.a.c and KIANVSLIFE, Amy O, Lanpazie, Love Dancer, Friggsy, and One Million Horses.

Visit the https://songsurfingpodcast.com/episode-15/ (Show Notes Page) for show notes, including links to the artists' sites and the best places to purchase and stream the music featured on this and all episodes of Song Surfing.

Watch John on Studio Live Today’s https://www.youtube.com/channel/UC6BWO4JfxBFSSf41dtF8hqg (“Creator Town Hall”)

https://player.captivate.fm/episode/90e89664-2cc3-44fa-b5b8-eed6567cf568 (John was interviewed on In the Key of Q)

The theme music for this episode is https://wiensolo.bandcamp.com/album/message-from-the-future (“Living in a Fishbowl” by Wien Solo)

The outro music is https://l.facebook.com/l.php?u=https%3A%2F%2Fsong.link%2Fca%2Fi%2F1550227807%3Ffbclid%3DIwAR2CysDJ3sGhbQdDqtevSeAWvN17o7dfo-U6MHenI7LzHCoG1Gy8gMgtulAandh=AT0X9EEeP8CyW1hy_058bFFTqadmBHhxbXsJAu4Wrvp0NO9podlWP9oKOU7ElH8cLX0Jh_qzoX7yL8qvCB33CAl8WLFWySG (Little Pills by Patrick Moon Bird)

Want to help the show?
Rate and review on one (or all) of these sites:

https://podcasts.apple.com/us/podcast/song-surfing/id1549025544 (Apple Podcasts) (scroll to the bottom to find the review link)
https://www.podchaser.com/podcasts/song-surfing-1581825 (Podchaser) (scroll down then click +add a review)
https://castbox.fm/channel/Song-Surfing-id3721681?utm_source=websiteandutm_medium=dlinkandutm_campaign=web_shareandutm_content=Song%20Surfing-CastBox_FM (Castbox) (add a comment instead of a review)
https://podcastaddict.com/podcast/3212595 (Podcast Addict) (click on the reviews tab)

https://song-surfing.captivate.fm/listen (Listen, Follow and Subscribe to Song Surfing)
https://forms.gle/p3ugGg2mBiv1V7jv5 (Join the Song Surfers Mailing List - US listeners get a free sticker!)

Follow Song Surfing on https://www.facebook.com/songsurfingpodcast (Facebook) and https://www.instagram.com/songsurfingpodcast/ (Instagram)

https://forms.gle/casuqyVN8e5RGVd58 (Submit your music to Song Surfing)
https://forms.gle/kNLGHpkNk3wDExUM6 (Spotify playlist)

Song Surfing is part of the https://www.thelincolnlodge.com/podcasts (Live from the Lincoln Lodge Podcast Network). Check out these other podcasts:

https://hot-dish.castos.com/episodes/episode-5-jessica-besser-rosenberg (Hot Dish)
https://pop-of-passion.castos.com/ (Pop of Passion)

Mentioned in this episode: Use our referral link next time you're shopping for plugins at pluginboutique.com
https://pluginboutique.com/?a_aid=songsurfing
namespace TBD.Psi.RosBagStreamReader.Deserializers
{
    using System;
    using Microsoft.Psi;

    public class SensorMsgsPointFieldDeserializer : MsgDeserializer
    {
        public SensorMsgsPointFieldDeserializer()
            : base(typeof((string name, int off, byte dtype, int count)).AssemblyQualifiedName, "sensor_msgs/PointField")
        {
        }

        public static (string name, int off, byte dtype, int count) Deserialize(byte[] data, ref int offset)
        {
            /* The following deserializer extracts the four variables within a sensor_msgs/PointField ROS message
             * and returns them in order within a tuple. The four variables describe the name of the field,
             * the offset from the start of the point struct, the datatype enumeration, and how many
             * elements are in the field, respectively.
             */
            string name = Helper.ReadRosBaseType<string>(data, out offset, offset); // string name
            int off = Helper.ReadRosBaseType<Int32>(data, out offset, offset);      // uint32 offset
            byte dtype = Helper.ReadRosBaseType<Byte>(data, out offset, offset);    // uint8 datatype
            int count = Helper.ReadRosBaseType<Int32>(data, out offset, offset);    // uint32 count
            return (name, off, dtype, count);
        }

        public override T Deserialize<T>(byte[] data, ref Envelope env)
        {
            int offset = 0;
            return (T)(object)Deserialize(data, ref offset);
        }
    }
}
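The same wire format can be sketched outside of the Psi pipeline. Assuming standard ROS1 serialization (little-endian, strings encoded as a uint32 length followed by the raw bytes), a minimal JavaScript reader of one sensor_msgs/PointField record looks like this (`readPointField` is an illustrative name, not part of any library):

```js
// Parse one sensor_msgs/PointField record from a Buffer.
// Layout (ROS1, little-endian): uint32 string length, the name bytes,
// then uint32 offset, uint8 datatype, uint32 count.
function readPointField(buf, offset) {
  const nameLen = buf.readUInt32LE(offset);
  offset += 4;
  const name = buf.toString('utf8', offset, offset + nameLen);
  offset += nameLen;
  const off = buf.readUInt32LE(offset);     // uint32 offset
  const dtype = buf.readUInt8(offset + 4);  // uint8 datatype
  const count = buf.readUInt32LE(offset + 5); // uint32 count
  return { field: { name, off, dtype, count }, next: offset + 9 };
}
```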
Defect: subscription to event defined in generic interface via extension method silently fails to subscribe

Expected
Subscribing to the event should work. Using 'Steps to Reproduce', the generated code should be:

```js
self.Ev = Bridge.fn.combine(self.Ev, $_.Demo.EvExtension.f1);
```

Actual
It silently fails. The generated code is:

```js
self["Demo$IEvGen$1$" + Bridge.getTypeAlias(T) + "$Ev"] = Bridge.fn.combine(self["Demo$IEvGen$1$" + Bridge.getTypeAlias(T) + "$Ev"], $_.Demo.EvExtension.f1);
```

A pull request will be submitted in a while.

Steps To Reproduce (Deck)

```cs
public interface IEvGen<T>
{
    event Action Ev;
}

public class EvGen<T> : IEvGen<T>
{
    public event Action Ev;
    public bool HasListeners { get { return Ev != null; } }
}

public static class EvExtension
{
    public static void AttachViaExtension<T>(this IEvGen<T> self)
    {
        self.Ev += () => {};
    }
}

public class Program
{
    [Ready]
    public static void Main()
    {
        var sut = new EvGen<int>();
        IEvGen<int> sutIface = new EvGen<int>();
        sutIface.AttachViaExtension();
        Console.WriteLine("bug? " + !sut.HasListeners);
    }
}
```

Why is the handler attached to sutIface while you check sut? Maybe you need to use IEvGen<int> sutIface = sut;? Also, this test works fine in 15.4 (current master branch).

Please note that the expected code should use the long alias name ("Demo$IEvGen$1$" + Bridge.getTypeAlias(T) + "$Ev", not just "Ev"), because it is an interface member which can conflict with other interfaces' members. Bridge generates aliases to avoid conflicts. See https://github.com/bridgedotnet/Bridge/issues/1025 for more information.

I will close the issue. Please reopen it if you find another similar issue.

Hi vladsch,
Indeed, during cleanup of my test I made a mistake. The code should be as you wrote: IEvGen<int> sutIface = sut;
Thanks for explaining the use of those aliases.

Just for the record: it is failing in the latest official release 15.3 and also in the latest public master a831f6d. I guess that fix is not yet public?

For me it works fine with the latest master code.
Do you have exactly the same generated JavaScript code if you use the master branch and online deck.net? For me, the following code is generated for AttachViaExtension:

```js
attachViaExtension: function (T, self) {
    self["Demo$IEvGen$1$" + Bridge.getTypeAlias(T) + "$addEv"]($_.Demo.EvExtension.f1);
}
```

I've changed the example to be more user friendly: Deck

As you can see, there it generates the code:

```js
attachViaExtension: function (T, self) {
    self["Demo$IEvGen$1$" + Bridge.getTypeAlias(T) + "$Ev"] = Bridge.fn.combine(self["Demo$IEvGen$1$" + Bridge.getTypeAlias(T) + "$Ev"], $_.Demo.EvExtension.f1);
}
```

The thing is that the code runs BUT the listener is not attached to the event - you don't see the expected "line 2" in the console. It silently fails to subscribe the listener to the event. This is a bug. self.Ev should not be null. I don't dispute what you said about aliases, as you are correct in that aspect.

The original (fixed) example is: Deck

There, the generated code is also:

```js
attachViaExtension: function (T, self) {
    self["Demo$IEvGen$1$" + Bridge.getTypeAlias(T) + "$Ev"] = Bridge.fn.combine(self["Demo$IEvGen$1$" + Bridge.getTypeAlias(T) + "$Ev"], $_.Demo.EvExtension.f1);
}
```

Again, the listener is not attached to the event.

Yes, I see the problem in the Deck-generated script, but Deck uses the last public release, not the latest code from master. In master, the generated code is correct. I tested it and the handler is attached. I will add a unit test for your test case.

I've checked out the latest sources - at the time of writing the latest commit is a831f6d. Is this the latest "master" that you are referring to? In Visual Studio I've cleaned up and then recompiled the whole Bridge.sln solution. Then I've used those release dlls in the test project and compiled our former example: Deck
The code that was generated is:

```js
/**
 * @version 1.0.6151.33205
 * @copyright dominik
 * @compiler Bridge.NET 15.3.0
 */
Bridge.assembly("BridgeNetEval", function ($asm, globals) {
    "use strict";

    Bridge.define("WebClient.EvExtension", {
        statics: {
            attachViaExtension: function (T, self) {
                self["WebClient$IEvGen$1$" + Bridge.getTypeAlias(T) + "$Ev"] = Bridge.fn.combine(self["WebClient$IEvGen$1$" + Bridge.getTypeAlias(T) + "$Ev"], $_.WebClient.EvExtension.f1);
            }
        }
    });

    var $_ = {};

    Bridge.ns("WebClient.EvExtension", $_);

    Bridge.apply($_.WebClient.EvExtension, {
        f1: function () { }
    });

    Bridge.definei("WebClient.IEvGen$1", function (T) { return {
        $kind: "interface"
    }; });

    Bridge.define("WebClient.Program", {
        statics: {
            config: {
                init: function () {
                    Bridge.ready(this.main);
                }
            },
            main: function () {
                var sut = new (WebClient.EvGen$1(System.Int32))();
                var sutIface = sut;
                WebClient.EvExtension.attachViaExtension(System.Int32, sutIface);
                Bridge.Console.log("bug? " + System.Boolean.toString(!sut.getHasListeners()));
            }
        },
        $entryPoint: true
    });

    Bridge.define("WebClient.EvGen$1", function (T) { return {
        inherits: [WebClient.IEvGen$1(T)],
        config: {
            events: {
                Ev: null
            },
            alias: [
                "addEv", "WebClient$IEvGen$1$" + Bridge.getTypeAlias(T) + "$addEv",
                "removeEv", "WebClient$IEvGen$1$" + Bridge.getTypeAlias(T) + "$removeEv"
            ]
        },
        getHasListeners: function () {
            return !Bridge.staticEquals(this.Ev, null);
        }
    }; });
});
```

... yet it still gives me "bug? True", indicating that no listener is attached.

Afterwards I manually updated the generated JS code to have "$addEv" instead of "$Ev", as you wrote above that it fixes the code:

```js
attachViaExtension: function (T, self) {
    self["WebClient$IEvGen$1$" + Bridge.getTypeAlias(T) + "$addEv"] = Bridge.fn.combine(self["WebClient$IEvGen$1$" + Bridge.getTypeAlias(T) + "$addEv"], $_.WebClient.EvExtension.f1);
}
```

...but it didn't help. Either the latest public master doesn't contain the fix OR I'm doing something very very wrong here :/
No Widevine DRM key

Hi, I need help downloading these video files from this website. I can't seem to locate the .mpd file at all. I'd appreciate it if someone can help me out. The links are

I'm pretty new at this and wanted to know if someone could help me download this file:
I've tried to find the .mpd manifest file to then use youtube-dl, but I didn't succeed.

The video extraction went well but unfortunately it has no sound. Any suggestion?

14:03:08.119 File Name: manifest.mpd_20210822140307
14:03:08.131 Save Path: C:\N_m3u8DL\Downloads
14:03:08.298 Start Parsing
14:03:08.475 Downloading M3u8 Content
14:03:09.776 Start Parsing MPD Content...
14:03:14.374 Checking Whether The Last Fragment Is Valid...(Video)
14:03:15.473 Checking Whether The Last Fragment Is Valid...(Audio)
14:03:17.842 Parsing M3u8 Content
14:03:17.901 Writing Json: [meta.json]
14:03:18.452 Master List Found
14:03:19.418 Writing Master List Json: [playLists.json]
14:03:19.419 Auto Selected Best Definition
14:03:19.420 Start Re-Parsing...
14:03:19.420 Downloading M3u8 Content
14:03:19.423 Parsing M3u8 Content
14:03:19.427 Downloading M3u8 Key...
14:03:19.427 PLZ-KEEP-RAW Is Not Supported Yet, Ignore Decrypt, And Use Binary
14:03:19.443 Writing Json: [meta.json]
14:03:19.540 File Duration: 55m24s
14:03:19.542 Original Count: 554, Selected Count: 554
14:03:19.645 Has External Audio Track

Last edited by karapuz; 22nd Aug 2021 at 06:22.

Thanks a lot karapuz. I could make it work.

Last edited by piramalakia; 22nd Aug 2021 at 09:21. Reason: update

Hello, could I possibly get some help downloading two videos? When I try to download the MPD files through youtube-dl, it says "ERROR: No video formats found." I tried yt-dlp too and had the same issue. The MPDs are:

I assume the error has something to do with DRM, but I have no idea what steps to take to get around it.
I am new to this and have spent hours Googling, but I can't figure it out, so I am requesting help. Thank you.

ytdlp.exe --allow-unplayable-formats -F "https://g-aegis-naver.pstatic.net/globalv/owfs_rmc/read/global_v_2020_09_29_96/cenc/73ff1860-062b-11eb-a1df-a0369ffc92d0/stream.mpd"
ytdlp.exe --external-downloader aria2c --allow-unplayable-formats -f video_avc1 "https://g-aegis-naver.pstatic.net/globalv/owfs_rmc/read/global_v_2020_09_29_96/cenc/73ff1860-062b-11eb-a1df-a0369ffc92d0/stream.mpd"

Last edited by [ss]vegeta; 1st Sep 2021 at 04:43.

The widevine guesser does not work with AES encryption... you need another workaround, explained on other pages of this forum.

Last edited by codehound; 2nd Sep 2021 at 06:23.
Last edited by codehound; 2nd Sep 2021 at 18:28.
Could we get something like the worldbuilding hard-science tag?

On the Worldbuilding Stack Exchange site there is a tag called hard-science. When this tag is used, it generates the following text:

This question asks for hard science. All answers to this question should be backed up by equations, empirical evidence, scientific papers, other citations, etc. Answers that do not satisfy this requirement might be removed. See the tag description for more information.

Here is an example of this. It looks like this.

The use case on this site would be questions like this about being bitten by a snake, or this one about removing a fishhook, or this one about acclimatization. In all of those cases, there is good information about what to do from reputable sources. I think that requiring sources would greatly improve the quality of the answers. Personal experience is great, but it is all but impossible to verify, and I think that the site would be much better off by requiring sources for certain questions.

Edit: There is another possible option, seen here on the LaTeX Stack Exchange site. It is a long list of prepared responses to common scenarios, with links to meta posts already set up. For instance, if someone was not actually answering the question, we could have a prepared text that linked to this meta post.

Health.SE has dealt with this frequently. Search "references" and "citations" on their meta to read the discussions. They have to be very strict, so much of what they do would be overkill here. They don't use a tag, but they do use mod messages, like "Some of the information contained in this post requires additional references. Please edit to add citations to reliable sources that support the assertions made here. Unsourced material may be disputed or deleted." Source.

I'd agree with Sue. Mod messages are easy to set up and use. We don't have a need for a tag to force this.
@RoryAlsop There was also something I saw on the LaTeX site, that I can't find at the moment, where there was a list of prepared comments for a number of scenarios, like homework questions, link-only answers, etc. They were set up and ready to go for anyone to copy and paste, and would include relevant links to meta posts. Maybe something like that would work.

Why isn't that considered to be meta-tagging?

@OddDeer on Worldbuilding it makes sense, since there is a big difference between questions about "real/hard science" and "science fiction". Here it would be a meta tag telling you little about the question.

@StrongBad This was my old proposal, somewhat along the lines of what your recent one is about.

I think this would be a wonderful idea. In my opinion the tag should be called science-based. This is because we're looking for a scientific answer to the question, and hard vs. soft science is fairly meaningless to what we're trying to achieve. I feel like the tag guidance should read something along the lines of:

This question asks for science-based answers. All answers to this question should be backed up by equations, empirical evidence, scientific papers, other citations, etc. Answers that do not satisfy this requirement might be removed. See the tag description for more information.

The body of the tag description should be an adapted version of the hard-science tag description.

The reasons I like the tag idea better than just mod messages:

They are incredibly more discoverable. I personally never knew that I could/should flag my own post to ask a mod to put a mod message limiting the types of answers I would accept. Now that I know this it makes sense, but conceptually I view flags as being about problems.

They don't rely on mod intervention. My understanding of the SE moderation model is that we should empower the commoners to handle things without the intervention of mods/high-rep users as much as possible.
It makes searching for science-based answers to a given problem much easier. Mod messages don't help with searching.

It makes it clear that the person is asking "why does XYZ ..." instead of our more standard "what ... for XYZ."

Tags are meant to describe the subject of the question, not to express other things like how hard the question is or (in this case) the conditions about how to properly answer the post. See The Death of Meta Tags. If the author is looking for a particular type of information, they should express those requirements in the body of the post. But it would be community-unfriendly to disallow an answer out of hand because an unsuspecting user did not read the description of every tag in the post. Tags do not change the context of a question. This is inconsistent with the feature set and how folks have come to expect Stack Exchange to work.

I think you misunderstood; I added a screenshot to the question just in case. I agree that expecting people to read the descriptions of tags before answering isn't fair, but I think the message bar just below the question would be. If it would work to put the requirements in the body, then I think it would work to have that message posted automatically with a certain tag.

For the record, I had kind of given up on getting this implemented, but I don't think that I am as far out in left field as this answer seems to imply.
Syntax of `offset` option of `e2fsck`

In this answer the user @psusi shows a use of e2fsck that I could not find in the documentation: e2fsck /dev/sdc1?offset=2048. I am trying to use it to check a device (manually failed and removed) from a raid array. Long story short: I know that starting from offset 67108864 of /dev/sdb2 there is an ext4 filesystem, which was not properly unmounted. I have tried the following (always reverting /dev/sdb2 to the same content between different tests).

1. Running mount /dev/sdb2 -o loop,offset=67108864 /tmp/mnt works fine and I can see using dmesg | tail that it deleted 5 orphaned inodes, fully recovered the filesystem, and mounted it:

EXT4-fs: 5 orphan inodes deleted
EXT4-fs: recovery complete
EXT4-fs: mounted filesystem with ordered data mode. Opts: (null)

2. Similarly, losetup /dev/loop0 --offset 67108864 /dev/sdb2 && e2fsck -n /dev/loop0:

root: recovering journal
root: Clearing orphaned inode 1179687 (uid=1000, gid=1000, mode=0100600, size=16384)
root: Clearing orphaned inode 1179686 (uid=1000, gid=1000, mode=0100600, size=16384)
root: Clearing orphaned inode 1179685 (uid=1000, gid=1000, mode=0100600, size=32768)
root: Clearing orphaned inode 1179684 (uid=1000, gid=1000, mode=0100600, size=32768)
root: Clearing orphaned inode 1179683 (uid=1000, gid=1000, mode=0100600, size=65536)
root: clean, 225936/4882432 files, 2664100/19514624 blocks

3. I thought that e2fsck -p /dev/sdb2?offset=67108864 would be equivalent to the approach above (using losetup), but instead I get:

root: recovering journal
e2fsck: Bad magic number in super-block while trying to re-open root
root: ********** WARNING: Filesystem still has errors **********

I am sure that e2fsck finds the partition, because if I put the wrong value in offset (e.g., 0) I get an error that the filesystem could not be found. Also, if I use e2fsck -p /dev/sdb2?offset=67108864 after either of approaches 1 or 2, it prints root: clean, ....
I wonder if someone could point me to the documentation for the offset option of e2fsck or help me understand what it does exactly and how this differs from mounting the loopback device with a given offset. Thanks.

EDIT: Additional information. I can reproduce this behaviour as follows:

dd if=/dev/zero of=/tmp/disk bs=1M count=100
mkfs -t ext4 -E offset=70000000 /tmp/disk
sudo mount -o loop,offset=70000000 /tmp/disk /mnt/
ps > /mnt/test
cp /tmp/disk /tmp/disk2
cp /tmp/disk2 /tmp/disk2.copy
sudo umount /mnt
e2fsck -p /tmp/disk2?offset=70000000
# /tmp/disk2: recovering journal
# e2fsck: Bad magic number in super-block while trying to re-open /tmp/disk2
# /tmp/disk2: ********** WARNING: Filesystem still has errors **********
sudo mount -o loop,offset=70000000 /tmp/disk2.copy /mnt/
dmesg | tail
# [240760.866274] EXT4-fs (loop3): mounted filesystem with ordered data mode. Opts: (null)
# [240770.516865] EXT4-fs (loop3): recovery complete
# [240770.516869] EXT4-fs (loop3): mounted filesystem with ordered data mode. Opts: (null)
sudo umount /mnt
e2fsck -n /tmp/disk2?offset=70000000
# e2fsck 1.42.13 (17-May-2015)
# Warning: skipping journal recovery because doing a read-only filesystem check.
# /tmp/disk2: clean, 11/25688 files, 8896/102400 blocks
e2fsck -n /tmp/disk2.copy?offset=70000000
# e2fsck 1.42.13 (17-May-2015)
# /tmp/disk2.copy: clean, 11/25688 files, 8896/102400 blocks

As we can see, mounting the file recovers the journal (and the file system is then reported as clean by e2fsck), while e2fsck -p throws an error and does not recover the journal. If this can be useful, here is the difference between the two disk images.
STACK_EXCHANGE
#include "QuickGUICharacter.h"
#include "QuickGUIText.h"

namespace QuickGUI
{
	Character::Character(Ogre::UTFString::code_point cp, Ogre::FontPtr fp, ColourValue cv) :
		codePoint(cp),
		fontPtr(fp),
		colorValue(cv.r,cv.g,cv.b,cv.a),
		mHighlighted(false)
	{
		texturePtr = Text::getFontTexture(fp);

		if(Text::isNewLine(cp))
		{
			mWhiteSpace = true;
			dimensions.size.height = Text::getGlyphHeight(fp,'0');
			dimensions.size.width = 0;
		}
		else if(Text::isSpace(cp))
		{
			mWhiteSpace = true;
			dimensions.size = Text::getGlyphSize(fp,'0');
		}
		else if(Text::isTab(cp))
		{
			mWhiteSpace = true;
			dimensions.size = Text::getGlyphSize(fp,'0');
			dimensions.size.width *= 4;
		}
		else
		{
			mWhiteSpace = false;

			// Get the glyph's UV Coords
			UVRect uvCoords = Text::getGlyphUVCoords(fp,cp);
			this->uvCoords = uvCoords;

			// Use UV Coords to determine character dimensions
			dimensions.size = Size(
				((uvCoords.right - uvCoords.left) * texturePtr->getWidth()),
				((uvCoords.bottom - uvCoords.top) * texturePtr->getHeight()));
		}

		dimensions.size.roundUp();
	}

	bool Character::getHighlighted()
	{
		return mHighlighted;
	}

	ColourValue Character::getHighlightColor()
	{
		return colorValue;
	}

	ColourValue Character::getHighlightedTextColor()
	{
		return ColourValue(1.0 - colorValue.r, 1.0 - colorValue.g, 1.0 - colorValue.b, 1.0);
	}

	bool Character::isWhiteSpace()
	{
		return mWhiteSpace;
	}

	void Character::setFont(Ogre::FontPtr fp)
	{
		fontPtr = fp;
		texturePtr = Text::getFontTexture(fp);

		if(Text::isSpace(codePoint))
		{
			dimensions.size = Text::getGlyphSize(fp,'0');
		}
		else if(Text::isTab(codePoint))
		{
			dimensions.size = Text::getGlyphSize(fp,'0');
			dimensions.size.width *= 4;
		}
		else
		{
			// Get the glyph's UV Coords
			UVRect uvCoords = Text::getGlyphUVCoords(fp,codePoint);
			this->uvCoords = uvCoords;

			// Use UV Coords to determine character dimensions
			dimensions.size = Size(
				((uvCoords.right - uvCoords.left) * texturePtr->getWidth()),
				((uvCoords.bottom - uvCoords.top) * texturePtr->getHeight()));
		}
	}

	void Character::setHighlighted(bool highlighted)
	{
		mHighlighted = highlighted;
	}
}
STACK_EDU
Napatech Link-Capture™ Software offers two different port configurations, 4×10G and 1×40G, for Intel® PAC A10 GX.

AFU image files

The firmware part of Napatech Link-Capture™ Software is packaged as FPGA image files. FPGA image files for Intel® PAC A10 GX are known as AFU (Accelerator Functional Unit) image files. AFU image files can be dynamically loaded to Intel® PAC A10 GX acceleration cards.

|Port configuration||AFU filename|

AFU image files are also known as Green BitStream files and commonly use .gbs as the filename extension. AFU image files for use with Napatech Link-Capture™ Software are encrypted and use .egbs as the filename extension. During initialization, ntservice reads /opt/napatech3/config/afu.ini and /opt/napatech3/config/ntlicense.key and loads the specified encrypted AFU image files. Your license key is valid for both types of port configuration. See Installing the License Key for Napatech Link-Capture™ Software for information about how to order and install a license key file.

Selecting 4×10G or 1×40G

- Back up any previous version of the /opt/napatech3/config/afu.ini file. The installation will overwrite any previous versions.
- Run the /opt/napatech3/bin/nt_afu_ini_setup.sh configuration script to select an AFU image file for each installed Intel® PAC A10 GX:
  - If /opt/napatech3/config/afu.ini already exists, it is displayed. Type Y to confirm:
    OK: 1x40G image found /opt/napatech3/images/INTEL-A10-1x40/200-7001-12-03-00-180817-1210.egbs
    >> Configure AFU images for PCI device found in PCI-slot 0000:04:00.0 - ([Y]es/[N]o) ?
  - If /opt/napatech3/config/afu.ini does not exist, or if you confirm reconfiguration, the script displays bus IDs for the installed Intel® PAC A10 GX SmartNICs and file names for available AFU image files. For each Intel® PAC A10 GX, choose whether to select another AFU image file and which one:
    >> Configure AFU images for PCI device found in PCI-slot 0000:04:00.0 - ([Y]es/[N]o) ?
>> select 4x10G (press 1) or 1x40G (press 2)

Here is a sample of the commands and output:

$ sudo /opt/napatech3/bin/nt_afu_ini_setup.sh
No afu image configuration file (/opt/napatech3/config/afu.ini) found
Intel(R) PAC A10 PCI adapter(s) found in following PCI slots
---------------------
No | PCI slot
---------------------
0    0000:04:00.0
OK: 4x10G image found /opt/napatech3/images/INTEL-A10-4x10/200-7000-12-02-00-180815-0828.egbs
OK: 1x40G image found /opt/napatech3/images/INTEL-A10-1x40/200-7001-12-03-00-180817-1210.egbs
>> Configure AFU images for PCI device found in PCI-slot 0000:04:00.0 - ([Y]es/[N]o) ?
>> select 4x10G (press 1) or 1x40G (press 2)
OK: 4x10G selected
OK: afu configuration file, /opt/napatech3/config/afu.ini, created

Example afu.ini file

# afu.ini
#
[Afu0]
BusId = 0000:04:00.0
# 4x10G:
AfuFile0 = /opt/napatech3/images/INTEL-A10-4x10/200-7000-12-02-00-180815-0828.egbs
# 1x40G:
#AfuFile0 = /opt/napatech3/images/INTEL-A10-1x40/200-7001-12-03-00-180817-1210.egbs
#
# EOF
#
OPCFW_CODE
I wrote yesterday about Linux Mint 12, which provides some nice built-in extensions to GNOME 3. Parts of the discussion about Mint apply to generic GNOME 3, especially the Activities features. To see how Activities works, read about it here. I originally planned to write a post tonight comparing Mint 12, Ubuntu 11.10 Unity, and GNOME 3. However, in preparing I realized that I hadn’t fleshed out my GNOME 3 setup, and therefore hadn’t learned enough to discuss it. So, I played with GNOME 3 and learned some new things. GNOME 3 departed in a new direction from its predecessor, losing significant functionality in the transition. The loss doesn’t appear to be as great as that in the transition from KDE 3.5.10 to KDE 4. However, the concept of indicators has apparently died. Here’s the generic GNOME 3 desktop from Ubuntu 11.10 Oneiric (look ma, no Unity!) with my fall background being the only customization: You can’t tell from the desktop that this is the 4th of four workspaces and that several applications reside on other workspaces. I don’t look upon that void favorably. The only useful piece of information on the screen is the time. Simple and clean? Yep. Useful? Not so much. Driving the pointer to the upper left corner of the screen, or clicking on Activities, or hitting the Super key brings up the activity screen that reveals all: For additional details on how the Activities screen works, please see my Mint 12 review. One other note on this screen. You can drag icons from the Applications tab to the dock-like bar on the left to add them to your favorites. Right clicking on the icons can add or remove them from the favorites list. Simple and intuitive. Clean? Yes. Attractive? Yes. Productivity enhancing? Nope. While preparing for this post and then writing it, I found repeatedly dragging the pointer to the upper left to open the dash and then back to the workspace bar on the right to be beyond tedious. Using the Super key didn’t improve things much. 
On a large desktop screen, this scheme leaves much to be desired. Since Ubuntu provides a virtually generic GNOME 3 shell, the GNOME 3 Shell Extensions mostly work with this release. I installed a handful to customize my desktop and make it easier to use: Once installed, they can be turned on and off with the Advanced Settings app. [Quick interlude: Note that the only button/control on the window’s title bar is the close button on the upper right. That’s standard for GNOME 3.] Here’s what I have after this customization: I found the workspace extension particularly useful. Clicking on it brings up a list of active workspaces: Of course, you have to remember what app is on which workspace. An extension exists to list all open windows on the panel, but it takes up too much room. There’s only so much real estate available. I hadn’t originally intended to move the clock from the center to the right, but found it necessary because the favorites pushed the active window display over the center clock display. With this setup, I can see what workspace I’m on, go to places quickly, and access a standard GNOME menu to find apps quickly. This setup works better for me, but still leaves some holes. One hole involves indicators. The old GNOME 2 indicators don’t generally work on the GNOME 3 panel. Some program indicators do load, but GNOME 3 hides them on the lower right of the workspace. Moving the pointer to the lower right area of the screen reveals them: Note that they do not display as designed. The screen icon with the circle and line through it is supposed to be my hardware sensor display. The weather indicator next to it should be showing conditions and temperature. Mousing over them shows their individual names. All these indicators function correctly when right- or left- clicked. Even if the indicators displayed correctly, GNOME 3 hides them from view. That dramatically limits their usefulness. That’s pretty much my quick spin with GNOME 3. 
It looks great, and the dash is pretty cool. But simplicity doesn’t necessarily equal elegance. In this case, the simplicity comes at a price. The standard workspace tells the user nothing about their current computing environment. The dash tells you almost everything, but only when you call it up. With the right extensions, though, GNOME 3 can be usable, but it’s not ideal for me. Creating this post in GNOME 3 taught me that very quickly.
OPCFW_CODE
Webcenter can be used to create applications that combine web application and portal features, include social networking capabilities, and are built on SOA standards. This way, enterprises can have a new kind of application where everything can be accessed from one place. All legacy applications, data from different repositories, and content can be tied together and provided to the user based on context. Users can have their own private and public spaces where information and ideas can be stored and shared. The following pictures show the architecture of Webcenter 11g. Webcenter 11g is built over ADF (Application Development Framework) and it mainly consists of the following components.
- Webcenter Framework
- Webcenter Services
- Oracle Composer
- Webcenter Spaces
Oracle Webcenter Framework: This is a design-time extension to JDeveloper that injects portal capabilities into ADF. Features of the Webcenter framework include Content Integration, Portlet Container, Resource Catalog, Customizable Components, Portal Framework, Search Framework, Oracle JSF Portlet Bridge and Portlet Runtime. Content Integration allows dropping content onto pages from content management systems like UCM, the file system, portals etc. The Portlet Container is used to deploy and register portlets. The Resource Catalog allows the developer to search for resources that exist in different repositories. Examples of resources are layouts, ADF Faces components, portlets, task flows and documents. Customizable Components decide whether your page is personalizable or customizable at runtime. Oracle Composer is exposed to the Webcenter framework for creating customizable components. The Portal Framework enables building and deploying portlets, which are reusable portal components. The Search Framework allows users to search enterprise-wide information from one place without shifting between applications.
The Oracle JSF Portlet Bridge enables you to easily convert any JSF pages, ADF pages or task flows into portlets (reusable portal fragments) which can be integrated into the Webcenter application. The Portlet Runtime embeds portlets in pages. Webcenter Services provide out-of-the-box services to enable social networking and personal productivity. These services are integrated in Webcenter and are ready to use. Following are the services provided by Webcenter. Services for social networking: Announcements, Discussions, Blog, Instant Messaging and Presence, Wiki. Services for personal productivity: Mail, Notes, Recent Activities, RSS (Really Simple Syndication), Search, Worklist. Shared services (providing benefits of both social networking and personal productivity): Documents, Events, Links, Lists, Tags, Oracle Webcenter Analytics, Oracle Webcenter Ensemble (enables users to add portlets as UI widgets, integrate external content into the portal, and create lightweight mashups, which are pages with data, presentation or functionality from multiple sources). Webcenter Spaces: These are built using the Webcenter Framework, Webcenter Services and Oracle Composer. They allow users to create their own spaces in the application so as to manage personal or group information. Features include Group Spaces, Group Space templates, Personal Spaces, and Business Role pages. Personal Spaces: each user can have their own personal pages that they can create, change and share with others. Business Role Pages: pages specific to an enterprise role, so that all users who belong to that role can access them. Group Spaces: pages specific to a particular group, such as the people belonging to a particular project or department. Group Space Templates: for a consistent look and feel, an existing group space can be saved as a template so that other group spaces can be quickly created based on it. Powerful localization features: Webcenter Spaces can use multilingual capabilities.
Oracle Composer: Enables creation of personalizations and customizations as a separate layer, so that even if the application version changes, the personalizations and customizations are retained. Oracle Composer comes integrated with the Webcenter framework, Webcenter services and Webcenter Spaces, and has been leveraged extensively in Webcenter Spaces. Customizations can be stored either in the file system or directly in the database using MDS. Using the above components, we can create portal applications, composite applications, or a combination of both. Webcenter essentially erases the line between enterprise applications and portals.
OPCFW_CODE
Pipeline Catalog: Flow Cytometry

Flow cytometry data should be uploaded as a collection of FCS files. There are two options for how the data can be formatted for upload:
- Use the sample IDs encoded in the FCS file names (e.g. SampleA.fcs), or
- Use a samplesheet CSV to specify the sample IDs for each FCS file.

Organizing data with a sample sheet

The advantages of using a sample sheet when uploading data are (a) the file names do not have to match the sample names, and (b) additional sample metadata can be added en masse. To use a sample sheet, simply create a samplesheet.csv in the folder containing the data to be uploaded with the format: Note that the file names do not need to match any particular pattern. Any additional metadata can be added as columns to the sample sheet. For example, subject, or any other information, can be included in columns to the right of file in the example above.

Apply FlowJo Gates

After setting up a gating scheme in FlowJo, it can be helpful to apply those gates across a large collection of FCS files. Before running the analysis, you will need to upload the FlowJo gates as a Reference. This makes it easy to apply the same set of gates across multiple batches of FCS files.
- Save the FlowJo workspace as a file with the
- Upload that file to the Cirro References page

To apply the gating scheme to a batch of samples, first upload the input data in FCS format to Cirro. The analysis can be run on either (1) the complete batch of files (by default), or (2) a subset of files that you select. Summary metrics will be provided in CSV format for both:
- The absolute number of cells from each file which were assigned to each population
- The percentage of cells which were assigned to each population (relative to its parent)

- Finak, Greg et al. "CytoML for cross-platform cytometry data sharing." Cytometry. Part A: the journal of the International Society for Analytical Cytology vol. 93,12 (2018): 1189-1196.
doi:10.1002/cyto.a.23663

Automated Quality Control Analysis

To help provide an automated first-pass analysis of flow cytometry data, we have implemented a workflow which uses a set of open source tools for the unsupervised analysis of these datasets. While manual inspection of flow cytometry datasets is difficult to automate in a single solution, our hope is that this tool can be used to provide a quick look at the contents of a particular dataset.

Flow Cytometry QC Steps

The analysis steps performed by the QC workflow are as follows:
- flowClean: automated identification and removal of fluorescence anomalies in flow cytometry data (ref);
- flowAI: automatic and interactive anomaly discerning tools for flow cytometry data (ref);
- If spillover data is available, perform compensation with flowWorkspace (ref);
- If possible, perform a logicle transform with flowCore (ref);
- Automatically identify groups of cells using FlowSOM (ref).

Note: If any of the methods in steps 1-4 cannot be performed, that particular step will be skipped. Output data will be provided for each individual step as FCS files. In addition, the complete set of measurements will be output in CSV format, including the number of cells from each sample which were assigned to a given cluster.
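Returning to the sample sheet described at the top of this page: it can also be generated programmatically. The sketch below builds a minimal samplesheet.csv in memory with Python's standard csv module. Only the file column is named in the text above; the sample and subject column names (and the file names) here are illustrative assumptions, not a documented Cirro schema.

```python
import csv
import io

# Hypothetical rows: "file" is the column referenced in the text above;
# "sample" and "subject" are assumed metadata columns for illustration.
rows = [
    {"file": "batch1_tube01.fcs", "sample": "SampleA", "subject": "P001"},
    {"file": "batch1_tube02.fcs", "sample": "SampleB", "subject": "P002"},
]

# Write the CSV to an in-memory buffer (swap for open("samplesheet.csv", "w")
# to write it next to the FCS files before upload).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["file", "sample", "subject"])
writer.writeheader()
writer.writerows(rows)
samplesheet = buf.getvalue()
print(samplesheet)
```

Any extra metadata column can be added by extending `fieldnames` and the row dictionaries, matching the "columns to the right of file" convention described above.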
OPCFW_CODE
People often ask me why I bother to refurbish old computers and often garnish their comments with all types of snide remarks and put-downs. Do I care? Certainly not. I derive a lot of pleasure from breathing new life into old PCs, especially if they’ve been neglected and need some TLC. This particular machine belongs to a friend and he was about to throw it out onto the street until I intervened. The first thing I do is to strip the computer down to the metal, remove all its components, and then wash everything that’s not electrical in hot soapy water, finishing off with a good blast of the hose pipe. I then leave everything out in the hot sun to dry. In this scorching weather, that takes no more than a few minutes. I then clean the motherboard and other components with compressed air and a soft brush, remove the heatsink, renew the thermal paste and ensure that all fans are free from dust and other gunk. Rebuilding the PC is always a pleasure and with HP Compaq machines, because they are built to last with top quality materials, they are difficult to break. On the other hand, you will need the correct screwdrivers because HP always uses aluminium torque screws. I ensure every detail is cleaned and inspected and once the rebuild is finished, I hit the switch not without a little trepidation. This HP Compaq Deskpro Pentium III E was originally built in February 2000 (HP is very good at marking all the components) and came with a Maxtor 30GB hard drive, 2 x 256 DIMMs, a CD-ROM, floppy, and a hard drive caddy device with a parallel port connection which must have been an early removable hard drive mechanism. The huge motherboard speaker is connected to the motherboard and of course, the on/off switch is proprietary HP. Installing Windows XP When I first received the PC, Windows XP was running fine, but then developed errors so I went for a clean install which is usually a stroll in the park, but not on this occasion. 
To cut a long story short, the install took several hours, not least because of read/write errors from the CD, but also because of the proprietary HP BIOS which kept throwing up errors. When the OS was finally installed, I didn’t even need to install any further drivers either as everything was recognised immediately. With XP SP 2 finally installed, I ensured that all was working correctly, checked out Device Manager, and then left the install as is because I’ve now put it up for sale. Someone will buy it and of that, I have no doubt because every refurbished machine I’ve put up for sale has moved within days. Besides, I already have three retro machines that I refurbished some time ago and if I’m not careful I’ll be running out of space. It’s always fun to explore a blast from the past, not only to remind us of how far we’ve advanced in computing but also to give an old machine a new lease of life.
OPCFW_CODE
# Trie for cheapest-route lookup by longest matching phone-number prefix
class TrieNode(object):
    def __init__(self):
        """Initialize the node with a list of 10 children (digits 0-9) and a stored price."""
        self.digit = 0
        # 10 children per node because there are 10 possible digits
        self.children = [None] * 10
        # default price is 0, meaning "no price stored at this node"
        self.price = 0
        # indicates that a complete route ends at this node
        self.end_path = False

    def __repr__(self):
        """Return a string representation of this trie node."""
        return 'TrieNode({!r})'.format(self.children)


class TrieTree(object):
    def __init__(self, routes=None):
        """Initialize the trie with all (route, price) pairs."""
        self.root = TrieNode()
        self.size = 0
        if routes is not None:
            for route, price in routes:
                self.add(route, price)

    def __repr__(self):
        """Return a string representation of the trie."""
        return 'size: {}'.format(self.size)

    def add(self, route_number, price):
        """Add a route one digit per node, storing the price at the final node."""
        node = self.root
        self.size += 1
        for digit in route_number:
            if node.children[int(digit)] is None:
                node.children[int(digit)] = TrieNode()
            # move down to the child node for this digit
            node = node.children[int(digit)]
            node.digit = int(digit)
        # keep the cheapest price seen for this exact route
        if not node.end_path or price < node.price:
            node.price = price
        node.end_path = True

    def search(self, phone_number):
        """Return the price of the longest stored route that prefixes the phone number."""
        node = self.root
        price = 0
        for digit in phone_number:
            if node.children[int(digit)] is None:
                # first unmatched digit; stop descending
                break
            node = node.children[int(digit)]
            if node.end_path:
                # remember the price of the longest complete route seen so far
                price = node.price
        return price


if __name__ == "__main__":
    route_1, price_1 = '1415', '0.01'
    route_2, price_2 = '1415234', '0.02'
    phone_number = '14152346370'  # longest matching route is '1415234'

    trie = TrieTree([[route_1, price_1], [route_2, price_2]])
    print(trie.size)
    print(trie.search(phone_number))
STACK_EDU
Word 2002: Form Checkboxes Disappear You can use a table to lay out the form and then enter form fields in the cells where you want information entered or updated. In a new document, type some sample text for heading levels 1, 2, 3, and 4, or however many levels you used in your document. Remove the checkmarks from Allow fast saves and Allow background saves. The check boxes and lists disappear and now I even see the information bubbles of modification follow-up appearing on the right side margin. Unlike protected, fill-in forms, guided forms can also be edited and changed by the user in areas other than the *form* area. Once you have created this type of data source file, Word saves it in a file in table format. I would > like users to be able to complete the form and email (not as an > attachment) > but the checkboxes disappear when emailed. It seems Like the follow-up modifications option is always turned on "by default" or something; it always comes back in this document. Merge Finally, to perform the merge, you can choose the appropriate button from the mail merge toolbar, or from the Mail Merge Helper. Starting from Scratch In Word 97-2003 you will want to work with the Forms Toolbar. The first cell will contain "Name:", the second will contain your form field and will have a bottom border on it (unless you are keeping all of your cell borders). At first, when I tried to use it, and clicked on a check box or selected an item in the list, it actually disappeared. Please post replies and questions to the newsgroup so that others can learn from my ignorance and your wisdom. "Rubieliz" <> wrote in message news:... >I have created a simple form, Write your letter if necessary. All of the records in your data should be checkmarked.
Every time she highlights a section of text and then changes the font or margin alignment, Word changes the whole document into that new font or margin. Thanks again! "Anne Troy" wrote: Well, it sounds like the document was protected for tracking changes (follow-up revision?) instead of for Forms. ************ Anne Troy VBA Project Manager www.OfficeArticles.com "Martine" wrote The location of the normal.dot file that Word is looking for can be found by opening Word, and using Tools-Options, File locations tab. Mail merges can be used for other purposes. Files in either of these folders are automatically opened when you launch the respective program. All the check box or drop down list disappear if used. They're one of the easiest things to create. You can lock a document for forms using the padlock button on the Forms toolbar or using the Protect Document command under the Tools menu. - Turn the button off after testing. - Click on OK. - Word 97 on Windows XP Word 2000 closes right after opening after installing the SR-1 patch. - When i accepted all changes and made sure that track changes was then turned off, this fixed the problem. Start-->Run and type: regedit Hit your Enter key. You are now on Step 2 of the Mail merge wizard. Eventually you will have to convert them. Make sure there's a checkmark in Prompt to save normal template and Save autorecovery info. I made some test to see if I could find the reason and here are some things I came up with but did not solve the problem. It should be impossible to uncheck the check box beside the Menu Bar option, but you can still select the option. Inserting an odd-page section break at this point will force the next page of the document to be page 31. I have it set on standard as I was instructed and still gone.
An easy solution is to create a form in Microsoft Word 2000. There was a "W" in toolbar. Step 6 should be reinstalling the printer driver. The document is locked so we can use the forms options, but there is no password on the lock. Close the template. Step 3. Your newly installed program finds normal.dot right where it was before. Enter a Drop-Down Form Field Click Drop-Down Form Field. It has controls for insertion of form fields. You can copy and paste pieces of your document to a new document, saving the new document each time, until you get the error. Starting with Word 2013, performance of legacy form fields has degraded. Microsoft support professionals can help explain the functionality of a particular procedure, but they will not modify these examples to provide added functionality or construct procedures to meet your specific needs. The third will contain "Date:", and the fourth will contain a text form field formatted as a date, and can have a bottom border. Please help. Then, hold down the left shift key and use the left arrow key to deselect any extra paragraph markers at the bottom of your document. Another indication of the Klez virus. Besides enabling the checking/unchecking of the check boxes, it will enable tabbing from field to field while preventing any editing of the text between the fields. Once you have found the path, right click the Word folder, hit Rename and rename it to OldWord. If you haven't created text form fields as well, you may find that your "form" is useless. Shortly, Dreamboat will be adding information about the different layouts, odd/even headers and footers, and the anomalies of the *same as previous* option in headers and footers.
This behavior (of absorbing explicit formatting into the underlying style) really muddies the water for people just learning how Word handles formatting. You will not find this chapter on the Microsoft site. Choose Use an existing list at the top. from http://officeupdate.microsoft.com/2000/articles/wCreateForms.htm Troubleshooting Forms — Issues To Watch Out For Q212328 WD2000: How to Create an Online Form Using Form Fields Q212378 WD2000: How to Control the Tabbing Order in a How do I get my document back?" Start retyping (Dreamboat says this is one of the worst things a technical support person is asked.) When pasting from another document, Word will simply not see the form field as anything other than text when the field is contained inside an IF Field. That's it; that's how you stop Word from applying the explicit changes to the underlying style. We have Word 2002 SP3.
OPCFW_CODE
2D processing changes June 2009

A note on changes to 2D-processing for NCAR/NSF aircraft in ICE-L, START-08 and PACDEX projects

There is no standard for processing of 2D-data from the PMS 2D-C (25 and 10-micron resolution) and PMS 2D-P probes (200-micron resolution). RAF is currently working on upgrades to 2D-processing as we strive to produce better and more easily usable data. For the three projects (ICE-L, START-08 and PACDEX), RAF has produced spectra using the so-called 1-D probe emulation, which essentially gives the particle size based on the maximum number of shaded pixels at any time during the particle's passage through the laser beam. For very small particles the Depth-of-Field is associated with considerable uncertainty. From theory, the Depth-of-Field is well defined; however, the correction for sample volume changes very dramatically for small particle sizes. Thus if there is a small error in sizing, the concentration of a particle may change dramatically. In the original ICE-L, START-08 and PACDEX data, RAF set the depth of field to the fixed distance between the probe arms, and it was left up to individual users to correct the size distributions using the Depth-of-Field of their choice. RAF is currently working with MMM (Aaron Bansemer and Andy Heymsfield) on implementing much more sophisticated 2D-algorithms. This is a development project, both as far as algorithms are concerned and as far as implementation into RAF's standard processing. Accordingly it cannot be implemented in the short term.

Changes to RAF's 2D-processing and contents of the re-run of data: RAF has made six main changes to the processing in the ICE-L, START-08 and PACDEX data. They are:
- A netCDF attribute has been added to the concentration spectra variables C1DC called DepthOfField, which contains a vector with the depth of field per channel (similar to CellSizes).
- The 2D-particle concentration spectra have been calculated using this Depth-of-Field.
Accordingly the concentrations in the smaller bins are now higher than previously. - The 2D particle concentration spectra have had the concentrations in the first two bins excluded. The reason is the above-mentioned uncertainty arising in bins 1 and 2 from sizing errors combined with the very small Depth-of-Field in these bins. Users can still calculate particle concentrations in bins 1 and 2 by using the theoretical Depth-of-Field in conjunction with the 2D particle count spectrum; RAF considers this too uncertain. - The 2D-C total particle concentration, CONC1DC, now only includes the particles in bins 3 and larger. The concentrations will thus differ from previous values due to the changed Depth-of-Field and the omission of bins 1 and 2. - By excluding bins 1 and 2 of the 2D-C concentration spectrum, the nominal minimum size (lower bin limit) of bin 3 is 62.5 micron (for the 25-micron resolution 2D probe). Thus the total particle concentration from the 2D probe, CONC1DC, covers all particles greater than 62.5 micron (nominally). - An error was found in the algorithm for accepting particles ending in a blank slice; this has been corrected, with the result that more particles are now accepted.
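To make the Depth-of-Field sensitivity concrete, here is a minimal sketch of the concentration calculation (not RAF's actual processing code; the function and argument names, and the sample-volume formula DoF x array width x TAS x dt, are illustrative assumptions), with the first two bins excluded as described above:

```python
def concentration_spectrum(counts, depth_of_field, array_width, tas, dt,
                           exclude_first_bins=2):
    """Convert per-bin particle counts to concentrations.

    Sample volume per bin = depth-of-field * effective array width * TAS * dt
    (all in consistent units). The first `exclude_first_bins` bins are set to
    None because their very small depth-of-field makes the derived
    concentration unreliable, as explained above.
    """
    conc = []
    for i, (n, dof) in enumerate(zip(counts, depth_of_field)):
        if i < exclude_first_bins:
            conc.append(None)
        else:
            sample_volume = dof * array_width * tas * dt
            conc.append(n / sample_volume)
    return conc
```

Because the sample volume scales directly with the per-channel Depth-of-Field, a small sizing error that shifts a particle into a bin with a much smaller DoF changes the derived concentration dramatically, which is the uncertainty the note describes.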
#pragma once

#include "common.hpp"
#include "debug.hpp"
#include <memory>
#include <vector>

size_t next_power(size_t size);

class Allocator {
public:
    struct Block {
        bool free;
        size_t size;
        std::unique_ptr<uint8_t[]> data;

        Block(size_t _size)
            : free(false)
            , size(next_power(_size))
        {
            data = std::make_unique<uint8_t[]>(size);
        }

        void* get() { return static_cast<void*>(data.get()); }
    };

private:
    std::vector<Block> mBlocks;

    Block* find_block(void* block);
    Block* find_block(size_t size);
    Block* find_free_block(size_t size);

public:
    ~Allocator();
    void* allocate(size_t size);
    void mark_free(void* block);

    static Allocator& getGlobalAllocator();
};
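`next_power` is only declared in the header; its definition lives elsewhere. Assuming from the name and its use in `Block`'s constructor that it rounds a requested size up to the next power of two (an assumption, not the header's actual definition), its behavior can be sketched in Python:

```python
def next_power(size: int) -> int:
    """Round size up to the nearest power of two (minimum 1)."""
    power = 1
    while power < size:
        power *= 2
    return power
```

Rounding block sizes up to power-of-two size classes is a common allocator design choice: a freed block can then be reused for any later request that rounds to the same class, at the cost of some internal fragmentation.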
The Dockerfile is a simple text file that defines the steps to build a Docker image for a service. Every instruction in it constitutes a layer, on top of which the subsequent ones are built. You should avoid creating too many, but it's good for readability and reusability to keep different concerns in different layers. The resulting image will be the "blank" state for our container to run from, so it should already hold everything our service will need. It should not change too often but, if you keep it tidy and thin, working on it later during app development won't be a problem. I suggest creating a brand new app by executing rails new my-awesome-app to follow along with this tutorial. You could actually use any existing application, but the db should be SQLite because I'm not going to discuss connections to other databases in this post. The first thing we need to do is create a file named Dockerfile in the working directory of our application. This is what we want to end up with:

# /path-to-my-awesome-app/Dockerfile

# select the base image
FROM ruby:2.5

# install Node.js
RUN apt-get update -qq && \
    apt-get install -y build-essential nodejs

# set the working dir
WORKDIR /app

# add "rails" default user
RUN useradd -u 1000 -Um rails && \
    chown -R rails:rails /app

USER rails

Every line in it is explained in detail below. Select the base image. First, we should choose an existing image to build on top of. Since we want to dockerize a Ruby on Rails app, we can use one of the Ruby images on the Docker Hub. FROM lets us do that. I'm choosing the latest release with Ruby 2.5 on Debian. We need Node.js to run a RoR app, so in the next layer we install it, executing the same commands we would on any Debian machine. We quietly update the package sources and install it, along with the Debian package-building tools.

RUN apt-get update -qq && \
    apt-get install -y build-essential nodejs

Notice we don't need sudo, since root is the default user of most Docker images (including the Ruby ones).
Set the working directory. In the next layer we set the container's working directory, where the application's files will live. WORKDIR will also create the directory, since it doesn't exist yet. Create the rails user. As of now, if you were to build and run a container from this image, its default user would be root. This is OK in Docker containers for many types of services, but it's not convenient for a Rails one. RoR apps are usually run by unprivileged users, so that's what we should do in our container. If we don't, many routine operations, like generating migrations, will mess with file permissions on the host machine (since we are going to bind the host working directory to the one in the container for development). The same goes for other commands like bundle, which is not supposed to be run as root. Doing so can lead to problems, both inside and outside the container (it sure happened to me, anyway). I feel much more comfortable running the service as an unprivileged user by default than having to care how to interact with the container as a different user every time (you sure can do that, but I find it prone to mistakes). Using standard Debian syntax we create a rails user with the default 1000 UID (or your user's UID if different; you can find it out by typing echo $UID), belonging to a group with the same name and having a home directory.

RUN useradd -u 1000 -Um rails && \

In the same layer we then give the new user ownership of the app directory.

    chown -R rails:rails /app

Set the default user. With the USER command we make rails the default user of the image. Every remaining command in the Dockerfile will be run as this user, as will the main process of the service and the commands called from outside the container. Building the image. That's it; there should now be a Dockerfile like this in the root of the application.
To build the image for the application we just need to:

docker build path-to-my-awesome-app -t my-awesome-app

The first argument for docker build is the application's path (where the Dockerfile is) and the -t option tags the image with a custom name. Be aware you may need to sudo Docker commands, depending on your OS and user configuration. Running the container. You can now run a new container from the image and start an interactive shell inside it like this (the --rm option makes sure the container will be removed when we close it):

$ docker run --rm -it my-awesome-app bash

You'll find yourself at a prompt. Not too exciting, huh? There's almost nothing in it, just Debian and Ruby. You can try some commands in the Ruby shell to check everything is in place. Then just exit the container for now.

rails@container_id:/app$ irb
irb(main):001:0> 2+2==5
=> false
irb(main):002:0> exit
rails@container_id:/app$ exit

In the next section, we are going to explore how to run a Rails app inside the container. Next step: Running Rails Inside a Container. Previous step: Docker Development for Rails
Math 155 - Computer Graphics - Winter 2001. Programming assignment #1 - Expanded Explanation. In this assignment you will learn to use the computers and the Microsoft Visual C++ compiler, and will modify the Solar program so that it uses better animation controls and supports a "single-step" mode. Due date: Wednesday, January 17. Academic integrity guidelines: You are expected to do your own, individual work. We will not have team projects in this course. For more details, see the academic integrity guidelines. For this assignment you should do the following. Smoothing the animation: There are several problems with the animation in the Solar demo as currently written; these are mostly due to the fact that the author of the program used integers to control the speed of the animation, and further, his implementation was poorly done and causes the moon to move in jerks rather than moving smoothly. You can see these problems by slowing down the animation until it almost stops (it might help to make the window larger so as to slow down the execution speed of the animation). You are asked to fix these problems by converting the program to use floats instead of ints for the variables that control the state of the animation and the speed of the animation. For this, you do not need to understand the details of how OpenGL uses rotations, only that the variables TimeOfDay and DayOfYear are used to position the planet and the moon, and that AnimateIncrement is used to control the rate at which these variables change. Note that AnimateIncrement is equal to the number of hours between each frame displayed in the simulation. This will require various changes to the code --- the code should not get much longer, only different! The changes are however a little tricky, since you may not have coded this kind of functionality before. Your code should support the following. Some technical issues that will come up as you smooth the animation:
You can compute a floating-point remainder with the expression n - ((int)(n/m))*m, where n and m are floats (or doubles). Single-step mode: This allows the user to use the "s" key to enter single-step mode. If the animation is running, it is stopped. Then each time "s" is pressed, the simulation advances a single time step. You should add a new function Key_s to process these key strokes. Examine the existing C code to see how to make these GLUT function calls. Aliasing: The term "aliasing" refers to visual artifacts caused by the digitization of displayed images. In the Solar demo you will see several examples of aliasing. Your programming assignment does not need to address aliasing issues, except that you should observe and try to understand the aliasing that is present. Demo solution: Professor Buss's solution to this homework assignment can be found in the public P: folder in SolarSolnA. You may run this program to see the desired functionality. Turn-in procedures: You do not need to actively turn in your program. Instead, I will "ls" your directories to record the fact that you have stopped modifying your program, and we will set up times for you to demo your code and its functionality to the TA. You should not modify your code or recompile or anything after the due date. The source code must be easily readable (no bad problems with line spacing, or overly long lines) and your C++ project workspace must be already set up so that we can easily run, or modify and run, your code. The grading will be done individually in appointments with the TA.
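The float-based state update described in the assignment can be sketched as follows (in Python rather than the assignment's C/C++; the 24-hour and 365-day wrap constants and the function names are assumptions for illustration, while TimeOfDay, DayOfYear and AnimateIncrement come from the handout):

```python
HOURS_PER_DAY = 24.0   # assumed wrap point for the TimeOfDay variable
DAYS_PER_YEAR = 365.0  # assumed wrap point for the DayOfYear variable

def wrap(n: float, m: float) -> float:
    """Floating-point remainder, n - ((int)(n/m))*m as in the handout."""
    return n - int(n / m) * m

def advance(time_of_day: float, day_of_year: float, animate_increment: float):
    """Advance the simulation state by one frame of animate_increment hours."""
    time_of_day = wrap(time_of_day + animate_increment, HOURS_PER_DAY)
    day_of_year = wrap(day_of_year + animate_increment / HOURS_PER_DAY,
                       DAYS_PER_YEAR)
    return time_of_day, day_of_year
```

With floats, AnimateIncrement can be a fraction of an hour, so the positions change smoothly per frame instead of jumping by whole-integer steps.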
WrapRec is an open-source project, developed in C#, which aims to simplify the evaluation of Recommender System algorithms. WrapRec is a configuration-based tool. All design choices and parameters should be defined in a configuration file. The easiest way to use WrapRec is to download the latest release, specify the details of the experiments that you want to perform in a configuration file, and run the WrapRec executable. Get Started with a "Hello MovieLens" Example. MovieLens is one of the most widely used benchmark datasets in Recommender Systems research. Start to use WrapRec with a simple train-and-test scenario using the 100K version of this dataset. To run the example, download the Hello MovieLens example and extract it in the same folder that contains the WrapRec executable. Run the example with the following command. In Windows, you should have the .Net Framework installed, and in Linux and Mac you should have .Net Mono installed to run WrapRec. Linux and Mac: mono wraprec.exe sample.xml. The example performs four simple experiments with the MovieLens 100K dataset. The results of the experiments will be stored in a folder results in csv format. Check WrapRec Outputs to understand more about the results and outputs of WrapRec. You can start your own experiments by modifying the sample configuration file. The configuration file is rather intuitive and easy to understand. Check the Configuration section to understand the format of the configuration file in WrapRec. WrapRec is designed based on the idea that for any evaluation experiment for Recommender Systems (and generally in Machine Learning), three main components should be present: - Model: defines the algorithm that is used to train and evaluate the Recommender System. - Split: specifies the data sources and how the data should be split for training and evaluation. - Evaluation Context: defines a set of evaluators to evaluate a trained model. The overall architecture of WrapRec is summarized in the figure below.
WrapRec is not about the implementation of actual algorithms for recommender systems. Instead, it provides functionality to easily wrap existing algorithms into the framework, so that extensive evaluation experiments can be performed in a single framework. Building Blocks in WrapRec: To perform an evaluation experiment with WrapRec, three main building blocks are required. Model: defines the algorithm that is used to train and evaluate the Recommender System. Currently WrapRec is able to wrap algorithms from two Recommender System toolkits: MyMediaLite and LibFm. You can also plug your own algorithm into the framework. Check How to Extend WrapRec to learn how to extend WrapRec with your own algorithm or third-party implementations. Split: specifies the data sources and how the data should be split for training and evaluation. In WrapRec, data is loaded through DataReader components. You can define in the configuration file what the input data is, what its format is and how it should be loaded. The data is stored in a DataContainer. A Split defines how the data in a DataContainer can be split for training and evaluation. WrapRec supports several splitting methods such as static, dynamic and Cross-Validation. Evaluation Context: defines a set of evaluators to evaluate a trained model. An Evaluation Context is a component in WrapRec that consists of several Evaluators and stores the results of the evaluations that are done with them. In WrapRec all the settings and design choices are defined in a configuration file. The overall format of the configuration file is as follows. - Components in the configuration file are loaded via Reflection and the parameters are dynamic. This means that you can specify the type of a class and its properties, and WrapRec creates the objects at runtime. - Parameters can have multiple values. WrapRec detects all parameters with multiple values and runs multiple experiments, as many as the cartesian product of all parameter values.
- The three main components of an experiment (Model, Split and Evaluation Context) should be defined in the configuration file. In WrapRec, evaluation experiments are defined in an experiment object. In the configuration file an experiment is defined in an experiments element; you can define as many experiments as you want. The experiments element is the starting point for parsing the configuration file and running the experiments. You can specify which experiment(s) you want to run. For each experiment that you run with WrapRec, two csv files will be generated. The first file contains the results of the experiment (tab-separated by default) and the second file contains some statistics about the dataset and splits used for that experiment. These two files are stored in the path specified by the attribute resultsFolder. You can change the delimiter of the generated csv files using a dedicated attribute. WrapRec logs some information while the experiments are running. You can change the verbosity of experiments using the attribute verbosity; the value trace logs more details. Model: A model is defined with a model element. There are two obligatory attributes for a model element: id, which is a name or id that you define for the model, and class, which is the full type name (including namespace) of the model class. Advanced hint: If the class is defined in another assembly, the attribute class should be prefixed with the path of the assembly and a colon, that is, [path to assembly]:[full type name]. Here is an example of a model definition: the class specifies the WrapRec wrapper class and the parameters specify the properties of the model. See the detailed Documentation for the possible parameters of different WrapRec models. All parameters can have multiple values separated by commas. For each combination of parameters, WrapRec creates an instance of the model object.
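This per-combination expansion of comma-separated parameter values can be sketched as follows (a Python illustration of the idea only, not WrapRec's actual C# code; the function name expand_experiments is made up):

```python
from itertools import product

def expand_experiments(params):
    """Expand {name: 'v1,v2,...'} parameters into one dict per combination.

    Mirrors the described behavior: every parameter may carry several
    comma-separated values, and one experiment configuration is produced
    for each element of the cartesian product of all value lists.
    """
    names = list(params)
    value_lists = [str(params[n]).split(',') for n in names]
    return [dict(zip(names, combo)) for combo in product(*value_lists)]
```

For example, two values for one parameter and two for another yield four experiment configurations, matching the "one model instance per combination" hint above.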
Hint: when you use a modelId in an experiment, one model instance will be used for each combination of parameters. Split: A split is defined with a split element. With a split object you specify what the source of the data is and how the data should be split into train and test sets. Using the attribute dataContainer you specify the data source of the split. Supported methods include Cross-Validation (cv), implemented by the FeedbackSimpleSplit class. You can implement your own split logic by defining a custom class and using its type name in the parameter class. Your custom split class should extend the class WrapRec.Data.Split. Check Extending WrapRec to see how you can define your own types. numFolds can have multiple values (comma-separated); this means that one split object is created for each combination of parameters. DataContainer: In WrapRec, data is stored in a DataContainer object. The data is loaded into the DataContainer object via DataReaders. A DataContainer should specify which DataReaders are used to load data into it. DataReaders specify how data should be loaded; use WrapRec.IO.CsvReader to read csv data. If you are using this class there are a few more parameters, namely hasHeader=[true|false] to indicate whether the csv has a header or not, and delimiter=[delimiter] to specify the delimiter in the csv file. In addition you can specify the format of the csv file using the header=[comma separated fields] attribute. Evaluation Context: An Evaluation Context object can have multiple evaluator objects that evaluate a trained model. Evaluators are defined with an evaluator element, which has an obligatory attribute class that specifies the evaluator class. Depending on the evaluator class, more parameters can be used. Here is an example of an evaluation context with multiple evaluators: three evaluators are used; the first two are simple RMSE and MAE evaluators. If you want to use ranking-based evaluators, you can use the class WrapRec.Evaluation.RankingEvaluators.
Here you can specify more parameters. This evaluator measures several metrics, including the percentage of item coverage. The attribute candidateItemsMode can take one of several values. You can add your own logic to WrapRec or wrap other toolkits into it. Currently WrapRec wraps two recommender system frameworks, LibFm and MyMediaLite. If you want to contribute to WrapRec, feel free to fork the WrapRec github repository and make a pull request. You can also create your custom Experiment, Model, Split, DataReader and Evaluator in a different assembly and use them in the configuration file.
[Haskell-cafe] One-shot? (was: Global variables and stuff) ahey at iee.org Sat Nov 13 07:51:48 EST 2004 On Saturday 13 Nov 2004 10:39 am, Keean Schupke wrote: > Actually, I think I'm wrong - I think it's not even safe if you cannot > export the '<-' def. If any functions which use it are exported you are > in the same situation. I cannot say the kind of code in the example I > gave is good, can you? In fact the availability of these top level IO > actions seems to completely change the feel of the language... I've looked at your example, and the behaviour you describe is exactly what would be expected and what is intended. That's the whole point of things having identity. The reason they have identity is because they are mutable, and all users of a particular TWI are indeed mutating the same thing, just as all users of stdout are writing to the same thing. The point is that if the shared TWI is something like an IORef this is (of course) extremely dangerous, because anybody can write anything they like to it at any time. But that is not how this should be used. The module exporting one or more TWIs typically will not be exporting raw IORefs. It will be exporting a well-designed stateful API which accesses IORefs etc. via closures. It's the responsibility of the author of the exporting module to organise the code so that it delivers (on whatever promises it's making) to all clients, and clients should not rely on anything that isn't being promised. So it seems to me that the only thing that's wrong here is your expectations (i.e. that a module should assume it has exclusive access to whatever state the TWIs it imports mutate). This is not so. If it wants its own private TWI (a mutable queue, say) it should not be importing another module's queue (not that any good design should be exporting such a thing anyway); it should be importing a newQueue constructor and making its own queue (either at the top level or via normal IO monadic operations):
myQueue <- newQueue

But there's no magic here. All IO actions have potentially unknown state dependencies and mutating effects; that's why they're in the IO monad. All the top-level <- extension does is enable the user to extend the initial world state (as seen by main) with user-defined state, but it doesn't fundamentally change the nature or hazards of programming via the IO monad, for better or worse.
how to count max in SQL? Posted December 4, 2004, 6:41 am. This may be more of an SQL question, but I think it is close enough for most php users, and I have always found answers on this group. Well, I need to count how many rows have some maximum number. Here is the exact situation (stripped-down table) (pictID, jpeg name, # of votes):

pict1, name1.jpg, 4
pict2, name2.jpg, 22
pict3, name3.jpg, 5
pict7, name7.jpg, 8
pict8, name8.jpg, 22
pict9, name9.jpg, 9

So, I need a query (or set of queries) which will give me back rows 2 and 8 (I know, rows 1 and 7, but for easier description please let us forget for a moment that the first is 0). I know what to do if I have only one picture with the maximum votes:

SELECT * FROM picts ORDER BY votes DESC LIMIT 1

(maybe not the best solution, but it works). Now, I wanted to count how many rows have the MAX of votes, but I was unable to do that with my knowledge... in any combination (list), (while(list(too(much...) — only failures... In the end I wanted to do a SELECT with LIMIT so I can draw only those:

$query1 = "THAT one I don\'t KNOW";
$result1 = mysql_query ($query1) or die ("Query failed...");
list($new_limit) = mysql_num_rows($result1);
$query2 = "SELECT * FROM picts ORDER BY votes DESC LIMIT $new_limit";
$result2 = mysql_query ($query2) or die ("Query failed...");
list($pict_data1,$pict_data2,$pict_data3) = mysql_fetch_array($result2);

Can anyone help me with that 1st query? Or does anyone have an easier solution? Oh yes, I cannot tell how many pictures will have the maximum number of votes. So far (in 4 years of manual handling of my site) there were only 3 moments with 2 pictures sharing the winning position, never more. Re: how to count max in SQL?
If your version of MySQL supports subqueries (4.1+):

SELECT * FROM picts WHERE votes = (SELECT MAX(votes) AS mvotes FROM picts GROUP BY votes ORDER BY mvotes DESC LIMIT 1);

Otherwise, run the inner query on its own first:

SELECT MAX(votes) AS mvotes FROM picts GROUP BY votes ORDER BY mvotes DESC

and then select the matching rows:

SELECT * FROM picts WHERE votes='22';

22 being the mvotes value from the previous query.
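The subquery approach is easy to check end-to-end. Here is a small, self-contained demonstration using SQLite through Python with the poster's sample data (note that a plain MAX() scalar subquery is enough; the GROUP BY/ORDER BY/LIMIT wrapper is not needed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE picts (pictID TEXT, name TEXT, votes INTEGER)")
conn.executemany("INSERT INTO picts VALUES (?, ?, ?)", [
    ("pict1", "name1.jpg", 4),
    ("pict2", "name2.jpg", 22),
    ("pict3", "name3.jpg", 5),
    ("pict7", "name7.jpg", 8),
    ("pict8", "name8.jpg", 22),
    ("pict9", "name9.jpg", 9),
])

# All rows sharing the maximum vote count, in one query:
rows = conn.execute(
    "SELECT * FROM picts WHERE votes = (SELECT MAX(votes) FROM picts)"
).fetchall()
```

With this data, rows contains exactly the two tied pictures (pict2 and pict8), however many pictures happen to share the winning position.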
Add Support For Heatmap. It would be nice to have support for the Google Maps Heatmap Layer: https://developers.google.com/maps/documentation/javascript/examples/layer-heatmap. The ability to pass an array of {lat, long, weight} objects into a heatmap component would be great! Or is this out of scope for this component?

As you have access to the internal map object in the api callback, you can add any google layer. Anything other than just rendering react components is out of scope for now ;-). For similar things I create HOC components over the GoogleMap component and use the features I need.

An example of a heatmap layer implementation:

<GoogleMap
  defaultCenter={default_position}
  defaultZoom={6}
  yesIWantToUseGoogleMapApiInternals
  onGoogleApiLoaded={({map, maps}) => {
    console.log(points[0]);
    const heatmap = new maps.visualization.HeatmapLayer({
      data: points.map(point => ({
        location: new maps.LatLng(point['location'][1], point['location'][0]),
        weight: point['weight']
      }))
    });
    heatmap.setMap(map);
  }}
>

where points has the following structure:

[
  {location: [-1.131592, 52.629729], weight: 2},
  {location: [-1.141592, 52.629729], weight: 3},
  {location: [-1.161592, 53.629729], weight: 1},
  ...
]

@ruta-goomba in your example maps.visualization is undefined, could you explain how you did it? bump

@jackzampolin what I did is adding:

<GoogleMap
  bootstrapURLKeys={{
    libraries: 'visualization',
  }}
>
...

Then maps.visualization.HeatmapLayer will be available.

@whollacsek Thank you for getting me unblocked on that! Thank you much for the Heatmap hints! Got it working!

You're welcome :)
FYI for those in the future: the example above by ruta works; however, the lat and lng are reversed in the points.map function. It should be location[0], then location[1], if you stick to the LatLng naming schema. Otherwise the heat map layer location is not exactly as given. FYI for future users, don't forget the visualization lib: src="https://maps.googleapis.com/maps/api/js?key=YOUR_API_KEY&libraries=visualization". Yes, I specified 'libraries=visualization' in the URL. It displays the heat map, but the issue is that it's not at the exact location on zoom-in, and when we zoom out its view gets split.
Opens a device and returns the device handle. The device needs to be defined in the CitectSCADA database. If the device cannot be opened, and user error checking is not enabled, the current Cicode task is halted. You can use this function to return the handle of a device that is already open. The DevOpen() function does not physically open another device - it returns the same device handle as when the device was opened. The mode of the second open call is ignored. To re-open an open device in a different mode, you need to first close the device and then re-open it in the new mode. When using an ODBC driver to connect to an SQL server or database, experience has shown that connecting only once on startup and not closing the device yields the best performance. ODBC connection is slow and if used on demand may affect your system's performance. Also, some ODBC drivers may leak memory on each connection and may cause errors after a number of re-connects. Note: If you are opening a database device in indexed mode (nMode=2), an index file will automatically be created by CitectSCADA if one does not already exist. If you feel a device index has become corrupt, delete the existing index file and a new one will be created the next time the DevOpen function is run. DevOpen(Name [, nMode] ) The name of the device. The mode of the open: 0 - Open the device in shared mode - the default mode when opening a device if none is specified. 1 - Open the device in exclusive mode. In this mode only one user can have the device open. The open will return an error if another user has the device open in shared or exclusive mode. 2 - Open the device in indexed mode. In this mode the device will be accessed in index order. This mode is only valid if the device is a database device and has an index configured in the Header field at the Devices form. Please be aware that specifying mode 2 when opening an ASCII device is ignored internally. 4 - Open the device in 'SQL not select' mode. 
If opened in this mode, you should not attempt to read from an SQL device. 8 - Open the device in logging mode. In this mode the history files will be created automatically. 16 - Open the device in read-only mode. In this mode data can be viewed, but not written. This mode is supported only by DBF and ASCII files - it is ignored by printers and SQL/ODBC databases. Return value: The device handle. If the device cannot be opened, -1 is returned. The device handle identifies the table where all data on the associated device is stored. Example:

INT hRecipe, hPrinter;
STRING sRecipe;

ErrSet(1); ! enable user error checking
hRecipe = DevOpen("Recipe", 0);
IF hRecipe = -1 THEN
   DspError("Cannot open recipe");
END
hPrinter = DevOpen("Printer1", 0);
IF hPrinter = -1 THEN
   DspError("Cannot open printer");
END
ErrSet(0); ! disable user error checking
WHILE NOT DevEof(hRecipe) DO
   sRecipe = DevReadLn(hRecipe);
END
Confusing words in an economy news report: alone, on top of, absent The report also highlights the rapid growth of export financing from three Asian competitors: Korea, Japan and China. These countries provided significantly more export-credit support to their respective domestic companies and industries than did the United States in 2013. In addition, the report underscores two trends: unregulated competition is expanding and commercial banks have largely withdrawn from pockets of the export-finance arena, including providing support for small businesses. The United States faces more robust competition from export-credit agencies offering terms that are not regulated by the Organisation for Economic Co-operation and Development (OECD), which encourages global export competition based on free-market principles and mutually agreed-upon standards. For example, Ex-Im Bank support for all of its $15 billion in medium- and long-term financing was regulated by the OECD Arrangement, but other OECD member countries offered more than $60 billion alone of unregulated export financing support (on top of $83 billion in export financing governed by the OECD Arrangement). Nations that are not subject to the OECD framework, including Brazil, Russia, India and China, provided $115 billion in trade-related financing. Unregulated support totaled substantially more than all OECD-regulated support, a trend the report expects to continue and one which is poised to place U.S. exporters at a competitive disadvantage absent the tools made available by Ex-Im Bank. -- Export-Import Bank Report to Congress: Aggressive, Unregulated Financing from Foreign Competitors is Costing U.S. Jobs Source Does alone modify $60 billion? And according to the bracketed sentence, $60 billion is on top of $83 billion? I think I misunderstand it, but I'm quite confused by these inconsistent numbers. In addition, absent seems to be acting as a preposition here. If not, I think it should be absent from.
Is it a typo? If I'm reading this correctly, Ex-Im Bank is a US-based financial institution regulated by the OECD. It offers $15 billion of export financing, which is part of the $83 billion total export financing governed by the OECD Arrangement. So to answer your questions: And according to the bracketed sentence, $60 billion is on top of $83 billion? Yes, this is correct. The OECD Arrangement is governing $83 billion of export financing, and the other OECD countries besides the US are offering an additional $60 billion on top of that. Does alone modify $60 billion? I'd say no. It actually modifies the "other OECD member countries". They are offering the additional export financing independent of the US. They are offering the financing themselves--alone. In addition, absent seems to be acting as a preposition here. If not, I think it should be absent from. Is it a typo? It's not a typo. It's taking the place of a phrase like 'since they don't have', 'if they don't receive' or 'without'. But yes, it is functioning as a preposition here. Google "absent define". I'm sorry the link doesn't work, but the search will. That's OK; DT corrected it for you. I consulted OALD and Macmillan but couldn't find this particular entry. Thx! http://dictionary.cambridge.org/us/dictionary/british/absent#2-1 -- http://www.learnersdictionary.com/definition/absent -- http://dictionary.reference.com/browse/absent -- http://www.jstor.org/discover/10.2307/454886?uid=3738392&uid=2129&uid=2&uid=70&uid=4&sid=21104825350531 http://www.macmillandictionary.com/us/dictionary/american/absent_14 -- http://www.oxforddictionaries.com/us/definition/american_english/absent This is a US usage; I just checked the BrE links. That's why I got nothing. And while I was on my definition binge, I saw a note that it comes from legal jargon, so that would make it even harder to find in a BrE dictionary. The default search engines are for BrE; I don't switch them very often.
Come to think of it, are you using "the other OECD countries besides the US" to actually mean "the other OECD countries except the US"? Are they identical here? Yes, "besides" also means "except". "Beside" means "next to". At least, that's how AmE uses them.
You should find the Ubuntu partition is now larger. Of course, Ubuntu will work fine without any such worries. And do the same thing for the home partition too. Okay, got that problem solved. I'm not sure exactly how to rectify this. This is to ensure that errors—if there are any—can be immediately corrected. That's why I suggested the possibility of a base install first, just to get grub set up. In this case you need to list the subvolumes and mount them. This is a second approach to this. I first got a scary blue screen, but Windows re-booted and started an automatic repair that worked in about 2 minutes. There are very different assumptions made there than in our use case. Open a terminal window and type the command sudo fsck. Wish you all the best! However, don't do so until you're 100% sure your new cloned copy is working correctly. Copy any other non-system folder, e.g. Documents etc., from the old to the new Bootcamp drive, plus any folders that your working Windows accesses. Can you suggest how I should fix this, please? Booted perfectly from both my desktop and my laptop. Anyway, other than running into the initial message about needing to use the --force option (see ++note below), ddrescue has been a dream. Do remember to connect your new bigger hard drive to the Linux computer as a storage device before migrating your Linux. You can also use the command sudo fdisk -l. After you have confirmation that your operating system boots up normally, use the same tool as for shrinking the file system to extend the cloned partition by adding the unallocated space. I was thinking that maybe I could encrypt the whole drive, but I'm not sure whether I'd still be able to boot Ubuntu, since the EFI partition would be encrypted too. If your device has Secure Boot enabled, the firmware checks the kernel's signature. If you're unsure of which drive is which, look at the size -- each drive is displayed with its total size.
Back then, we had a single tower with a P4 and a few gigs of RAM. I almost forgot… I was setting up an external bootable drive for my Surface Book 2, with the target being Kali Linux 2018. Update the fstab entry: you have to update the fstab entry properly so that the filesystems are mounted correctly while booting. Secure Boot caveat: I have only tested these instructions with Secure Boot turned off. All went well, until your last two paragraphs where you are creating and entering the chroot system. No way I would have done the installation in this manner without the excellent procedure. Also, running sudo cfdisk only started the console and I had to use the interface to create a partition and write to it. I think the kernel already supports memory and CPU swaps without having to power down, so we should probably start bugging Intel for that; then I'll never have to shut down again. I think I need to find out why my install failed to fill those directories, and yet produced no error message. As technology changes so do the choke points in your system. When you have an external drive it is critical that you use the --removable option in the last step. Moreover, we will copy the resolv. Aside, I think the error message should be a little clearer, as it seems misleading and vague. However, my question is how exactly would I do that? The bash script automates the steps the accepted answer outlines. This was located in the second-to-last bullet point in the procedure. If you can see the partitions but cannot boot from the external drive, try disabling Fast Startup on Windows and then shut down your computer. An example below, only commands, not much explanation. If there's any need of swap in the future, you can just create a swap file. Disclaimer: This tutorial is a reader submission. Check the instructions above carefully. This can take a couple of hours, so get a snack and put in a movie. So, the first step is chrooting; here are all the commands below, running all of them as superuser.
Now we need to assign a drive letter to it. Do I simply unallocate the Linux partitions - I do want to reinstall Linux later. Have you experimented with anything related? God bless you, my friend. As mentioned in this article: specify the source disk first and then the target disk. So, here's how I partitioned the disk. Thirdly, I ran into the problem of it not booting correctly. I now visualize quite clearly the different steps I'll have to adapt and pass through. The default value is auto-detect. Any help would be appreciated! Probably because I am newish to Linux. Or is it safe to do without having to worry about wear and write rates? Don't just copy-paste the commands below; modify them according to your system and requirements.
Obviously, to use this book, you need access to a ColdFusion MX server. If your company is already developing web applications with ColdFusion, the server should already be available to you. Or, if you are developing for a remote server, you should be all set. In either case, you just need to know where to put your templates; check with your system administrator or webmaster. If you don’t have access to a ColdFusion server, your first step is to pick an edition of ColdFusion. There are currently four editions of ColdFusion available to support the needs of various sized projects and organizations; all of them are available at Macromedia’s web site, http://www.macromedia.com (as of this writing, the latest release is ColdFusion MX 6.1):

- ColdFusion MX Standard (Windows and Linux only) Formerly called ColdFusion MX Professional (through ColdFusion MX 6.0), the standard edition is designed for departmental and small group use. It contains access to all CFML language features, a 125,000-document limit on Verity searches, and database drivers for MS Access (Windows only), MS SQL Server, and MySQL. Email handling in ColdFusion MX 6.1 Standard has been improved as well. The underlying engine is now capable of generating approximately 33,000 emails per hour, an improvement over previous versions.
- ColdFusion MX Enterprise (Windows, Linux, Solaris, HP-UX, and AIX) Contains all the functionality of ColdFusion MX Standard and adds server clustering, additional Type IV JDBC database drivers for most popular databases, a 250,000-document limit on Verity searches, the ability to host JSP pages, servlets, EJBs, and import JSP tag libraries, as well as additional security, management, deployment, and performance features for hosting large-scale applications. Email handling in ColdFusion MX 6.1 Enterprise has been greatly improved.
The engine is capable of generating approximately 1 million messages per hour, and contains additional features such as multi-threading, connection pooling, and the ability to use backup mail servers. ColdFusion MX Enterprise can be installed in one of two configurations:

- Server Configuration Installs a single instance of ColdFusion MX Enterprise (or Developer) with an embedded J2EE server. This is equivalent to a “standard” or “standalone” installation of ColdFusion from previous versions.
- J2EE Configuration Installs one or more instances of ColdFusion MX Enterprise (or Developer) on top of an included licensed copy of Macromedia JRun, or on a third-party J2EE application server such as IBM WebSphere, BEA Weblogic, or Sun One. This allows you to write and deploy ColdFusion MX applications that leverage the underlying architecture of popular J2EE application servers. For a complete list of supported J2EE application servers, see Macromedia’s web site.
- ColdFusion Developer Edition (Windows, Linux, Solaris, HP-UX, and AIX for Server or J2EE configuration; Mac OS X for J2EE configuration only) This is a development-only version of ColdFusion MX Enterprise (Server or J2EE configuration) that limits access to the IP address of the development machine and one additional IP address per session. Additionally, it sets the document limit for Verity searches to 10,000 documents. ColdFusion MX Developer Edition allows you to build and test applications without having to purchase a full ColdFusion MX Enterprise license. The trial version of ColdFusion Enterprise automatically becomes the developer version once the 30-day trial period expires.

Hardware requirements for running ColdFusion vary depending on your platform and the edition of ColdFusion you want to run. You should make sure the machine on which you plan to run the ColdFusion Application Server can meet the demands you might place on it.
ColdFusion generally requires a system with 250 to 400 MB of hard disk space and between 128 and 512 MB of RAM, depending on the platform and whether the server is for development or production. Memory requirements are only a guideline. In general, the more physical RAM available to ColdFusion, the better it will perform, because many tasks performed by web applications are memory-intensive, such as intensive database queries, Verity indexing/searching, caching, and integration with other third-party resources. For the most up-to-date system requirements, please refer to the documentation that came with your edition of ColdFusion, or visit http://www.macromedia.com/software/coldfusion/productinfo/system_reqs/. If you work in an organization with an IT department, you should be able to get them to install and configure ColdFusion. Otherwise, you’ll have to perform these tasks yourself. Because ColdFusion is available for multiple platforms, installation procedures vary. For specific instructions on installing and configuring the ColdFusion Application Server, see the documentation provided with your edition of ColdFusion, or visit the Macromedia ColdFusion Support Center at http://www.macromedia.com/support/coldfusion/installation.html. Once you have a working ColdFusion installation, you’re ready to start programming. In the next chapter, we’ll dive in and learn about ColdFusion basics. For this material to make sense, though, you need to have some basic experience with web page creation and, in particular, HTML. If you don’t have any experience with HTML, you should spend some time learning basic HTML before you try to learn ColdFusion. For this, I recommend HTML & XHTML: The Definitive Guide, by Chuck Musciano and Bill Kennedy (O’Reilly & Associates). If you are planning to use ColdFusion to interact with a database, you may also find it helpful to have a general understanding of relational databases and SQL (Structured Query Language). 
For more information on SQL, see SQL in a Nutshell, by Kevin Kline with Daniel Kline, Ph.D. (O’Reilly). In ColdFusion MX 6.0, an additional version of ColdFusion MX known as ColdFusion MX for J2EE was available. In ColdFusion MX 6.1, this option has been rolled into a single product edition known as ColdFusion MX Enterprise.
I bought a TP-Link TL-MR3420 to replace my aging D-Link DIR-300 router, which was beginning to play up. I’d been looking at the Unifi wireless modems, which seem very good (from a first-hand account I received) but a bit more expensive than this model, which has the added bonus of allowing you to use it to beam the internet signal from an EVDO USB modem around the house. As we have one of these sticks (we use it as back-up in case we lose our normal connection, and also to access the internet if any of us goes away somewhere) this seemed to fit our needs perfectly… and so it proved. Setting the thing up was pretty straightforward, but I’ll skip the standard WAN set up and focus on the 3G/4G setup, as the only difference between this and a normal router is that you have four options to choose from: whether you want to use it solely with an EVDO stick or a cable connection, or both, in which case you can choose which should have priority when both are plugged in (and both have an internet signal coming in). The EVDO stick just plugs in the side, and it connected to the internet automatically as soon as I plugged it into my new router. Here is the screen showing the four basic set up options: Here is the basic 3G/4G screen showing my settings which worked with my local provider (MagtiFix). This is the advanced settings screen. For my local provider in Georgia (MagtiFix) I just left the dial number as the default setting (*99#) and left the username and password blank, like so: That’s all there was to it. I tested it using both connections and it picked up the two different connections perfectly, though I must admit that I only plugged them in one at a time as I don’t intend to actually leave both plugged in permanently. I haven’t used this for very long but the TP-Link gear I’ve used before has been pretty good.
I also have a TP-Link Range Extender to boost the signal to the furthest reaches of our thick-walled home and so far so good - signal strength is excellent and we’ve had no real issues connecting three devices to the internet at any one time. The cable (WAN) connection speed is quite expectedly much better than the measly EVDO connection speed, but it’s great to be able to share the EVDO connection rather than fight over who gets the stick when the cable connection goes down - though we’ll still do that when there’s a power cut, but a decent UPS is next on the shopping list! It’s also great that this new router allows us to also connect to the internet via the EVDO stick using a mobile device - something which was impossible before now. It’ll also be useful to be able to take this router with us if we go to a guest house somewhere, and be able to connect to the internet from a few devices instead of a single laptop.
L. Ya. Rosenblum and A.V. Yakovlev. Signal graphs: from self-timed to timed ones, Proc. of the Int. Workshop on Timed Petri Nets, Torino, Italy, July 1985, IEEE Computer Society Press, NY, 1985, pp. 199-207. A paper establishing an interesting relationship between the interleaving and true causality semantics using algebraic lattices. It also identifies a connection between the classes of lattices and the property of generalisability of concurrency relations (from arity N to arity N+1), i.e. the conditions for answering questions such as: if three actions A, B and C are all pairwise concurrent, i.e. ||(A,B), ||(A,C), and ||(B,C), are they concurrent “in three”, i.e. ||(A,B,C)? L. Rosenblum, A. Yakovlev, and V. Yakovlev. A look at concurrency semantics through “lattice glasses”. In Bulletin of the EATCS (European Association for Theoretical Computer Science), volume 37, pages 175-180, 1989. A paper about the so-called symbolic STGs, in which signals can have multiple values (which is often convenient for specifying control at a more abstract level than dealing with binary signals); hence, in order to implement them in logic gates one needs to solve the problem of binary expansion or encoding, as well as resolve all the state coding issues on the way to synthesis of a circuit implementation. A paper about analysing concurrency semantics using a relation-based approach. Similar techniques are now being developed in the domain of business process modelling and work-flow analysis: L.Ya. Rosenblum and A.V. Yakovlev. Analysing semantics of concurrent hardware specifications. Proc. Int. Conf. on Parallel Processing (ICPP89), Pennstate University Press, University Park, PA, July 1989, pp. 211-218, Vol.3. Modelling of Parallel Processes. Petri Nets [Text]: a course for system architects, programmers, system analysts, and designers of complex control systems / Marakhovsky V. B., Rozenblum L. Ya., Yakovlev A. V.
– St. Petersburg: Professionalnaya Literatura, 2014. – 398 pp.: ill., tables; 24 cm. – (Series “Izbrannoe Computer Science”). ISBN 978-5-9905552-0-4 (Series “Izbrannoe Computer Science”)
USB HDD stops working with task being blocked I started using a Raspberry Pi with an up-to-date Arch Linux image. The configuration is a simple Raspberry Pi Model B, bought about 2 weeks ago (so the USB limitation is fixed), an 8GB SD card for the main system, an externally powered USB hub, and a 1TB Toshiba e.store basic USB HDD. The system is fully installed and works. The only problem is my HDD. From time to time the HDD stops working all of a sudden. At the beginning I thought it might be a faulty file system, so I reformatted it to ext3 (GUID Partition Table). Then I thought it was a problem with setting the HDD to sleep mode, because hdparm was giving me this weird error. SG_IO: bad/missing sense data, sb[]: f0 00 01 00 50 40 ff 0a 00 00 00 00 00 1d 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 So I wrote a cronjob which uses 'touch' to perform some action on the hard drive every minute, but the behavior still occurred. From time to time the HDD just stopped working, the power LED went black, and when I tried to do something on the HDD my SSH connection just hung and no interrupt signal worked. This is what dmesg says about my error: [35282.602948] INFO: task scsi_eh_0:52 blocked for more than 120 seconds. [35282.626554] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[35282.652268] scsi_eh_0 D c055dd0c 0 52 2 0x00000000 [35282.652368] [<c055dd0c>] (__schedule+0x2ec/0x638) from [<c055caa4>] (schedule_timeout+0x16c/0x248) [35282.652424] [<c055caa4>] (schedule_timeout+0x16c/0x248) from [<c055e254>] (wait_for_common+0x108/0x190) [35282.652472] [<c055e254>] (wait_for_common+0x108/0x190) from [<c03fe810>] (command_abort+0xa4/0xec) [35282.652538] [<c03fe810>] (command_abort+0xa4/0xec) from [<c03af1a4>] (scsi_error_handler+0x378/0x484) [35282.652576] [<c03af1a4>] (scsi_error_handler+0x378/0x484) from [<c00422c0>] (kthread+0x84/0x90) [35282.652616] [<c00422c0>] (kthread+0x84/0x90) from [<c000eac0>] (kernel_thread_exit+0x0/0x8) Any ideas why this happens all the time? Any help will be appreciated. Like I mentioned, the USB HDD is connected through the USB hub, which is powered externally. The cable is definitely not bad, as it's a brand new HDD and the HDD works just fine on my Mac ... After a while of investigation I found several similar reports connected with an old bug in the 3.6 Linux kernel, which is the default kernel in the Arch/Raspberry Pi installation guide. I finally managed to update my Pi to the latest kernel builds using pacman -Sy linux-raspberrypi-latest linux-headers-raspberrypi-latest which installs the newest kernel builds for you. (Currently something around 3.9.x.) No more problems since then ;)
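For anyone hitting the same hang, the fix above boils down to a couple of commands. This is only a sketch: it assumes an Arch Linux ARM install where the linux-raspberrypi-latest packages are still published under those names.

```shell
# Pull in the latest Raspberry Pi kernel and matching headers
sudo pacman -Sy linux-raspberrypi-latest linux-headers-raspberrypi-latest

# Reboot so the new kernel is actually running
sudo reboot

# After the reboot, check the running kernel version (expect 3.9.x or newer)
uname -r
```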
Entity Component to no longer generate automatic groups

Breaking Change: Not too many 😎 The following groups are no longer automatically created and maintained:
group.all_automations
group.calendar
group.all_covers
group.all_devices
group.all_fans
group.all_lights
group.all_locks
group.all_plants
group.remember_the_milk_accounts (???)
group.all_remotes
group.all_scripts
group.all_switches
group.all_vacuum_cleaners

Description: Disable the automatic creation of "all groups". They were not visible in the UI, and for anyone having things spread out over multiple rooms they were holding too many entities, making them useless. Before merging this, I do want to add an all_person group to track the person integration.
Related issue (if applicable): fixes https://github.com/home-assistant/architecture/issues/177
Checklist:
[x] The code change is tested and works locally.
[x] Local tests pass with tox. Your PR cannot be merged unless tests pass
[x] There is no commented out code in this PR.
[x] I have followed the development checklist
If the code does not interact with devices:
[x] Tests have been added to verify that the new code works.

Hi @balloob, is this still open for comments? Please let me ask if one could set a configuration option for this. I use the groups all_automations and all_scripts on a most regular basis, and the all_lights group is used in many an automation. If these groups were to disappear, and we'd only be able to create them manually, that would add some serious maintenance bother. Having a config option like create_all_groups: True #defaulting to False would be very nice to have at hand, if all_groups are going to be deprecated. Thanks for considering. The breaking change part needs to be more end-user friendly. @Mariusthvdb please check the arch issue home-assistant/architecture#177; it explains how to use entity_id: all to replace group.all_lights. However, I don't know if we have similar usages for scripts and automation. If not, I think we can easily implement one.
The breaking change part needs to be more end-user friendly. @Mariusthvdb please check the arch issue home-assistant/architecture#177; it explains how to use entity_id: all to replace group.all_lights. However, I don't know if we have similar usages for scripts and automation. If not, I think we can easily implement one. I think you refer to: Use case: control all lights at once. This can be done by specifying entity_id: all to light.turn_on. Am I correct in understanding that this would not generate the card with all the entities, and would only be a valid substitute in services? My point and request was to allow for the generation of the all_groups, maybe with a config setting. Do you want me to add the request to the architecture thread also? tbh, I can't see the need for dropping it really; why is HA better without these groups? It is very convenient to have a tab with all_groups and see what is going on.... Plant component: looks good to me. Yikes! `group.all_devices` is a staple of my conditions for presence detection (if anyone is home or away)... why would this be removed? I'm a bit oldschool so maybe I'm missing some new config for this, but what is the workaround? Here is an example from my config:
- condition: state
  entity_id: group.all_devices
  state: 'home'
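For the "control all lights at once" use case, the replacement suggested in the linked architecture issue is to pass entity_id: all directly to the service call instead of targeting the removed group. A minimal sketch (the alias and trigger here are made-up for illustration, not from the PR):

```yaml
automation:
  - alias: "Example: all lights off at midnight"
    trigger:
      - platform: time
        at: "00:00:00"
    action:
      # 'entity_id: all' targets every light entity,
      # replacing the old group.all_lights target
      - service: light.turn_off
        entity_id: all
```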
bc behaves differently on Solaris and Linux I have the following problem that I have not resolved for a long time now. We have a Linux (x86_64 GNU/Linux) server and a Solaris (SunOS 5.10 i86pc Solaris) server where I work. On the Linux server, the command bc -l gives me a calculator where I can easily work with the numerals and commands, along with using the left and right arrows to navigate. Using the up/down arrows gives me the history of my commands. It's another story on Solaris though. The arrow keys do not work at all. I cannot edit the line, nor can I get the history. Can someone here please help me set up the proper configuration of bc on the Solaris OS? You'll probably need bc compiled with readline support: https://www.gnu.org/software/bc/manual/html_chapter/bc_7.html Solaris may have GNU bc available (I haven't used it for a long time, so I don't remember). I am not a root user. So the only way is to download and compile my own version of bc? Could you suggest a link to a proper source? I downloaded bc-1.06 and tried to compile it. With the option --with-realine it does not compile; with --with-weditline it compiles but gets me a segmentation fault when I run it. It's --with-editline, without a w, I think (and that needs a BSD library, unlikely to be present). Do you have any GNU utility installed on your system? If not, try compiling GNU readline as well. Yes, I just made a typo here. But even so it does not work for me. ;( I resolved the segmentation problem, but the final compiled version with --with-editline still gets me the same problem. The behavior of bc on Linux variants is heavily influenced by Bash. Try setting your SHELL environment variable to /bin/bash if you haven't already. As @muru points out, readline controls this behavior. @eyoung100 Right now I have SHELL=/usr/gnu/bin/bash. Setting it to /bin/bash does not change anything in bc behavior (good on Linux, bad on Solaris). Welcome to the difference between Unix (Posix) and GNU.
Solaris is a Unix, Linux is GNU. The bc that comes with Solaris is quite historic. To get the same bc feeling as on Linux, just install the gbc OpenCSW package. OP doesn't have root access. Can this package be installed by a user locally? I am sorry to be tiresome, but I am not so good with all these packages. When I click there I finally get the same bc source I already tried to compile. @muru, short answer: no. It's probably not historic, but Unix.
To be fair to Apple engineers who track and debug iPhone X NFC problems, it has to be one of the hardest jobs to do, because a normal syslog capture is unlikely to contain anything useful. Filing the usual Apple Bug Radar doesn’t help. Take a good look at the JR East gate errors in my iPhone X Suica Problem video: In all error cases the iPhone X screen shows the ‘all done’ check mark: iPhone X says ‘everything is OK’, the gate reader says ‘try again’. It’s a 2-way interaction. Apple engineers need both device logs and any SEP (Secure Enclave Processor) information they can get their hands on to find out what is going wrong with the iPhone X NFC. Unfortunately this deep-geek kind of information can only be captured on site by a field test engineer working with counterpart system engineers from JR East and all. Last time I checked, Apple was still advertising for one in Japan. On the bright side, Apple engineers seem to have already fixed the iPhone X NFC problem with some kind of hardware tweak to later iPhone X production. The bad news is there is no reliable way to obtain a ‘Revision B’ iPhone X. When I exchanged my ‘Day 1’ iPhone X for another Day 1 iPhone X at the Tokyo Omotesando Genius Bar, none of the Apple tech support people had ever heard of the iPhone X Suica problem (yeah, right), and the Genius guy’s NFC diagnostic check was very rudimentary: a USB card reader attached to a MacBook confirming a simple NFC signal just like a cash register, not a transit gate. In order for Apple to help customers with the iPhone X Suica problem, Apple Support needs to do the following: - Stop playing dumb: support staff should be briefed on the problem and acknowledge it with customers who need assistance. - A reliable on-site diagnostic check or some other method to quickly identify bad units, as the current tools cannot detect faulty NFC on Day 1 iPhone X units. - Maintain a good supply of clearly identified Revision B iPhone X exchange stock.
It would be great if Apple comes to their senses and does the right thing for iPhone X customers who use transit cards and experience iPhone X NFC hardware problems. Unfortunately all Apple has offered is complete silence, playing dumb with everyone who needs help. Until there is news from Apple regarding the iPhone X Suica problem there isn’t any more to write about and I’m just a voice in the wilderness: the iPhone X Suica problem has not gained any traction with the tech press in America or Japan. I guess this will be my last post on the subject for a while. I hope I am wrong but it could be a very long wait. UPDATE: I have a manufacture date benchmark to test the Revision B iPhone X theory: iPhone X units manufactured on or after production week 18 (April) 2018 appear to be free of the iPhone X Suica problem. Details here.
Frog Snatchers has made it through to next semester! I'm excited to continue working on Frog Snatchers in the upcoming semester. I think there's a lot of cleanup that needs to be done before starting full-on production, but it's all well within reach. With the new additions to the team, I think Too Tired will be even more of a power team to watch out for. These first 12-15 weeks of work definitely had a lot of ups and downs. Since our team had worked together before, we started off pretty strong. We were able to group up and brainstorm ideas, then get together and really talk about all of them. Adapting to the 8 AM class was no easy feat, and I think we struggled to convey our ideas and intents for a while. After a few weeks of hit-or-miss presentations, we continued to adapt and then started presenting information in a way that I think resonated with class members more, which ended up earning us much more valuable feedback. Around the halfway point was when I'd say we found a bit of a stride and started pushing out big updates to the game. Working with Robbie, Luke, and Lillian is a blast, and I couldn't be happier with a four-person team. There's a lot of synergy there, and I feel like I'm a real part of the team, even though I was the one who joined a few weeks late last semester. This time around, we had regularly scheduled meetings and a few Team Dinners, which helped us maintain focus and motivation on the game, as well as enriching our personal relationships. Individually, I felt that I developed a lot of really useful skills and was able to really showcase what I've been learning in my Programming minor. I'm primarily a designer (as per the title on my eventual degree), but with Robbie taking on some of the producer tasks, I filled in on programming tasks when necessary. I was in charge of Systems Design, so I'd brainstorm mechanics and critical systems, flesh them out a bit on paper, and then prototype them.
More often than not, the prototyped version I created would end up in the build the next week, with minor tweaks to make sure it fit into the game properly. This style of development leads very easily into quick iteration cycles and constant testing. In some ways, I felt that I was responsible for taking the ideas the team would come up with and giving them some sort of tangible representation in the game. While this was daunting at first, I realized it was important to just get it done as well as I can, and quickly. The faster I can show something to the team, the quicker we can get feedback and improve on it. The best examples of this are the movement and combat systems. Every week I came back with at least one improvement to these systems, and now they're some top-notch systems. They both have a lot of room to grow, but the growth in both of them within this semester is really impressive to me. I mostly attribute this to the success of constant iteration with clear intent, especially when it's based on valuable feedback.
We are looking for a simple prototype made in Unity for a casino-style game. This game will need to be optimized for mobile platforms (iOS / Android). This game will consist of a small bundle of gambling / casino games. The following will be included in the game:
- Scratch Tickets
- Slot Machines
- Video Poker
- Video Blackjack
- Daily Powerball-style game (players have the ability to purchase random tickets or create their own [ 5 numbers ]. Once a day numbers are randomly chosen and the player receives the amount displayed.)
Given that these are very simple / common games, we do not expect it to take very long to create each game.
- Games need to be created in a way so that we can easily adjust the amount of coins it takes to play them, as well as prize amounts and the experience players receive from playing them.
- The game will feature a main "lobby" where players can select which game to play.
- Players gain experience for all actions (a base amount for playing, with an added bonus if they win) and gain levels at tiered experience amounts.
- In the main lobby players have a bonus timer which upon reaching 0 (starts at 3 hours [02:59:56]) changes to "collect bonus" and provides players with extra coins to begin playing more.
- You can look at a game called "Slotomania" for a decent idea of how the lobby, level system, and coin bonus systems should work.
Upon beginning the contract I will be able to stay in constant contact with you to answer any questions you have, as well as provide drawings for how some areas should look. I may or may not have a simple design document to provide as well (not sure if it will be necessary for such a simple project). At the completion of the contract you will provide us with all assets / code so that our programmers may pick up where you leave off with the project.
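To make the tiered-experience requirement concrete, here is a rough sketch of how the XP/level logic might look. This is illustrative only: the thresholds and XP amounts are placeholder assumptions, not values from the brief, and the actual prototype would implement the equivalent in C# inside Unity.

```python
# Illustrative sketch of the tiered experience/level mechanic from the brief.
# All numbers below are placeholder assumptions, not spec values.

# Hypothetical cumulative XP thresholds: reaching each one grants a level.
LEVEL_THRESHOLDS = [0, 100, 250, 500, 1000, 2000]

BASE_PLAY_XP = 10   # base XP for playing any game (placeholder)
WIN_BONUS_XP = 15   # extra XP awarded on a win (placeholder)

def xp_for_action(won: bool) -> int:
    """XP earned for one play: a base amount plus a bonus on a win."""
    return BASE_PLAY_XP + (WIN_BONUS_XP if won else 0)

def level_for_xp(total_xp: int) -> int:
    """Return the highest level whose cumulative threshold has been reached."""
    level = 0
    for threshold in LEVEL_THRESHOLDS:
        if total_xp >= threshold:
            level += 1
        else:
            break
    return level
```

Keeping the thresholds and XP constants in plain data like this is what makes the coin costs, prizes, and experience amounts "easily adjustable", as the brief requires.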
I can now note that you will in fact be provided with a simple design document that will give you basic information on any subtle changes that may be involved in each game as well as information on general screen layout. We will also not be providing you with art. For this prototype you will be using your own temporary mock-up art, and upon finishing the project we will have our artist work on final artwork. For the daily Powerball-style game, it can be made local to the device for this prototype. In the final product it will be networked so that all players compete against each other, but for the purposes of this contract it will be very simple, to the point that you do not need any extra databases, etc. Please also be sure to note that this project needs to be done using Unity 3D, as that is the engine we are currently using for our games. The game will be 2D, but it is important that you have experience with Unity 3D (experience with 2D Toolkit is a plus).

Hello, I believe I can produce this project to a high standard within the deadline and at a reasonable price. I have an iPhone, iPad, and a Samsung Nexus S running ICS 4.0 as part of my development equipment as well…

3 freelancers are bidding on average $317 for this job

We are ready to start working on the project. Please check PMB for our portfolio. It would be a pleasure if we can get a chance to work with you. Thanks

We have a team of experts in 2D and 3D game development. We have a lot of experience in Unity 3D. We at Perfuture Technologies are a software services firm based in India. We offer our services in developing mobile applica…
from bottle import error, request, route, run, template
import json
from random import choice, randint
import requests
import socket

HOSTNAME = socket.gethostname()
# hostname = '0.0.0.0'
HOSTPORT = 8080
ENDPOINT = 'http://wordtools-api:8081'  # API layer host/port set in compose file

# global variables representing state for the HTML template
# to do: put these into a dictionary - in fact, consider using the word_packet dict
last_anag = 'listen'
last_search = 'am_s_ng'
last_random = '10'
last_wordlen = '5'
last_rndrows = '10'
last_pswd = '3'
status = ''


def process_word_packet(word_packet):
    if word_packet['status'] > 0:
        body = word_packet['message'].split()
    else:
        body = word_packet['words']
    return body


def writebody(output):
    return template('base', hostname=HOSTNAME, output=output, anagram=last_anag,
                    search=last_search, random=last_random, wordlen=last_wordlen,
                    rndrows=last_rndrows, pswd=last_pswd)


@route('/')
def root():
    return writebody(None)


@route('/anagram', method='POST')
def anagram():
    global last_anag
    anagram = request.forms.get('anagram').lower()
    uri = '/anagram/' + anagram
    status = ' URI = ' + uri
    word_packet = requests.get(ENDPOINT + uri).json()
    body = process_word_packet(word_packet)
    last_anag = anagram
    return writebody(body)


@route('/finder', method='POST')
def finder():
    global last_search
    partial = request.forms.get('partial')
    uri = '/finder/' + partial
    status = ' URI = ' + uri
    word_packet = requests.get(ENDPOINT + uri).text
    body = process_word_packet(json.loads(word_packet))
    last_search = partial
    return writebody(body)


# generate a set of memorable passwords
@route('/pswd', method='POST')
def pswd():
    global last_pswd
    numberstr = request.forms.get('num')
    word_packet = {'words': [], 'count': 0, 'status': 0}
    for i in range(int(numberstr)):
        passphrase = ''
        for j in range(2):
            # get 1 word between 3 and 7 chars
            wordlen = randint(3, 7)
            uri = '/rnd/1/' + str(wordlen)
            data_packet = requests.get(ENDPOINT + uri).json()
            word = data_packet['words'][0]
            # capitalize 50% of words
            if randint(0, 1) == 0:
                word = word.capitalize()
            # add a random non-alpha separator to the words
            passphrase += word + choice(',+-()#.@[]{}#_')
        # add a random number to the string
        passphrase += str(randint(0, 9999))
        word_packet['words'].append(passphrase)
        word_packet['count'] += 1
    body = process_word_packet(word_packet)
    last_pswd = numberstr
    return writebody(body)


# serve random words of any length
@route('/random', method='POST')
def random():
    global last_random
    numberstr = request.forms.get('num')
    uri = '/random/' + numberstr
    word_packet = requests.get(ENDPOINT + uri).json()
    body = process_word_packet(word_packet)
    last_random = numberstr
    return writebody(body)


# serve random words of fixed length
@route('/rnd', method='POST')
def rnd():
    global last_wordlen
    global last_rndrows
    wordlen = request.forms.get('wordlen')
    numberstr = request.forms.get('num')
    uri = '/rnd/' + numberstr + '/' + wordlen
    word_packet = requests.get(ENDPOINT + uri).json()
    body = process_word_packet(word_packet)
    last_wordlen = wordlen
    last_rndrows = numberstr
    return writebody(body)


# simple test of presentation container
@route('/test')
def test():
    return '<h1>Presentation layer operational</h1>'


# simple test of API container
@route('/apitest')
def apitest():
    word_packet = requests.get(ENDPOINT + '/test').json()
    return word_packet['words']


@error(404)
def mistake404(code):
    return 'Sorry mate, 404 - path not found.'


@error(405)
def mistake405(code):
    return '405 - Invalid arguments to form.'


'''
@error(500)
def mistake500(code):
    return '500 - Returning error from presentation layer.' + status
'''

run(host=HOSTNAME, port=HOSTPORT)
Moku:Go combines 14+ lab instruments in one high-performance device, with 2 analog inputs, 2 analog outputs, 16 digital I/O pins and optional integrated power supplies. This application note uses Moku:Go's Oscilloscope and its integrated waveform generator to investigate the forward bias behavior of a diode.

Diode P-N Junction

The diode is the simplest and most basic semiconductor device, consisting of a single P-N junction. Since P-N junctions are the fundamental functional feature of many semiconductors, a strong foundational knowledge of the behavior of diodes is crucial to successful learning in more advanced lab experiments on transistors and other semiconductors. I-V curve characterization is a fundamental measurement and a common lab experiment to aid in the understanding of semiconductor junctions. I-V curves are plots of current as a function of voltage. For a resistor, the I-V curve would simply be a straight line through 0 volts and 0 amps. While dedicated I-V curve instruments exist, and some implementations use Source-Measure Units (SMUs) with appropriate software, these solutions require bulky and expensive traditional stand-alone equipment. Here we show a diode I-V measurement using Moku:Go's Oscilloscope and its built-in waveform generator. The captured data is exported to Excel (or alternatively MATLAB) to enable the student to manipulate the captured data and present an I-V curve for a 1N4001 diode. Thus the experiment can be done with Moku:Go and no other instruments.

To plot an I-V curve for a 1N4001 diode we have set up the circuit in Figure 1.

Figure 1: forward bias diode circuit

R1 represents the output impedance of the waveform generator. Moku:Go's Oscilloscope channel 2 is used to measure the voltage applied across both the diode and a current limiting & sensing resistor (R2).
Oscilloscope channel 1 then measures the voltage across R2, a 1% tolerance 100 Ω resistor, allowing us to calculate the current through the diode. A waveform generator is integrated into Moku:Go's Oscilloscope instrument. It is used to generate a triangle wave with an amplitude of 3.2 V and a 1.6 V DC offset, so the diode is always forward biased and we can apply a swept voltage; the frequency is set to a low, non-critical 50 Hz. Figure 2 shows the waveform generator set up in the macOS app. Notice we only use waveform generator channel 1 (green); channel 2 (purple) is off. The Windows app is very similar.

Figure 2: Moku:Go's waveform generator configured to sweep voltage

We can now use Moku:Go's Oscilloscope to observe the voltage on channel 1 and channel 2 (refer to Figure 1 for the channel probe points). Figure 3 shows the Oscilloscope, with channel A (input 1) in red and channel B (input 2) in blue, and the behavior of the diode is evident. We have also used the oscilloscope math channel to plot the X-Y curve in orange, and this shows the generally expected I-V curve of a diode.

Figure 3: Moku:Go's Oscilloscope, X-Y channel and integrated waveform generator

Referring to the circuit in Figure 1, we see that the current in the diode is Idiode = Vch1 / 100, and that the corresponding diode voltage is Vdiode = Vch2 - Vch1. Since Moku:Go is connected to the app via USB-C or the network, we can simply export the oscilloscope data to a CSV file, then calculate and plot Idiode vs Vdiode in Excel, MATLAB or similar.

Figure 4: exporting data to CSV

Resulting I-V plot

After importing the CSV from Moku:Go into Excel, Vdiode and Idiode are calculated and plotted. The resulting I-V plot for the 1N4001 diode is shown in Figure 5, exhibiting a typical forward bias turn-on voltage, after which we see large increases in current.
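As an alternative to the Excel workflow, the same arithmetic can be scripted. Below is a minimal Python sketch applying Idiode = Vch1 / R2 and Vdiode = Vch2 - Vch1 to an exported CSV; the (time, ch1, ch2) column layout and the helper name are assumptions for illustration, not part of the Moku:Go software:

```python
import csv

def iv_points(csv_path, r_sense=100.0):
    """Convert an exported two-channel oscilloscope CSV into (Vdiode, Idiode)
    pairs using Idiode = Vch1 / R2 and Vdiode = Vch2 - Vch1."""
    points = []
    with open(csv_path, newline='') as f:
        for row in csv.reader(f):
            try:
                # assumed column layout: time, channel 1, channel 2
                _, v_ch1, v_ch2 = (float(v) for v in row[:3])
            except ValueError:
                continue  # skip header or comment lines
            points.append((v_ch2 - v_ch1, v_ch1 / r_sense))
    return points

# tiny stand-in for a real Moku:Go export
with open('iv_demo.csv', 'w') as f:
    f.write('time,ch1,ch2\n0.000,0.50,1.20\n0.001,0.25,1.00\n')

points = iv_points('iv_demo.csv')
```

Sorting the pairs by Vdiode before plotting gives the same I-V curve as the spreadsheet approach.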
Figure 5: measured I-V plot for forward biased 1N4001 diode

We have used Moku:Go and its Oscilloscope and integrated waveform generator to investigate, measure, and record the I-V behavior of a diode. This was accomplished with a simple breadboard and one Moku:Go. No other lab equipment was needed to demonstrate this common electronic engineering lab experiment.

Benefits of Moku:Go

For the educator & lab assistants
- Efficient use of lab space and time
- Ease of consistent instrument configuration
- Focus on the electronics, not the instrument setup
- Maximize lab teaching assistant time
- Individual labs, individual learning
- Simplified evaluation and grading via screenshots

For the student
- Individual labs at their own pace enhance understanding and retention
- Portable: choose the pace, place and time for lab work, be it at home, in the on-campus lab, or even collaborating remotely
- Familiar Windows or macOS laptop environment, yet with professional-grade instruments

Moku:Go demo mode

You can download the Moku:Go app for macOS and Windows. The demo mode operates without the need for any hardware and provides a great overview of using Moku:Go.

Have questions or want a printable version? Please contact us at email@example.com
# coding: utf-8
from http import client
from socket import timeout
import json

from errbot import BotPlugin, botcmd


class WhydBot(BotPlugin):
    """Basic Err integration with whyd.com"""

    @botcmd
    def whyd_last(self, message, args):
        """Display the last track of a user.
        Example: !whyd last djiit
        """
        if len(args) < 1:
            return 'I need a username to fetch his last track!'
        try:
            status, res = self.request_playlist(args)
        except timeout:
            return 'Oops, I can\'t reach whyd.com...'
        if status != 200:
            return 'Oops, something went wrong.'
        return 'Last {user} track on Whyd: {track}.'.format(
            user=args, track=self.format_track(res[0]))

    @botcmd
    def whyd_hot(self, message, args):
        """Display the top 3 tracks on Whyd.
        Example: !whyd hot
        """
        try:
            status, res = self.request_playlist('hot')
        except timeout:
            return 'Oops, I can\'t reach whyd.com...'
        if status != 200:
            return 'Oops, something went wrong.'
        return ('Current top tracks on Whyd:\n' +
                '\n'.join([self.format_track(i) for i in res['tracks'][:3]]))

    @staticmethod
    def format_track(track):
        """Format a single track."""
        return '{name} (https://whyd.com/c/{track_id})'.format(
            name=track['name'], track_id=track['_id'])

    @staticmethod
    def request_playlist(url):
        """Fetch a whyd.com playlist."""
        conn = client.HTTPSConnection('whyd.com', timeout=5)
        conn.request('GET', '/{url}?format=json'.format(url=url))
        r = conn.getresponse()
        return r.status, json.loads(r.read().decode())
Unable to import configuration on AirWave from controller

Environment: AirWave version 7.7+, monitoring/managing an Aruba controller.

The configuration of a controller cannot be imported, and it ends up showing as a mismatch on AirWave. When doing a configuration import, if we run # tail -f /var/log/httpd/error_log from the AirWave CLI, we can see a crash.

The cause is that in the controller config, WMM was enabled in the SSID profile and a DSCP mapping was configured with two integer values instead of one; in this example, "24,46". We can check on the controller by doing this:

(controller's CLI) # show wlan ssid-profile <ssid profile name>

Look in the output: if you see two values for any one of the DSCP mappings for WMM (wireless multimedia) (usually there will be 4 of them), that is the problem. For example, for this SSID profile we can see 24,46 as the DSCP mapping for the WMM voice AC.

(controller) #show wlan ssid-profile <test ssid>
SSID Profile "test ssid"
SSID enable                                       Enabled
DTIM Interval                                     2 beacon periods
802.11a Basic Rates                               6 12 24
802.11a Transmit Rates                            6 9 12 18 24
802.11g Basic Rates                               1 2
802.11g Transmit Rates                            1 2 5 6 9 11 12 18 24 36 48 54
Station Ageout Time                               1000 sec
Max Transmit Attempts                             3
RTS Threshold                                     2333 bytes
Short Preamble                                    Enabled
Max Associations                                  64
Wireless Multimedia (WMM)                         Enabled
Wireless Multimedia U-APSD (WMM-UAPSD) Powersave  Disabled
WMM TSPEC Min Inactivity Interval                 0 msec
Override DSCP mappings for WMM clients            Disabled
DSCP mapping for WMM voice AC                     24,46
DSCP mapping for WMM video AC                     N/A
DSCP mapping for WMM best-effort AC               N/A
DSCP mapping for WMM background AC                N/A
Multiple Tx Replay Counters                       Disabled
Hide SSID                                         Enabled
Deny_Broadcast Probes                             Enabled

This setting sets the priority for that module's traffic. If there are two values, it causes this issue, so change the setting to a single value. If nothing is configured for the WMM settings on the controller, the default is a single integer.
For more info on how to configure WMM settings, what the default values are, etc., please refer to the links in the related links section.

# config t
# wlan ssid-profile <ssid profile name>
# wmm-vo-dscp <enter one value>

Hit enter to save it. wmm-vo-dscp is for voice traffic; likewise, below are the commands for the other access categories.

(TR-WLC7210-01) (SSID Profile "test-VOIP") #wmm-?
wmm-be-dscp               DSCP used to map WMM best-effort traffic
wmm-bk-dscp               DSCP used to map WMM background traffic
wmm-override-dscp-map..   Override DSCP Mappings for WMM clients
wmm-ts-min-inact-int      WMM TSPEC Min Inactivity Interval (0 - 3600000 msecs)
wmm-uapsd                 Wireless Multimedia (WMM) UAPSD Powersave
wmm-vi-dscp               DSCP used to map WMM video traffic
wmm-vo-dscp               DSCP used to map WMM voice traffic

After making this change, please do an audit and re-import the config; the import should then succeed.
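As a quick sanity check before re-importing, the `show wlan ssid-profile` output can be scanned for DSCP rows carrying more than one value. This is a hypothetical helper script, not an AirWave or ArubaOS feature; it simply flags lines like the "24,46" example above:

```python
import re

def bad_wmm_dscp_rows(profile_output):
    """Return the 'DSCP mapping for WMM ... AC' lines whose value field
    contains more than one comma-separated value."""
    bad = []
    for line in profile_output.splitlines():
        m = re.match(r'\s*(DSCP mapping for WMM \S+ AC)\s+(\S+)\s*$', line)
        if m and ',' in m.group(2):
            bad.append('{}: {}'.format(m.group(1), m.group(2)))
    return bad

profile = '''
DSCP mapping for WMM voice AC              24,46
DSCP mapping for WMM video AC              N/A
'''
# flags only the voice AC row, whose two values break the AirWave import
problem_rows = bad_wmm_dscp_rows(profile)
```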
One of the potential challenges that application layering technology creates is around inter-layer application conflicts with respect to the corresponding host operating system. In an effort to address these potential challenges, Liquidware Labs developers have created a feature called Micro-Isolation. In the graphic below, a typical application conflict occurs when App 1 interacts with, for example, a shared DLL on the host operating system. If App 2 attempts to launch and it requires access to the same shared DLL, then the launch will fail. Historically, this is why Microsoft created the WinSxS architecture within Windows, in an effort to address these application conflicts. The basic premise was that a copy of shared DLLs or files would be taken and stored under the WinSxS folder structure for every unique application install, or as needed. This works, but causes the WinSxS folder to become bloated over time.

What is Micro-Isolation?

FlexApp Micro-Isolation is a technology that engages automatically to resolve inter-layer application conflicts. The applications within the FlexApp Layer are still perceived as native to the OS and other application layers. FlexApp is just redirecting the layered application's request for a file or registry key to its own layer, so two versions of the same file or registry key can coexist.

Why is this important?

Without Micro-Isolation, when creating independent layers for an application, each layer is unaware of other layers and can potentially conflict at the file and registry level, causing failures.

What does this mean for your application strategy?

Normally the only way to solve application conflicts is to combine the conflicting layers into one large layer. This creates management problems: you can't just update a single layer, and you have to deal with larger layers. This is also back to the old problem of having everything in the base image, but at the application level, which is what everyone is trying to avoid.
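The redirection idea can be pictured with a toy resolver: each layered app's request for a file is checked against its own layer first, so two versions of the same file coexist. This is only a conceptual illustration in Python, not FlexApp's actual implementation:

```python
def resolve(path, app, layer_files, base_files):
    """Toy file-request resolver: if the requesting app's layer carries its
    own copy of the file, redirect the request into that layer; otherwise
    fall through to the shared copy on the host OS."""
    layer = layer_files.get(app, {})
    if path in layer:
        return layer[path]          # the layer's private version
    return base_files.get(path)     # host OS version, shared by everyone

# the host ships v1 of a shared DLL; App 2's layer carries its own v2
base_files = {r'C:\Windows\shared.dll': 'v1'}
layer_files = {'App2': {r'C:\Windows\shared.dll': 'v2'}}

app1_sees = resolve(r'C:\Windows\shared.dll', 'App1', layer_files, base_files)
app2_sees = resolve(r'C:\Windows\shared.dll', 'App2', layer_files, base_files)
# both versions coexist: App 1 resolves v1, App 2 resolves its own v2
```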
FlexApp Layering Strategy

With any new technology there is often a mad scramble to establish best practices within the enterprise. One of the questions that has come up of late with respect to application layering is around the strategy of creating the layers, specifically how many applications should be included within the layers. Opinions vary, and one of the suggestions proposed early on has been to add many applications into a single app layer. Although on the surface this seems like a good idea, there are a number of short- and long-term challenges with this approach. The following list represents a few of the different scenarios to address these challenges:

- Multi App Layer is the process of including multiple random applications within an Application Layer. Although technically possible, this approach has challenges with respect to the management and logistics of updating the individual applications within the layer. The long-term organizational overhead of this approach could be daunting.
- App Layer Suite is when vendor or enterprise home-grown application suites are included within a FlexApp layer. With up-front analysis and testing, this approach could be extremely beneficial for enterprise environments. Simply reducing the install/configuration time of the application suite for each deployment is worth its weight in gold.
- Departmental App Layers are somewhat similar to the Multi App Layer scenario. The primary difference centers on the pre-determined cohesion of a subset of applications installed across endpoints within a single department. Significant testing and analysis often goes into the configuration of these application environments, so conceptually redirecting that cohesive application environment to a FlexApp layer is a plausible use case.
- Single App Layers, like the name implies, is the process of redirecting individual application installs into a FlexApp Layer.
The application lifecycle benefits of leveraging Single FlexApp Layers are clear. The ability to manage and update the corresponding applications within the FlexApp layers allows for a more streamlined logistical approach. The Micro-Isolation feature helps evolve the FlexApp Layering technology into a truly dynamic Application Layering platform. There is no need to stack multiple apps in layers; instead, create a single layer for each application, which allows for ultimate granularity in assigning apps to users, groups and machines. It also makes updating applications very simple, by updating a single package for a single app rather than dealing with cumbersome multi-app layers. Lastly, it makes packaging and deploying applications much more successful, without having to deal with conflicts manually!
Can Hedges' d be used to compare unlike effects?

I've got a question related to the use of Hedges' d in a meta-analysis of insect fitness information. Here is a link (pdf) to the paper I'm working with. In the appendices, the fitness information is presented. Hedges' d is used to standardize the effect of multiple mating (polyandry) on various fitness metrics across many different species. I am trying to compare the effects of polyandry in social insects with the effects of polyandry in nonsocial insects. The trouble seems to be, however, that very few studies investigate fitness in the same way in these two groups. Social insect fitness is usually measured in terms of parasite resistance or colony gender ratio, whereas nonsocial insect fitness is often measured by things like fertility, fecundity, adult body size, and other, somewhat simpler metrics. So, my question is, can I compare the Hedges' d for the effect of polyandry on nonsocial insect fecundity with the Hedges' d for the effect of polyandry on social insect parasite resistance? Would this be a meaningful comparison? Part of the data I'm working with is in the pictures below. Here is a description from the methods section of the paper describing what they're doing:

"The common effect size we calculated was Hedges' d [i.e. J-corrected Hedges' g sensu Rosenberg, Adams & Gurevitch (2000); but note that Cooper, Hedges & Valentine (2009) refer to the J-corrected effect as Hedges' g]. We preferentially calculated Hedges' d using the mean and a measure of variance (standard deviation or error) for each treatment derived from individual female values. Means and measures of variance were extracted from summary tables, the text or figures (using ImageJ v. 1.43). Where this approach was not possible, we converted test statistics (t, F or χ2) or P values from tests of the main effect of the 'number of mates' treatment to Hedges' d using the software package MetaWin v. 2.0 (Rosenberg et al., 2000).
We then calculated the variance using the number of females per treatment as the sample size. In a few cases only the total sample size was provided. If so, we set the sample size as equal across treatments."

Here is the social insect fitness information:

Here is (some of) the nonsocial insect fitness information:

By "Hedges' d", do you mean Cohen's $d$ or Hedges' $g$?

It's L.V. Hedges being referred to, so the possessive could be Hedges' or Hedges's, but certainly not Hedge's. Edited accordingly. Whether the coefficient is his is a different question.

The tables are not readable, at least by me on my machine.

@NickCox, if you right click -> view image, they get much bigger.

I just added some information in the body of the question regarding what they're actually calculating.

@gung Thanks, but I think that's browser-dependent.

I would suggest doing a moderator analysis, which would test the sociality factor. You can do almost everything you want with meta-analysis, but you need to specify it clearly in your protocol, and you can test the impact of your decision. If this makes sense to you, I will write a more detailed answer.

I am voting to close this as unclear since the OP did not come back, as requested, to clarify the question.
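For concreteness, the quantity described in the quoted methods (Cohen's d with the small-sample J correction, i.e. Hedges' g) can be computed directly from per-treatment summaries. This is a sketch using the standard Hedges & Olkin formulas; the function name and the example numbers are mine, not from the paper or MetaWin:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """J-corrected standardized mean difference between two treatments,
    plus its approximate sampling variance (Hedges & Olkin formulas)."""
    # pooled standard deviation across the two treatment groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                       # uncorrected Cohen's d
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)      # small-sample bias correction J
    g = j * d
    var_g = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))
    return g, var_g

# e.g. polyandrous vs monandrous females, 20 per treatment (made-up numbers)
g, v = hedges_g(10.0, 2.0, 20, 8.0, 2.0, 20)
```

Note the arithmetic is identical regardless of the fitness metric, so whether two such g values are comparable is exactly the conceptual question asked above (are "parasite resistance" and "fecundity" the same construct?), which is why a moderator analysis on sociality is the suggested route.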
Datastructure thoughts

Structure 1:

{
  board: {
    you: [null, null, null, null, null, null, null],
    opponent: [null, null, null, null, null, null, null]
  },
  player: {
    you: {
      name: 'Inooid',
      hero: 'Mage',
      health: 30,
      mana: 1,
      weapon: {
        portrait: 'imgurl',
        damage: 2,
        durability: 8,
        callback: fn
      },
      heropower: {
        portrait: '',
        mana: 2,
        fireOff: fn
      }
    },
    opponent: {
      name: 'Inooid',
      hero: 'Mage',
      health: 30,
      mana: 1,
      weapon: null,
      heropower: {
        portrait: '',
        mana: 2,
        fireOff: fn
      }
    }
  },
  decks: {
    you: [],
    opponent: { count: 30 }
  },
  hand: {
    you: [],
    opponent: []
  },
  history: [],
  turn:
}

Structure 2:

{
  you: {
    player: {
      name: 'Inooid',
      health: 30,
      mana: 1,
      armor: 0,
      damage: 0,
      weapon: null,
      hero: null
    },
    deck: [],
    hand: [],
    board: [null, null, null, null, null, null, null]
  },
  opponent: {
    player: {
      name: 'Kanopi',
      health: 30,
      mana: 1,
      weapon: null,
      hero: {
        heroClass: 'Warlock',
        portrait: '',
        power: {
          mana: 2,
          portrait: 'images/heroes/portraits/warlock.png',
          callback: function() { }
        }
      },
      heropower: {
        mana: 2,
        portrait:
      }
    }
  },
  turn: 'you'
}

My preference goes to structure 2, but if there's any other way of doing it, I am open to suggestions!

Thanks to Liquidor for this. His thoughts on the state: http://pastebin.com/raw/UA9jgGrH

@inooid Have you considered trying to normalize the state shape like in the redux docs? The flatter state shape might make it easier to work with. Something like this:

{
  playersById: {
    'you': {
      id: 'you',
      name: 'Inooid',
      health: 30,
      mana: 1,
      armor: 0,
      damage: 0,
      weapon: weaponId1,
      hero: heroId1,
      deck: [cardId, cardId, ...],
      board: [minionId, minionId, ...]
    },
    'opponent': {
      id: 'opponent',
      name: 'Kanopi',
      health: 30,
      mana: 1,
      weapon: weaponId2,
      hero: 'heroId1',
      heropower: 'heropowerId1'
    }
  },
  turn: 'you',
  heroesById: {
    'heroId1': {
      heroClass: 'Warlock',
      portrait: '',
      power: powerId
    },
    'heroId2': {
      heroClass: 'Warlock',
      portrait: '',
      power: powerId
    }
  },
  weaponsById: { ... },
  heropowersById: { ... },
  minionsById: { ... }
}

@Bebersohl Thanks for your input!
This issue is slightly outdated, but you've touched some valid points. I will reconsider the overall state as soon as I have some more time to work on this. 👍 Appreciate your input a lot!

Cool, let me know when you start working on this again. I would like to contribute.

Brandon
import { ReadableTimePipe } from './readable-time.pipe';

const minute = 60;
const hour = 60 * minute;
const day = 24 * hour;

describe('ReadableTimePipe', () => {
  const pipe = new ReadableTimePipe();

  it('create an instance', () => {
    expect(pipe).toBeTruthy();
  });

  it('Format 27 minutes and 17 seconds', () => {
    expect(pipe.transform(27 * minute + 17)).toEqual('27m 17s');
  });

  it('Format 2 hours and 10 minutes 40 seconds', () => {
    expect(pipe.transform(hour * 2 + minute * 10 + 40)).toEqual('2h 10m');
  });

  it('Format 3 days 11 hours and 10 minutes', () => {
    expect(pipe.transform(3 * day + 11 * hour + 10 * minute)).toEqual('3d 11h');
  });

  it('Format only seconds', () => {
    expect(pipe.transform(9)).toEqual('9s');
  });

  it('Format 2000 seconds', () => {
    expect(pipe.transform(2000)).toEqual('33m 20s');
  });

  it('Format floats 189.567 seconds', () => {
    expect(pipe.transform(189.567)).toEqual('3m 9s');
  });

  it('Format negative numbers returns empty string', () => {
    expect(pipe.transform(-1232323)).toEqual('');
  });
});
LLVM 17 was released in the past few weeks, and I'm continuing the tradition of writing up some selective highlights of what's new as far as RISC-V is concerned in this release. If you want more general, regular updates on what's going on in LLVM you should of course subscribe to my newsletter. In case you're not familiar with LLVM's release schedule, it's worth noting that there are two major LLVM releases a year (i.e. one roughly every 6 months) and these are timed releases, as opposed to being cut when a pre-agreed set of feature targets has been met. We're very fortunate to benefit from an active and growing set of contributors working on RISC-V support in LLVM projects, who are responsible for the work I describe below - thank you! I coordinate biweekly sync-up calls for RISC-V LLVM contributors, so if you're working in this area please consider dropping in.

A family of extensions referred to as the RISC-V code size reduction extensions was ratified earlier this year. One aspect of this is providing ways of referring to subsets of the standard compressed 'C' (16-bit instructions) extension that don't include floating point loads/stores, as well as other variants. But the more meaningful additions are the Zcmp and Zcmt extensions, in both cases targeted at embedded rather than application cores, reusing encodings for double-precision FP stores. Zcmp provides instructions that implement common stack frame manipulation operations that would typically require a sequence of instructions, as well as instructions for moving pairs of registers. The RISCVMoveMerger pass performs the necessary peephole optimisation to produce cm.mvsa01 instructions for moving to/from registers a0-a1 and s0-s7 when possible. It iterates over generated machine instructions, looking for pairs of c.mv instructions that can be replaced.
The cm.push and cm.pop instructions are generated by appropriate modifications to the RISC-V function frame lowering code, while the RISCVPushPopOptimizer pass looks for opportunities to convert a cm.pop into a cm.popretz (pop registers, deallocate stack frame, and return zero) or a cm.popret (pop registers, deallocate stack frame, and return). Zcmt provides the cm.jt and cm.jalt instructions to reduce the code size needed to implement a jump table. Although support is present in the assembler, the patch to modify the linker to select these instructions is still under review, so we can hope to see full support in LLVM 18. The RISC-V code size reduction working group have estimates of the code size impact of these extensions produced using this analysis script. I'm not aware of whether a comparison has been made to the real-world results of implementing support for the extensions in LLVM, but that would certainly be interesting.

LLVM has two forms of auto-vectorization: the loop vectorizer and the SLP (superword-level parallelism) vectorizer. The loop vectorizer was enabled during the LLVM 16 development cycle, while the SLP vectorizer was enabled for this release. Beyond that, there's been a huge number of incremental improvements for vector codegen, such that it isn't always easy to pick out particular highlights. But to pick a small set of changes: one concerns the vsetivli instruction that is used to set the vtype control register. LMUL in the RISC-V vector extension controls grouping of vector registers; for instance, rather than 32 vector registers, you might want to set LMUL=4 to treat them as 8 registers that are 4 times as large.
The "best" LMUL is going to vary depending on both the target microarchitecture and factors such as register pressure, but a change was made so that LMUL=2 is the new default LMUL (register grouping) for RISC-V vectorization; however, in the case of the immediate forms of vsetvli occurring in the input, the LMUL specified there is used. If you want to find out more about RISC-V vector support in LLVM, be sure to check out my Igalia colleague Luke Lau's talk at the LLVM Dev Meeting this week (I'll update this article when slides and recording are available).

It wouldn't be a RISC-V article without a list of hard to interpret strings that claim to be ISA extension names (Zvfbfwma is a real extension, I promise!). In addition to the code size reduction extensions listed above, there have been lots of newly added or updated extensions in this release cycle. Do refer to the RISCVUsage documentation for something that aims to be a complete list of what is supported (occasionally there are omissions) as well as clarity on what we mean by an extension being marked as "experimental".

It landed after the 17.x branch so isn't in this release, but in the future you'll be able to use --print-supported-extensions with Clang to have it print a table of supported ISA extensions (the same flag has now been implemented for Arm and AArch64 too).

As always, it's not possible to go into detail on every change. A selection of other changes that I'm not able to delve into more detail on: Clang's kernel control flow integrity support (used via CONFIG_CFI_CLANG in the Linux tree) had target-specific parts that were previously unimplemented for RISC-V, and this gap was filled for the LLVM 17 release. Routines such as memcpy gained optimised RISC-V specific versions. There will of course be further updates for LLVM 18, including the work from my colleague Mikhail R Gadelha on 32-bit RISC-V. Apologies if I've missed your favourite new feature or improvement - the LLVM release notes will include some things I haven't had space for here.
Thanks again to everyone who has been contributing to making RISC-V support in LLVM even better. If you have a RISC-V project you think my colleagues and I at Igalia may be able to help with, then do get in touch regarding our services.
What Is Overfitting? In general, overfitting refers to a model that is too closely aligned to its training data set, leading to challenges in practice in which the model does not properly account for real-world variance. In an explanation on the IBM Cloud website, the company says the problem can emerge when the data model becomes complex enough that it begins to overemphasize irrelevant information, or "noise," in the data set. "When the model memorizes the noise and fits too closely to the training set, the model becomes 'overfitted,' and it is unable to generalize well to new data," the company writes. "If a model cannot generalize well to new data, then it will not be able to perform the classification or prediction tasks that it was intended for." So, because of its contours favoring the data that it was trained against, the data model is more likely to produce false positives or false negatives when used in the real world. What Can Cause Overfitting? In some ways, overfitting stems from issues with how the original data model was built, creating gaps in the machine's understanding. This can happen for many reasons — importantly, that a model was built for specific outcomes rather than slightly more generalized ones. (There is also a threat of the opposite problem, underfitting, which happens when the data model isn't mature enough, creating false positives or false negatives.) Overfitting can introduce inefficiency into the business, adding costs. For example, overfitting can lead to issues in detecting security threats to internal platforms, allowing risks to enter a network undetected. When used in data forecasts, it can create a misunderstanding of how big the need for a product is, leading to problems with how that demand is managed within the supply chain. EXPLORE: How predictive analytics helps financial institutions manage risk.
In some cases, overfitting can represent a form of algorithmic bias, in which errors in the data model create negative outcomes for the end user — for example, if people are more likely to be denied a loan or credit based on a predetermined level of risk that doesn’t account for their specific circumstances. Challenges in attempting to build data models ethically reflect the importance of taking steps to avoid overfitting when bias or discrimination is a concern. Cassie Kozyrkov, the chief decision scientist at Google, said in a 2019 presentation that a key element in battling algorithmic bias caused by methods such as overfitting is to test heavily against the available data. “Computers have really good memory,” Kozyrkov said, according to VentureBeat. “So the way you actually test them is that you give them real stuff that’s new, that they couldn’t have memorized, that’s relevant to your problem. And if it works then, then it works.” How Can IT Teams Test and Detect Overfitting? Strong testing is the key factor in avoiding overfitting — and a key tell of an overfit model is that when the model is put into a real-world setting, it strongly underperforms compared with its performance against the training data it used. As data science blogger Juan Orozco Villalobos of the website BrainsToBytes noted, the variance in how the model performs in the real world compared with the test set tells the full story. “The easiest way to find out if your model is overfitting is by measuring its performance on your training and validation sets,” Villalobos says. “If your model performs much better with training data than with validation data, you are overfitting.” He adds that introducing more test data can help strengthen the model against such quirks over time. However, it’s worth keeping in mind that there may be some cases in which overfitting is preferred.
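The train-versus-validation check Villalobos describes is easy to demonstrate. The sketch below is a hypothetical toy example of my own (not from any of the sources quoted here): a 1-nearest-neighbour classifier, a model that literally memorizes its training set, scores perfectly on training data while scoring lower on fresh validation data drawn from the same noisy distribution.

```python
import random

random.seed(0)

def make_point(label):
    # two noisy, overlapping clusters centered at 0.0 and 1.0
    return ([label + random.gauss(0, 0.6)], label)

train = [make_point(i % 2) for i in range(40)]
valid = [make_point(i % 2) for i in range(40)]

def predict_1nn(x, train_set):
    # 1-nearest-neighbour: a model that memorizes its training set exactly
    nearest = min(train_set, key=lambda p: abs(p[0][0] - x[0]))
    return nearest[1]

def accuracy(data, train_set):
    hits = sum(predict_1nn(x, train_set) == y for x, y in data)
    return hits / len(data)

train_acc = accuracy(train, train)  # 1.0 by construction: each point is its own neighbour
valid_acc = accuracy(valid, train)  # lower on data the model has never seen

print(f"training accuracy:   {train_acc:.2f}")
print(f"validation accuracy: {valid_acc:.2f}")
```

A large gap between the two numbers is exactly the overfitting signal described above; a model that generalized well would score similarly on both sets.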
The security company CrowdStrike, for example, has found that in the methods it uses to detect malicious activity, overfitting may be preferable to a more generalized approach. “Across many problem domains, models that heavily overfit the training data perform better than the best models that do not,” writes Robert Molony, a senior data scientist at CrowdStrike, in a blog post. “This observation has been replicated across many problem domains and model architectures.” LEARN MORE: Find out how to secure your data all the way to the endpoint. How Can IT Teams Prevent Overfitting? Avoiding overfitting comes down to building a strong data model and testing it heavily, using tools such as CDW Amplified™ Data Services to help analyze the capabilities of your model. In a blog post for the website Towards Data Science, David Chuan-En Lin, a PhD student at Carnegie Mellon University’s Human-Computer Interaction Institute, explains that a number of strategies can help prevent overfitting in data models. Among them:
- For large data sets, set aside a portion of the data for testing the results of the training set. (Lin recommends that about one-fifth of the data be set aside for testing purposes.) This enables a re-creation of real-world conditions by allowing the data set to be tested against information not included in the model.
- For smaller data sets, apply data augmentation to artificially increase the size of the data set. Lin notes that this approach is effective in cases of image classification, in which images can be rotated or warped to create additional variables.
- For data sets with large numbers of features, simplify the number of features analyzed so that the data model is not built with a high degree of specificity.
- Apply regularization techniques to the models, such as L1 or L2 regularization or eliminating layers within the model, to remove complexity from the model.
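Of the strategies Lin lists, L2 regularization is the easiest to show in a few lines. Below is a minimal hand-rolled sketch (my own toy example, not Lin's code): in gradient-descent linear regression, the penalty adds a `2 * l2 * w` term to the gradient, shrinking the weight toward zero and discouraging the model from chasing every wiggle in the training data.

```python
# Plain least-squares fit vs. the same fit with an L2 penalty, via gradient descent.
def fit(xs, ys, l2=0.0, lr=0.01, steps=2000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        grad_w += 2 * l2 * w  # the L2 term: pushes the weight back toward zero
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

xs = [0, 1, 2, 3, 4]
ys = [0.1, 1.9, 4.2, 5.8, 8.1]  # roughly y = 2x with a little noise

w_plain, _ = fit(xs, ys, l2=0.0)
w_reg, _ = fit(xs, ys, l2=5.0)
print(f"unregularized weight:  {w_plain:.3f}")
print(f"L2-regularized weight: {w_reg:.3f}")
```

The regularized weight comes out smaller in magnitude than the unregularized one; in a high-dimensional model the same shrinkage is what keeps the fit from memorizing noise.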
Beyond these more traditional approaches, up-and-coming technologies could also provide a potential solution to the overfitting problem, depending on the use case. For example, the GPU manufacturer NVIDIA has been building methods for using synthetic data in training deep neural networks. Last fall, the company announced its Omniverse Replicator, a data generation engine that can help create synthetic data in use cases such as autonomous driving or robotics. In a recent interview with IEEE Spectrum, the company’s vice president of simulation technology and Omniverse engineering, Rev Lebaredian, said using synthetic data can make it easier to account for issues of algorithmic bias “because it’s much easier for us to provide a diverse data set.” “If I’m generating images of humans and I have a synthetic data generator, that allows me to change the configurations of people’s faces, their skin tone, eye color, hairstyle, and all of those things,” he says.
7 Most Trending Programming Languages of 2019 Aspiring developers need to know which languages to learn; they need to select the right education and work on a skill set that will impress future employers and land them their dream job. So what are the top programming languages? And which is the best one to learn? We’ve compiled a list for you that highlights the most in-demand programming languages based on current job postings on the market. Here are the programming languages with the most job postings on Indeed as of January 2019: - Java – 65,986 jobs - Python – 61,818 jobs - C++ – 36,798 jobs - C# – 27,521 jobs - PHP – 16,890 jobs - PERL – 13,727 jobs This year Java grew by around 6% compared to last January, when it was right around 62,000 job postings. Java is just about to celebrate its 24th birthday, and as a programming language, it has definitely stood the test of time. Java was developed by James Gosling, a Canadian computer scientist who used to work at Sun Microsystems. It’s a language that lets developers “write once, run anywhere” (WORA), which means its compiled code, also known as bytecode, can run on almost any platform without recompilation. Python was first released in 1991 and was designed by Guido van Rossum, a Dutch programmer. It is a high-level programming language that is often used as a ‘glue’ language to connect large existing software components. It is also an object-oriented programming language that offers a vast collection of useful libraries and extensions for developers and programmers. Python is often described as simple and easy to learn, with a readable syntax that decreases the cost and time of program maintenance. This year, Python is skyrocketing with an increase of about 24%, at roughly 61,000 job postings compared to last year’s 46,000. C++ was designed as an enhanced version of the C language by Bjarne Stroustrup, a Danish computer scientist.
Its four-year development started back in 1979, and it was released in 1983. C++ is usually used for game development, drivers, client-server applications, system/application software, and embedded firmware. This year, C++ grew in popularity by 16.22% compared to last year, with almost 37,000 job postings. C# is a Microsoft programming language and is a hybrid of the C++ and C languages. It lets developers build secure applications such as XML Web services, client-server, Windows client and database applications that run on the .NET Framework. C#’s job postings didn’t grow that much over the year, but it’s still one of the most popular languages. PHP, or Hypertext Preprocessor, was created by Rasmus Lerdorf, a Danish-Canadian programmer. It’s an open-source general-purpose scripting language for web development that can be embedded into HTML code and executed on the server side. It’s commonly used to draw data out of a database onto web pages. PHP’s job postings increased by 2,000 compared to last year. PERL first appeared in 1987, designed by an American computer programmer, Larry Wall. Wikipedia says it’s a “family of two high-level, general-purpose, interpreted, dynamic programming languages, Perl 5 and Perl 6.” Perl’s popularity didn’t increase this year, but it is still one of the most popular programming languages to learn. The Top Programming Languages There are 256 known programming languages in the world. This is a list of the most popular programming languages, which they update every month. A statically typed, cross-platform, general-purpose programming language, currently at #31, could enter the top 20 because of its fast adoption in the industrial mobile app market. This prediction only shows that the tech industry is moving faster than it ever has, and if we don’t keep up, we will be left behind. It’s important to note, however, that the list doesn’t show the best programming language there is.
Its main aim is to help developers know whether their skills are up to date with their level of expertise and which programming languages they can add to their skill set. The key to becoming a successful developer is to have a never-ending desire to learn and grow. Mastering one programming language is commendable, but sometimes it proves to be a liability, as it becomes a developer’s limitation. Employers look for developers with a range of programming skills who can be taught new languages quickly and skillfully. Therefore, learning a new programming language is imperative if you want to become a successful, full-fledged developer.
#include "main.h"
#include "cmd_dead.h"
#include <iostream>
#include <map>
#include <memory>
#include <set>
#include <stack>
#include <utility>

using namespace std;
using namespace ccspp;

// Breadth-first search of the CCS state space for deadlocked processes
// (states with no outgoing transitions), printing a path to each one.
int cmd_dead(CCSProgram& program)
{
    set<shared_ptr<CCSProcess>, PtrCmp<CCSProcess>> visited;
    set<shared_ptr<CCSProcess>, PtrCmp<CCSProcess>> frontier;
    map<shared_ptr<CCSProcess>, CCSTransition, PtrCmp<CCSProcess>> pred;
    frontier.insert(program.getProcess());
    int depth = 0;
    while((opt_max_depth < 0 || depth < opt_max_depth) && !frontier.empty())
    {
        set<shared_ptr<CCSProcess>, PtrCmp<CCSProcess>> frontier2;
        for(shared_ptr<CCSProcess> p : frontier)
        {
            visited.insert(p);
            set<CCSTransition> trans;
            try
            {
                trans = p->getTransitions(program, !opt_no_fold);
            }
            catch(CCSException& ex)
            {
                if(opt_ignore_error)
                {
                    cerr << "warning: " << ex.what() << endl;
                    continue;
                }
                else
                {
                    cerr << "error: " << ex.what() << endl;
                    return 1;
                }
            }
            if(trans.empty())
            {
                // deadlock found: walk the predecessor map back to the start state
                stack<CCSTransition> path;
                while(pred.count(p))
                {
                    CCSTransition next = pred[p];
                    path.push(next);
                    p = next.getFrom();
                }
                if(opt_full_paths)
                {
                    if(path.empty())
                        cout << *p << endl;  // the initial process itself is deadlocked
                    else
                    {
                        cout << *path.top().getFrom();
                        while(!path.empty())
                        {
                            CCSTransition next = path.top();
                            path.pop();
                            cout << " --( " << next.getAction() << " )-> " << *next.getTo();
                        }
                        cout << endl;
                    }
                }
                else
                {
                    cout << "[";
                    bool first = true;
                    while(!path.empty())
                    {
                        if(!first)
                            cout << ", ";
                        first = false;
                        cout << path.top().getAction();
                        p = path.top().getTo();
                        path.pop();
                    }
                    cout << "] ~> " << *p << endl;
                }
            }
            for(CCSTransition t : trans)
            {
                shared_ptr<CCSProcess> p2 = t.getTo();
                if(!visited.count(p2))
                {
                    frontier2.insert(p2);
                    pred[p2] = t;
                }
            }
        }
        depth++;
        frontier = move(frontier2);
    }
    return 0;
}
M: Practical attack against TLS/SSL and RC4 - tomvangoethem http://www.rc4nomore.com/?hn R: api It's fairly likely that rumors of the NSA's ability to 'decrypt SSL' refer to RC4 vulnerabilities. R: throwaway507 Are you sure about that? This attack requires a js exec in browser to generate lots of traffic containing the cookie. It's a little impractical to use in a SIGINT capacity. My bet is still on precalculated DH, as in the Logjam attack. Still, the quote from @ioerror is: "RC4 is broken in real time", so that's either hyperbole or there is an attack better than 75 hours still out there. R: netheril96 The NSA has been ahead of the state of the art in cryptography, as the past has shown. So perhaps they have already had an even more practical attack on RC4 for a long time. R: theandrewbailey While I agree that RC4 should die in a fire, this attack seems impractical to me. > To successfully decrypt a 16-character cookie with a success probability of > 94%, roughly 9x2^27 encryptions of the cookie need to be captured. Since we > can make the client transmit 4450 requests per second, this amount can be > collected in merely 75 hours. How likely would that amount of network traffic and energy consumption cue the potential victim that something malicious is going on? R: tomvangoethem Colleague of the author here. I guess that 4450 requests/s to one IP, or even spread across multiple IPs, could trigger some alarms if the victim is alert. Unfortunately, I'm not that familiar with IDS/IPSes to answer that with much confidence. In any case, an attacker has a lot of options. The requests do not need to be made sequentially, so an attacker could basically start and resume his attack whenever he wants, e.g. when the victim is away from keyboard (which he can estimate based on the network traffic someone usually generates). An attacker could also simply slow down the number of requests/s, although this results in a larger number of hours required for a successful attack.
As for energy/CPU consumption, I don't think that'd be a big concern. When the practical attack was performed, the CPU usage went up to around 75%, still allowing one to visit other websites without noticing anything. So unless one were to closely monitor the CPU/network usage, I don't think the average victim would notice it. R: theandrewbailey > As for energy/CPU consumption, I don't think that'd be a big concern. What if this attack targeted a phone or laptop? The battery would die faster, the device would get warmer, and fans would spin up. R: the8472 If your last line of defense in encryption/network security is noticing that the fan spins up more often, then you have already lost the game. Why even bring it up? R: theandrewbailey It's not a line of defense. I'm thinking about less computer-literate people. Do you know a friend or family member that would call you and say "My laptop's really hot, loud, and slow, but it's not doing anything!" and ask for advice? R: the8472 Do you really think they're the kind of people who'll be targeted by this attack instead of some refined version in the future? Do you really think this scenario is realistic and worthy of consideration at all? R: vishwajeetv As Roy T. Fielding once said in his research paper, "Cookie-based applications on the Web will never be reliable!" [https://www.ics.uci.edu/~fielding/pubs/dissertation/evaluati...](https://www.ics.uci.edu/~fielding/pubs/dissertation/evaluation.htm) Section 6.3.4.2 R: schmichael Does anyone else find it ironic that not only is this link HTTP, but HTTPS is broken for this domain? [https://www.rc4nomore.com/](https://www.rc4nomore.com/) Hopefully the NSA MITMs it with an "RC4 is fffiiinnneee" message. R: ceejayoz It's not really their fault - they're hosting on Github, whose infrastructure presents a Github.com cert to HTTPS requests. R: mbrubeck [https://vanhoefm.github.io/rc4nomore/](https://vanhoefm.github.io/rc4nomore/) is a valid HTTPS address for the site.
R: cyphar I feel "practical" is too strong a word here. It's probably a _more_ practical attack than previous attacks, but that doesn't make it practical by a long stretch. "Only" 75 hours, where you have to force the victim to make a very large number of encrypted messages. IMO, this wouldn't work when trying to break someone's SSL connection at the local Starbucks. R: dlitz > ...but that doesn't make it practical... If I had a dime for every penny of damage caused when people downplay the practicality of attacks against deployed crypto... 75 hours is enough time to attack a laptop left plugged in at the office over a 3-day weekend, and there's no reason why you'd have to attack only one laptop at a time. The paper also says, "capturing traffic for 52 hours already proved to be sufficient", so it's not like 75 hours is some hard minimum. Also: "Our attack is not limited to decrypting cookies. Any data or information that is repeatedly encrypted can be recovered." "We can break a WPA-TKIP network within an hour." RC4 is dead, dead, dead. As with MD5, the writing's been on the wall for a while now, and attacks are only going to get better. R: yuhong The attack numbers are under artificially generated network traffic. R: gipsies Yes, but we present several techniques on how to generate these amounts of data. For TLS and HTTPS you can use JavaScript. For WPA-TKIP you need control of one TCP connection, and that is enough to generate the data. We're not saying it's a point-and-click attack, but it's a very good reason to start worrying :) R: userbinator The keys they used were only 128 bits, whereas RC4 actually supports up to 2048 bits. I wonder how much that affects their results. (AFAIK the 128 bits is an export restriction thing, upgraded from the previous trivially-breakable 40 bits.) Also, 16 characters seems awfully short for a cookie, especially one meant for authentication purposes.
R: xyzzy123 I don't think SSL/TLS allow key lengths > 128 bits with RC4. Export is 40 or 56 bits. You can see most supported ciphers here: [https://www.openssl.org/docs/apps/ciphers.html](https://www.openssl.org/docs/apps/ciphers.html) e.g.:
TLS_RSA_WITH_RC4_128_MD5          RC4-MD5
TLS_RSA_WITH_RC4_128_SHA          RC4-SHA
TLS_ECDH_RSA_WITH_RC4_128_SHA     ECDH-RSA-RC4-SHA
TLS_ECDH_ECDSA_WITH_RC4_128_SHA   ECDH-ECDSA-RC4-SHA
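For readers wondering what "broken" means concretely: the rc4nomore attack exploits statistical biases in RC4's keystream. The toy sketch below demonstrates the older and simpler Mantin-Shamir bias, where the second keystream byte is 0 roughly twice as often as a uniform generator would produce. This is not the specific bias the paper uses (it relies on other keystream biases), but it illustrates the same class of weakness.

```python
import random

def rc4_keystream(key, n):
    # key scheduling (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # keystream generation (PRGA)
    i = j = 0
    out = []
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return out

random.seed(1)
trials = 10000
# count how often the *second* keystream byte is zero, across random keys
zeros_at_2 = sum(
    rc4_keystream([random.randrange(256) for _ in range(16)], 2)[1] == 0
    for _ in range(trials)
)

print(f"P(2nd byte == 0) ~ {zeros_at_2 / trials:.4f}  (uniform would be {1 / 256:.4f})")
```

Biases like this, accumulated over millions of encryptions of the same plaintext (the cookie), are what let an attacker vote on the most likely plaintext byte at each position.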
Is it poor form to look through a user's previous questions when they ask a very bad one? It seems that when a user asks a bad question, they have likely asked a number of other poor questions in the past as well. (I only have anecdote to back this up, no hard evidence.) After voting to close a question that I consider to be particularly poor, is it a bad idea to look through some of the offending user's other questions for other poor questions that may be deserving of a close-vote or a down-vote, and possibly popping into a chat room to ask others to look at questions I think should be closed? I could see the abuse filters picking up this sort of behavior, since it is targeted at a single user, but at the same time it seems to me that cleanup should happen when cleanup is needed. Is doing this considered abusive behavior? Is there a better way to address this? That's great form. How else can we find patterns of extremely bad content, or outright abusive behaviour (like asking the same question over and over because the OP is not happy with the initial answer)? So then, perhaps the question that should really be asked is how this behavior can be made to be considered "acceptable" by the abuse filter? (I haven't run into it yet that I know of, but I worry, which keeps me from doing this more aggressively.) You mean the downvote abuse filter? Yeah, good point... although maybe knowing that we should cast only 2-3 downvotes on a single user is a good thing. Any really extreme stuff could then be flagged. @Won't - But, what about the abuse filter(s)? My heart says that almost every close-vote I give should also be accompanied by a down-vote, but for some users that could be a fairly high number. I found myself giving a lot fewer downvotes once I gained the privilege to vote to close. I save my downvotes for the questions where it feels like the question could not possibly be improved.
(Closed questions can be re-opened by others; downvotes can only be reversed by me, and then only if someone takes the time to tell me that the question has been improved.) No, I don't think that is bad. But I might be biased, because I do this all the time. The only things to watch out for are: Don't serially (down)-vote the user, or your votes will probably be automatically reversed. It's intentionally a secret exactly what the threshold is, but use some common sense. It doesn't do any good to downvote poor questions in the name of cleaning up the site if those votes are just going to be automatically reversed in 24 hours. Don't fall into the trap of enacting a personal vendetta against the user. Remember, you're assessing only their questions here, not them as a person. The correlation is strong between one poor question and many poor questions, and that's the only thing you should be operating on. Always judge each question independently on its own merits. If you see a glimmer of hope, a chance of improvement, always take the opportunity to be proactive. Some good options include editing the question yourself to improve it, and/or leaving comments for the user suggesting things (s)he can do to improve either that question or, more generally, their future questions. I'm glad to know I'm not the only one who uses a bad question as a flag for possible deeper problems. And yes, I do make sure to evaluate each question on its own, tending to err on the side of leaving things open rather than closing everything. The serial down-voting is a concern, especially when a user's questions are borderline close-vote eligible and only a down-vote is warranted. In those cases, I tend to only do a few and leave comments. I do that too - good point about the serial downvoting. You might also mention serial flagging - there was an incident a few months ago where someone went through a user's account and flagged dozens of their (admittedly poor) posts.
Not recommended behavior, needless to say. @Adam: I suppose I somehow missed that episode, but serial flagging won't be reversed; it'll just get the moderators after you with pitchforks, crazily muttering something about the number of flags in the queue. The better approach is clearly to raise a single flag and use that to explain the problem. Yep. See Tim's comment here: http://meta.stackexchange.com/questions/120806/what-should-we-do-when-a-single-user-has-pending-flags-on-many-of-their-answers Not only is this not poor form, it is in fact a good thing to do - it helps weed out bad questions from the site and, with any luck, helps educate the user as to what makes a bad question. I suggest that you post comments on such other questions explaining how they can be improved as part of the process. That was what I thought, but I wanted to check. In general, I do try to offer at least a pointer towards the FAQ and what constitutes a good question. A caution: if you were to comment on several of a user's questions then they would quite likely notice that all the comments were from you (same name in a bunch of inbox entries) and possibly feel stalked or stomped on — not a mood conducive to learning. This is a very useful thing to do. It takes the focus off of filtering poor questions and focuses on filtering out terrible users (and spam). It's something I do all the time. One such example where it turned out to be helpful is described in this question, where the user was abusing the system. After reading an extremely poor question (badly written and way off-topic), I clicked on the user and discovered a repost of the same thing on a different SE site (also closed), and a question that was migrated then closed. So yes. Do it. It can help prevent "straight-up abuse".
Unconventional, pretty bicycle helmets My wife and I do a lot of urban riding - mostly on bike lanes - and usually we don't wear a helmet for these rides. But I'd like us to start, and I'd like to get her a nice helmet, one that doesn't look like a standard bicycle helmet. I found a couple of companies that make the sort of thing I'm considering - for example, Yakkay, Nutcase or Lazer. Other than being more expensive, do these helmets have serious drawbacks? We live in a pretty hot, humid climate, so ventilation is definitely a concern. Some clarifications: I want to point out that even though I described it as "hot and humid" here, lots of people - probably a sizable minority - do ride with helmets. I ride with one on the longer rides, but I have a habit of skipping it for the short urban ones. The two of us agree, in principle, about wearing a helmet, but I think she'll be much happier starting with something pretty. This is Tel Aviv, by the way. Those seem to be very poorly vented. Ventilation is a major issue with helmets in just about any climate. I'd suggest you buy an inexpensive standard helmet and wear it awhile to become familiar with the various issues with helmets. But, please, WEAR A HELMET. If you don't like bicycle helmets, there are a lot of other sports (for example, roller-blade skating, polo (on horses), mountain-climbing, etc.) that use helmets which might have a more "eye-friendly" look. Other than that, I think you have already mentioned the (probably) most fashion-oriented options. But don't worry, "ugly helmet" is just a concept they try to put on our heads... ;oP There seems to be a lot of concern about how hot and humid it is here. Yes, it's hot and humid in Tel Aviv. But it's also an exceptionally flat and concentrated city - most rides are less than 5 km and without much elevation gain, and don't require much effort. Ventilation is important, but this isn't the worst place in the world to sacrifice ventilation for fashion.
I wouldn't worry about hot and humid. If you're sweating a lot, wear a headband to keep it out of your eyes, and your bike probably has an attachment for a water bottle. Use it. The less ventilation, the safer it is if you get in a crash. I only go for the helmets that BMX riders and downhill racers wear. I got my girlfriend a colourful one. Personally, I think that good cycling helmets are beautiful. The lines are elegant. The transitions between the surface of the helmet and the vents are graceful. The way that the helmet increases in size from front to back is reminiscent of the wind. They're really quite astounding pieces of technology, if you really look at them. On the other hand, cheap bicycle helmets just look bulky and blunt. And the type that most people seem to be linking to, the "brain buckets" that look like skateboarding or horse racing helmets, are just hideously ugly. I know that doesn't really answer your question, but as you mentioned, ventilation is a big issue and I really think it's worth it to have a good cycling helmet. And there's a big difference in ventilation between a $40 and a $100 helmet. I think if you drop the money on a good cycling helmet, you'll be much happier with it than you will with a "fashion" helmet. Assuming I ride a lot, but know nothing about helmets, can you tell me how to look for a /good/ helmet? Gladly! Here's my answer to that in another person's question: http://bicycles.stackexchange.com/a/9756/4239 Naturally, scroll up and down to see others' opinions. These "hideously ugly" helmets are likely safer because they have a more spherical shape to better address rotational impacts, and fewer vents that objects can penetrate. More on that here. It's not clear to me how spherical helmets are supposed to mitigate rotational injuries, and the article you linked doesn't explain it, nor does it link to a source that backs up that claim. In fact, it says, "Nothing has been shown one way or another though, for bicycle helmets."
The penetrating objects thing makes sense though. On the other hand, I've had a couple of crashes that resulted in broken helmets (pretty ones and ugly ones) and head injuries, and I've never been at risk for a penetrating object regardless of the helmet. Ventilation, however, is a daily concern. I have this Nutcase helmet: I really like it, it's comfortable to wear and I love the way it looks. I'd say the ventilation was good but not great. It's certainly enough for the conditions I ride in - the temperature is rarely above 25°C and humidity is generally somewhere around 60%. If the temperature and humidity you ride in are a lot higher, then you probably want something with a bit more airflow. I remember the first time I got shot out of a cannon... @lawndartcatcher Ha! The ventilation on that looks absolutely terrible. It's all very well describing the temperature and humidity where you live, but how much effort are you putting out while cycling? That helmet looks like it would be too hot for me at 15-20C. Bern makes less 'sporty' helmets such as the BERKELEY for women and the BRENTWOOD for men, which may be slightly more ventilated than other options. However, a traditional cycling helmet with lots of ventilation is probably going to be much cooler, unless you're doing very casual riding. the Berkeley helmet http://www.bernunlimited.com/assets/products/Womens_Helmet/summer12/berkeley/main-berkeley-atlantic.jpg My wife and I have Bern helmets. I have the G2. I have never found it to be "too hot", even when riding 100+ miles at nearly 90 degrees (F). I notice the lightweight design, and much appreciate the sun visor. (I have expected it to feel hotter on some occasions; it just has never felt that much hotter.) Beware of helmets that have other than smooth plastic surfaces. I don't know if there is any scientific evidence for it, but my practical sense tells me that a helmet with a cloth surface (like Yakkay helmets) can lead to extra injury in the case of an accident.
In many cases you tend to slide on your helmet when you crash on the street/bike lane, and if your helmet is covered with a material that tends to have higher friction than the standard plastic shell, your neck must bear the extra forces that are generated. Swedish company Hövding manufactures an "invisible" helmet, essentially an airbag. See it in action here. (source: hovding.com) That's just plain crazy. Do you have one? How is it? I don't have one. Yet. :) Note that like an airbag in a car, this is single use only. My personal favorite helmet company, both for quality and style, is Kask Safety. They make well-made, craftsman-quality helmets with modern engineering for safety, fit, comfort and ventilation, and with old-world touches like butter-soft Italian leather straps. You can find their road cycling helmets here and their Urban designs here. a Kask Urban helmet http://www.kask.it/kask/components/com_virtuemart/shop_image/product/bianco%20nero%20rosso%20copia.jpg Depending on your budget, you could probably get something hand painted. After some quick Googling, I found this on Etsy. Looks like a standard Bern or Nutcase helmet, but it's been painted. There are probably some manufacturer warnings about not painting the helmet, but I wouldn't imagine that paint would have much of an effect on these hardshell helmets, as long as you used a non-industrial paint. And it would be safer than no helmet at all. You could probably contact your local high school or arts college to see if they have any students who would be interested in doing something like this as a project. It would probably be a lot cheaper to hire someone locally than to pay $300 for the helmet on Etsy. Found the non-Etsy site that seems to have a larger selection here. What matters is not whether the paint is "industrial" but what solvents it's based on. If it's water-based, it'll come off in the rain; if it's not water-based, you need to worry about the solvents weakening the plastics.
Ribcap is another alternative helmet, which does not have a hard shell but a semi-flexible material which hardens on impact. Traditionally, this was used for snowboarding, but it is now being promoted for cycling use. However, while I wouldn't trust this as much as a standard cycling helmet, it would offer more protection than going without a helmet. a Ribcap helmet http://www.ribcap.ch/images/sized/media/collection/ribcap_jackson_red_big_2-530x410.png The fact that you both suggest them and hesitate to trust them is a point against, but I think either way they'll be too hot for our climate (but pretty enough).
request mysql with curl I have a MySQL server running. I can access it with mysql -h <IP_ADDRESS> -u root -P 8009 -p (password requested) Now I want to make requests with curl : curl <IP_ADDRESS>:8009 ???? I need to pass the password and some simple commands (an INSERT in one database). Is it possible, and how do I do that? EDIT : Since it seems impossible, I broaden the scope of the question. I'm open to wget or whatever makes it possible to run simple commands against a MySQL database without installing specific utilities (mysql itself, python packages, etc.), i.e. using commonly available commands in bash. I have no real control over the environment from which I will make the requests. This smells like an XY-problem: https://meta.stackexchange.com/a/66378 In short: There's no nice/neat way to speak MySQL's protocol with easy tools; it's a binary protocol optimized for efficiency. If really necessary, install a webserver on the MySQL server (or another host) which will take HTTP(S) requests (from curl, wget, etc.) and send them to the database. @Tobias Mädel My X-problem is: I want to create a Milvus-MySQL docker compose. But the milvus container has to wait not only for mysql to be up, but also for it to have created a database "milvus". And the milvus image does not have "mysql" in it. @Tobias Mädel Anyway, the Y-problem is interesting by itself because, for testing/debug purposes, it can be handy to run "curl ..." without installing mysql on the client side. You can't. curl is an HTTP client. MySQL doesn't use the HTTP protocol. There was apparently a plugin that added an HTTP API, but the MySQL Labs page says this: These binaries were created by MySQL testing servers. They are NOT FIT FOR PRODUCTION. They are provided solely for testing purposes, to try the latest bug fixes and generally to keep up with the development. Please, DO NOT USE THESE BINARIES IN PRODUCTION. So you're out of luck. Thanks for the answer. In fact I'm not totally stuck with curl. I edited the question to be less specific.
@LaurentClaessens MySQL uses a proprietary protocol. You will need MySQL-specific, or at least database-specific, tools. Using a password is only possible in the browser-based REST API for MySQL Cloud Service. It does not apply to your case - the protocol used to communicate with MySQL is not HTTP, so you cannot use an HTTP client like cURL. Thanks for the answer. In fact I'm not totally stuck with curl. I edited the question to be less specific. You can't get away from using a MySQL client of some kind.
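For the docker-compose X problem mentioned above, a common pattern is to let the official mysql image create the database at startup via the MYSQL_DATABASE environment variable, and to gate the dependent container on a healthcheck, so no extra client tooling is needed inside the other container. This is only a sketch; service names, image tags, and credentials are illustrative:

```yaml
# Sketch only: names and credentials are illustrative.
services:
  mysql:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: milvus        # created automatically on first start
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-psecret"]
      interval: 5s
      retries: 10

  milvus:
    image: milvusdb/milvus
    depends_on:
      mysql:
        condition: service_healthy  # start only after the healthcheck passes
```

With this, the Milvus container does not need curl, wget, or a mysql client at all; Compose itself waits for MySQL to be ready.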
STACK_EXCHANGE
Xamarin App with Serverless Backend In a fast-paced world of technology, developing mobile applications that are both scalable and efficient has become increasingly important. Xamarin, a widely-recognized cross-platform app development framework, has risen in prominence due to its ability to simplify the creation of applications for Android and iOS devices. In this article, we’ll examine the process of constructing a Xamarin app with a serverless backend, leveraging the capabilities and adaptability of Back4App. This dynamic pairing enables developers to craft versatile, feature-packed applications while eliminating the burden of server administration, cutting down on development time, and enhancing the app’s overall performance. What are C# and .NET? C# (spoken as “C-sharp”) is a contemporary, object-oriented programming language developed by Microsoft as a component of the .NET initiative, which was launched in 2000. C# is intended to be user-friendly, potent, and adaptable, drawing on elements from languages like C, C++, and Java. It is commonly utilized to develop diverse applications, encompassing web applications, desktop applications, mobile apps, and games. .NET (spoken as “dot net”) is a software framework, also created by Microsoft, which offers a runtime environment along with a collection of libraries, tools, and services to simplify the process of building, deploying, and managing applications. .NET accommodates multiple programming languages, such as C#, Visual Basic .NET (VB.NET), and F#. The framework is designed with cross-platform capabilities, enabling developers to craft applications that can function on various operating systems like Windows, Linux, and macOS. What is Xamarin? Xamarin is a versatile and powerful cross-platform app development framework designed to enable developers to build native-like applications for multiple platforms, including Android, iOS, and Windows, using a single codebase. 
Created by Xamarin Inc., which was later acquired by Microsoft, Xamarin leverages the C# programming language and .NET framework, providing a consistent and familiar development environment. Using Xamarin, developers can reduce development time and effort while still achieving exceptional performance and native UI experiences. Xamarin offers a comprehensive set of tools, libraries, and runtime environments, including Xamarin.Forms for shared UI development and Xamarin.iOS and Xamarin.Android for platform-specific implementations. As a result, Xamarin has become popular among developers looking to create high-quality, maintainable, and efficient cross-platform applications. Benefits of Using Xamarin Xamarin presents a range of advantages for developers and organizations aiming to develop cross-platform applications. Some of the primary benefits of using Xamarin include: - Unified Codebase Xamarin empowers developers to create a single codebase in C# that can be utilized across platforms like Android, iOS, and Windows. This consolidates the development process and simplifies maintenance and updates for applications. - Native-like Performance Xamarin-built applications can deliver performance levels comparable to native apps, as Xamarin can access platform-specific APIs and take advantage of hardware acceleration. This ensures a seamless user experience akin to that of native applications. - Native User Interfaces and Experiences Xamarin offers tools and libraries, like Xamarin.Forms, to help developers craft native UI and UX for each platform. This guarantees that Xamarin-developed apps resemble native apps on every platform, providing a consistent experience for users. - Reusable Code and Libraries Xamarin enables developers to generate shared libraries across different platforms, minimizing the need for platform-specific coding. This leads to less code duplication and easier maintenance, as developers only need to modify the shared codebase.
- Robust Community and Support Xamarin boasts a large, active community of developers, as well as comprehensive documentation and resources from Microsoft. This makes it simpler for developers to find assistance and solutions to challenges they may face during development. - Compatibility with .NET Ecosystem Xamarin is closely integrated with the .NET ecosystem, offering developers access to an extensive selection of libraries, tools, and services to enhance their applications’ functionality. - Cost Efficiency Xamarin is a cost-effective option for businesses, as it reduces the time and resources needed to create cross-platform applications. Developers can save on development and maintenance expenses by employing a single codebase and shared libraries. Integrating Xamarin with Serverless Backend Back4App: Step-by-Step Guide - Create a Back4app Account To get started, follow the URL to sign up on the Back4app platform for effortless backend integration. - Create a New App Once you have logged into Back4app, proceed to create a new app by selecting an appropriate name and choosing between a relational or non-relational database. Simply click the “NEW APP” button to initiate the process. Make sure to opt for the “Backend as a Service” when creating the app. Enter the app’s name and select the appropriate database provider based on your specific requirements. This will create the “CMS” app. - Choose a Suitable Development Framework Navigate to the menu tab and click on “API.” From there, choose your preferred platform for app development. In this scenario, we will opt for the Xamarin development framework. Once you click on the Xamarin icon, you will be directed to a new screen displaying the Xamarin environment setup documentation, which provides instructions for seamless integration with Back4App. - Download and Install Visual Studio To run the Xamarin Project, first, we need to download “Microsoft Visual Studio.” You can use any other IDE as you like. 
Go to the following web link for Visual Studio and download the Community Version. After downloading, open the setup and install Visual Studio after selecting your requirements. Visual Studio has been successfully installed. - Download Xamarin Project from Back4App Repo To download the Xamarin starter project, go to the following URL and download the zip file of the Git repo. After downloading, extract the file into any directory. - Setup Xamarin Project in Visual Studio Now open Visual Studio and click on "Open a project or solution." Go to the extracted folder, select "App1.sln", and import it into Visual Studio. After loading the project, it will ask for Android SDK License permission. So, click on "Accept." Install the NuGet packages in this project by right-clicking on the App solution and then selecting the "Manage NuGet Packages" option. Search for the Parse package and install it. Search for the Xamarin Android package and install it. - Install an Android Emulator in Visual Studio To execute the application, we need an emulator for the display. So, click on the "Android Emulator" icon. Click on the "Create" button to create a virtual device. It will start downloading the device, and it will be successfully installed. - Setup the Integration Keys with Back4App To integrate our Xamarin project with the Back4App application, we have to add the Application ID and .NET Key from the Back4App platform. So, go to your application and copy the required data. Now paste the copied keys into the "strings.xml" file under the "values" folder in "resource." - Run and Test the Xamarin Application Run the application by clicking on the Emulator Run icon. It will start running the project and loading the mobile screen. The project has been successfully executed. Now go to the Back4App platform to verify that our Xamarin application has been successfully integrated. So the "Installation" class has been successfully created with some data.
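The integration-keys step might look roughly like the following in strings.xml. This is a sketch: the exact resource names depend on the starter project, so treat app_id and dotnet_key as illustrative placeholders rather than the project's actual keys:

```xml
<?xml version="1.0" encoding="utf-8"?>
<resources>
    <string name="app_name">App1</string>
    <!-- Illustrative names: paste your Back4App credentials here -->
    <string name="app_id">YOUR_BACK4APP_APPLICATION_ID</string>
    <string name="dotnet_key">YOUR_BACK4APP_DOTNET_KEY</string>
</resources>
```

The values themselves come from the "App Settings" area of your Back4App application, as described above.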
To sum up, the combination of Xamarin and Back4App for creating cross-platform applications with a serverless backend offers a potent and streamlined solution. Xamarin’s unified codebase, native-like performance, and deep integration with the .NET ecosystem empower developers to build high-caliber apps for various platforms. The serverless architecture provided by Back4App further complements this approach by eliminating server management burdens, cutting down development time, and enhancing app performance overall. While there are certain limitations to using Xamarin, the collective advantages of Xamarin and Back4App make them an appealing option for developers and organizations seeking to develop scalable, easy-to-maintain, and feature-packed cross-platform applications. As technology continues to advance, the fusion of robust development frameworks like Xamarin with serverless backend services like Back4App will undoubtedly play a pivotal role in the mobile app development landscape.
OPCFW_CODE
#include "hermes1d.h"
#include "legendre.h"
#include "lobatto.h"
#include "quad_std.h"

// This test makes sure that the derivatives of
// the Lobatto shape functions starting with the
// quadratic one are the Legendre polynomials
// at all possible quadrature points

#define ERROR_SUCCESS 0
#define ERROR_FAILURE -1

int main(int argc, char* argv[])
{
    // maximum poly degree of Lobatto function tested
    int max_test_poly_degree = MAX_P;
    int ok = 1;

    // precalculating the values of Legendre polynomials
    // and their derivatives, as well as of Lobatto shape
    // functions and their derivatives, at all possible
    // quadrature points in (-1,1)
    precalculate_legendre_1d();
    precalculate_lobatto_1d();

    // maximum allowed error at an integration point
    double max_allowed_error = 1e-12;

    // loop over Lobatto shape functions starting with
    // the quadratic one
    for (int n = 2; n < max_test_poly_degree + 1; n++) {
        // looking at the difference at integration points using
        // Gauss quadratures of orders 1, 2, ... MAX_QUAD_ORDER
        for (int quad_order = 0; quad_order < MAX_QUAD_ORDER; quad_order++) {
            int num_pts = g_quad_1d_std.get_num_points(quad_order);
            double2 *quad_tab = g_quad_1d_std.get_points(quad_order);
            for (int i = 0; i < num_pts; i++) {
                double point_i = quad_tab[i][0];
                //double val = fabs(legendre_val_ref(point_i, n-1) -
                //                  lobatto_der_ref(point_i, n));
                double val = fabs(legendre_val_ref_tab[quad_order][i][n-1] -
                                  lobatto_der_ref_tab[quad_order][i][n]);
                printf("poly_deg = %d, quad_order = %d, x = %g, difference = %g\n",
                       n, quad_order, point_i, val);
                if (val > max_allowed_error) {
                    printf("Failure!\n");
                    return ERROR_FAILURE;
                }
            }
        }
    }
    printf("Success!\n");
    return ERROR_SUCCESS;
}
STACK_EDU
(Automated) business processes evolve over time! And they usually evolve faster than IT systems do. So how can business process changes be delivered to the users quickly? Let's look at an example: Assume we have a process for vacation planning for the staff of a large company. Initially the process was automated based on the knowledge of the human resource department. After 2 months new insights require a process change. The process should be optimized to speed up the decision whether vacation is granted or not. The process has evolved and the changes have to be put in place as soon as possible. This is a common situation and actually one of the promises of business process management is: Deliver business value fast. Sounds simple, but how can we deliver the changed process? There are several options to put the changed process in place: Option 1: Parallel The changed process coexists with the initial one for a period of time. Existing process instances must continue with the initial process definition. Example: Users of the process are gradually trained to use the changed process. Some departments can still use the initial process, some use the new one. The process is triggered by IT systems as well. Those systems should have a smooth upgrade path. Action: Create a new version of the process and deploy it in parallel to the one already in place. |--- Startable V1 --------> |--- Instances V1 --------> |--- Startable V2 ---------> |--- Instances V2 --------> Option 2: Merge The changed process replaces the initial one. Existing process instances must continue using the changed process definition. Example: Law changes render the initial process invalid. As of now all processes, including already running instances, must run with the latest process definition. Action: Create a new version of the process and migrate existing instances to the new process definition.
|--- Startable V1 ------|--- Startable V2 ---------> |--- Instances V1 ------|--- Instances V1 + V2 ----> Option 3: Phase Out The changed process replaces the initial one. Existing process instances must continue with the initial process definition. Example: Process analysis caused the process to be optimized, so that it can be executed in less time. All users should immediately use the changed process. To keep effort low, already running process instances should continue running with the initial process definition. Action: Create a new version of the process and deploy it in addition to the one already in place. Prevent the initial process version from being started by disabling its start events. |--- Startable V1 --------| |--- Instances V1 --------------------| |--- Startable V2 ---------> |--- Instances V2 --------> Be aware of endpoints: If process versions are provided in parallel, as in scenarios 1 and 3, and connected to technical endpoints, for instance filedrops or web services, those endpoints might collide. Changing the structure of an endpoint, for instance the message payload, might cause incompatibility as well. In those cases (which are likely to happen) the endpoints must be versioned. Alternatively a dispatching mechanism can be used to route messages to the appropriate process version. As you can see, versioning is an important concept for process evolution. Which strategy to use depends on the process and the particular business requirements. The options introduced in this blog post might help to take the right decision. Make sure your process platform supports the options you need.
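The "Phase Out" option can be sketched in a few lines of code. This is a minimal illustration with made-up names (ProcessRegistry, deploy, start_instance), not the API of any real process platform: running instances keep the definition they were started with, while new instances can only be started on the latest version.

```python
class ProcessRegistry:
    """Toy registry: tracks deployed process versions and which may start."""

    def __init__(self):
        self.versions = {}       # version number -> process definition
        self.startable = set()   # versions allowed to start new instances

    def deploy(self, version, definition, disable_previous=False):
        if disable_previous:
            # Phase out: older versions keep their running instances,
            # but their start events are disabled.
            self.startable.clear()
        self.versions[version] = definition
        self.startable.add(version)

    def start_instance(self):
        version = max(self.startable)  # newest startable version
        return {"version": version, "definition": self.versions[version]}


registry = ProcessRegistry()
registry.deploy(1, "vacation-approval-v1")
inst_a = registry.start_instance()   # started on V1

registry.deploy(2, "vacation-approval-v2", disable_previous=True)
inst_b = registry.start_instance()   # new instances now start on V2

# inst_a continues on V1 (phase out); only V2 is startable.
assert inst_a["version"] == 1 and inst_b["version"] == 2
```

The Parallel option corresponds to calling deploy without disable_previous; Merge would additionally rewrite the version recorded on already-running instances.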
OPCFW_CODE
Are the accounts of Canon 7D images getting less sharp after a lot of video shooting true? I have had a few accounts of people telling me that if you shoot (only a few hours of) video with a Canon 7D / 5D Mark II, the quality of the pictures you take later becomes worse, due to the strain / heating put on the sensor. Is that true? There are many reasons any given camera's images can become less sharp over time. I doubt you're going to get a solid confirmation, as it would be incredibly difficult to isolate this from other sources of sharpness reduction/variation, outside of perhaps a DxOMark laboratory (with controlled conditions, several brand new cameras, known-good lenses, etc.). Even if a few people agree, or even demonstrate with sharpness tests, there are many variations and other issues that can cause the same behaviour. Shot-to-shot variation First up, any measure of sharpness will vary between single shots; anyone asking or answering this question should be aware that any testing/demonstration of this will require finding the sharpest images in a selection of shots taken in ideal conditions. Even changes in temperature/humidity might affect the sharpness if you're really getting picky. Other sources of sharpness reduction General use of the camera will mean bumps and knocks, of both camera and lenses. To test this properly, you'd basically need to keep the camera mounted on a tripod, with one single lens, in a controlled environment, and never move it. Otherwise it seems likely that many cameras (especially pro cameras that get a lot of use!) will take a few knocks and bumps over their life, and it's very easy to forget that time you put it down hard, or dropped your camera bag. Nothing noticeably broke/changed at the time, just like all the other times, but perhaps the AF sensor shifted slightly, perhaps the CMOS sensor shifted, perhaps a lens element shifted... and so on.
Other things can also do the same: environmental changes, cleaning of the camera, each shutter/aperture actuation, the list goes on... In general, cameras deteriorate over time, for many reasons. Humans as a source of data Humans are very good at interpreting correlation as causation, and at making assumptions about how things work from very little data. This is both good and bad, in that it sometimes works well, and sometimes we get it very wrong. We also (unintentionally, and unconsciously) misremember things in a way that fits our understanding of the situation/system/laws of nature/whatever. We 'regenerate' memories based on our understanding, so if we believe that videoing caused our camera to lose sharpness, then we will remember events (perhaps incorrectly) in a way that supports this. I won't offer my own opinion on the matter (for the 3rd reason) but would suggest this can only be definitively answered by some very controlled experiments (by trusted experts). This is an amazingly neutral & yet informative answer. I accept it. Thank you.
STACK_EXCHANGE
1. Download "Setup.exe" to your desktop. 2. Unzip it to C:\PM. This is IMPORTANT. The demo files have "hard coded" filenames and will not work from anywhere else. When you are done, you should have the following files. Note the directory: C:\PM. Probability Mapper can be run directly, without talking to the resource allocation code or Access. There are 4 demo files you can load and play with. We will work on one here. Launch "PM.exe". That should give you a blank window: Choose File->Open and select "demo1.sar". Your screen should now have an annotated map. A small portion of it: This already has a subject profile. Select Case->Edit case details to see that it's a 3yo boy. There are some extraneous fields showing too. The View menu has some interesting options: Using the menu, turn off Distance Rings and turn on Stat Rings. Now the screen has percentile rings for this 3yo child (based on some very old statistics). Change the subject to a 25yo Hiker. Notice that the stat rings change to be much wider (about 3km out): (Notice the unrealistically close 99% stat ring. Did I mention this was only a proof of concept?) If we select View->Network Graphs, we get a (really low res) view of the distributions themselves: Go back to View->Map Screen. (Yes, it should be a toggle.) Now, select View->Shade Regions. You will see the regions (squares) roughly shaded by probability. (Adam wasn't able to get transparency working in his toolkit, or fine shadings. Those would be trivial now.) Actually, as far as I can tell, these shadings don't change as we change profiles, so either the color profile is very coarse, or this feature broke in the packaged version. That's too bad, as it was the flagship feature of the software. I must look into this. OK, turn off the shadings so we can see the map again. If you right-click on the map, you get a context menu which lets you create, view, or edit regions.
It also lets you set the terrain/vegetation type, which changes the color of the region border. It should also alter the POA for the area, though the packaged version seems not to use it. Another small bug: the list of terrain types repeats itself repeats itself repeats itself. Selecting "Mark Location" lets you create a new labeled point, like this one: Selecting "Draw Region" lets you draw a polygon region, which will get a POA based on distance (and in theory, terrain type). Right-click to set terrain type. Here we have drawn an irregular polygon and set the terrain type to "drainage": Notice that it does not need to be connected to other search areas. Loading a new behavior model PM can use different behavior models. Just "Select Network File" from the case menu. The demo comes with two behavior models, both early models from Adam Golding's thesis work, based on the limited Virginia dataset. The default is "honors2.dne". There is also "snobonly.dne". The SARBayes download page has links to some other models, including our 2002 version of Syrotuck's model. The statusbar reports which model you are using. Descriptions and evaluations of these older models are available from the downloads page under "Other Reports". The most recent is my 2002 NASAR presentation: http://sarbayes.org/nasar.pdf . The good news is that these models could predict the right distance (to within 1km) about half the time. The bad news is that they did just as well by disregarding lost person category. Now that we have a much larger database available (ISRID, the International SAR Incident Database), we hope to create and test some other models. When we do, PM or its successor can just load them! Other Map Functions The first three options let you load, calibrate (georeference) a map, and set the PLS. The context menu lets you quickly draw an initial grid. The last two options let you save (dump) the POA map to a file, or load one from a file. (We called it POC then, sorry.)
These are very useful for communicating with a resource allocation program, like SAR.exe (included). Admittedly, communicating with a text-mode allocation program is cumbersome. It would be nice if they were integrated. They are, in a chewing-gum-and-baling-wire sort of way, using Microsoft Access. Launching from Microsoft Access This part requires Microsoft Access. It was written in Access97, and may not run on later versions. However, you can download the runtime from: http://www.zlcsoftware.com/ftp/A97RT.EXE André reports better luck using the Access97 viewer available from an EPA underground storage tank database website. Direct link to the installer: http://www.epa.gov/swerust1/ustacces/runtime.exe The EPA database website, with instructions: http://www.epa.gov/swerust1/ustacces/uav30.htm Launch AGM97.mdb. You should see a screen like this: The top "Main Menu" window is the GUI, such as it is. The other window shows the tables that were used to make it, and we will ignore it. If you are lucky, you can now "Load a Map from Probability Mapper", use the default resources (or modify them using an Access GUI), and "Suggest" allocations with either Charnes Cooper or Washburn. These suggestion routines will run SAR.exe with the appropriate calls. However, I find that I can no longer Load, Import, or Export anything, because it is an Access97 file running on my copy of Access2003. If I use the version installed by A97RT.EXE, I get a different error about an ISAM file. Perhaps more later.
OPCFW_CODE
GitLab.com Security Certifications and Attestations In support of our ongoing commitment to information security and transparent operations, the GitLab Security Compliance teams are dedicated to obtaining and maintaining industry recognized security and privacy third party certifications and attestations. The benefits from these activities include: - increases visibility and confidence in our information security program - increases ease in onboarding and managing GitLab as a vendor - ensures we are meeting all requirements of a strong and comprehensive information security program aligned with industry best practices - enables our field teams to quickly share the state of our security program with potential and existing customers - reduces the need for GitLab’s security team to fill out individual customer security questionnaires or assessments Generally, the scope of the items listed on this page include GitLab.com, the GitLab.com production environment, and global policies and procedures relied upon for control implementation. Are you looking for security certifications/attestations for GitLab Dedicated? Please look here. - SOC 2 Type 2 Report: Security, Confidentiality and Availability Criteria - The SOC 2 Type 2 report is available for customers and potential customers upon request. The report is scoped to GitLab.com. There are elements of the report that cover organizational-level security considerations (e.g., Business Continuity Planning, Risk Assessments, etc.) which go beyond the scope of GitLab.com as a SaaS product and speak to the mature state of GitLab’s information security program. - SOC 3 Report: Security, Confidentiality and Availability Criteria - The SOC 3 report is available for general use by both customers and potential customers upon request. Please see SOC 2 Type 2 Report above for scope. 
- ISO/IEC 27001:2013 Certification - This standard specifies the requirements for establishing, implementing, maintaining and continually improving an information security management system (ISMS). The certificate is scoped to GitLab SaaS services (GitLab.com and GitLab Dedicated). There are many elements of the certification that cover organizational-level security considerations (e.g., Business Continuity Planning, Risk Assessments, etc.) which go beyond the scope of GitLab SaaS services and speak to the mature state of GitLab’s information security management program. - ISO/IEC 27017:2015 Certification - This standard establishes guidelines for information security controls applicable to the provision and use of cloud services. - ISO/IEC 27018:2019 Certification - This standard establishes commonly accepted control objectives, controls and guidelines for implementing measures to protect Personally Identifiable Information (PII). - ISO/IEC 20243-1:2018 Self Assessment - This is a set of guidelines, requirements, and recommendations that address specific threats to the integrity of hardware and software COTS ICT products throughout the product life cycle. Scoped to GitLab.com and GitLab self managed. - PCI DSS SAQ-A Self-Assessment - GitLab partners with PCI-compliant credit card processors in order to ensure adequate protections of payment processing information. - CSA Consensus Assessments Initiative Questionnaire v3.1 Security Self-Assessment - Based off the Cloud Controls Matrix and the CSA Code of Conduct for GDPR Compliance. 
- CSA Trusted Cloud Provider - Standardized Information Gathering Questionnaire Self-Assessment - Annual Third Party Penetration Test The following security certifications and attestations are currently on our roadmap for consideration and have not yet been formally committed or contracted: - SOC 2 Type 2 Report - ISO/IEC 27001:2013 Certification: Surveillance audit - Software Bill of Materials (SBOM) - PCI Attestation of Compliance - Cloud Security Alliance (CSA) Star Level 2 - ISO/IEC 27001:2022 Certification: Recertification Requesting Evidence of Certifications or Attestations GitLab’s SOC3 report is publicly available and can be found within the Community Package on our Customer Assurance Package webpage. The nature of some of our other external testing is such that not all reports can be made publicly available. Not only do these reports contain very detailed information about how our systems operate (which could make a potential attack against GitLab easier) but these reports also contain proprietary information about how these audit firms conduct their testing. For these reasons we can only share certain documentation with prospective customers that are under an NDA with GitLab or with current customers bound by the confidentiality of our customer agreements. The reports should not be shared with anyone other than the individual requestor(s). Current or Prospective customers may request these through their Account Manager, or by using the Request by Email option on the Customer Assurance Package webpage. GitLab Team Members should follow the Customer Assurance Activities workflow and use the option for “CAP Request”.
OPCFW_CODE
In May, organizers of the fifth annual Dialogue on Reverse Engineering Assessment and Methods opened up participation for the annual evaluation of computational systems biology methods (BI 05/28/2010). Since the first DREAM conference was held in 2006, the meeting's main objective has been to “catalyze the interaction between experiment and theory in the area of cellular network inference and quantitative model building in systems biology,” according to the project's website. This year marks the fifth year of the conference and the fourth set of challenges. DREAM 5 will include four challenges: the Epitope-Antibody Recognition Challenge, the TF-DNA Motif Recognition Challenge, the Systems Genetics Challenge, and the Network Inference Challenge. Datasets for each challenge can be downloaded from the DREAM 5 website. Winners of previous DREAM challenges have included teams from Yale University, the Genome Institute of Singapore, and Columbia University. This week, BioInform spoke with Gustavo Stolovitzky, a scientist at IBM Research and one of the founders of the DREAM project, about past, present, and future challenges and the evolution of the reverse engineering field. Below is an edited transcript of the conversation. How has the field of reverse engineering as a whole evolved since DREAM began? What have been the major improvements? What challenges still remain? I think [the field] has been evolving. Because there are other datasets that are available and other data biotechnologies that are available it's very difficult to determine what role DREAM has played in the evolution. I think that right now we have created a robust set of gold standards that researchers are checking against each time they want to [know] how well their algorithms are doing. They can compare [the algorithms] with the best in a particular challenge from previous years. 
The other thing that we have learned is that the combination of some perturbations and some algorithms appear to give a lot more intuition and correct answers when it comes to determining which genes interact with which genes. For example, each time we give systematic mutants as datasets in some of our network inference challenges, we observed that the best teams sometimes are the ones who make the very simplest predictions, which is simply, 'If this changes, what else changes most,' and basically assembling that information. That seems to produce a lot more information than what we call multi-factorial perturbations. One other thing that we have observed is that it seems that even when an algorithm is doing pretty well, when it's combined with other algorithms that are doing well, the aggregation of the algorithms produces results that are better than any of the algorithms individually. That's interesting because that means that rather than trying to see whether my algorithm is the best, what I should try to do is to try to find partner algorithms that work best in complementarity with mine in order to get the most out of the data. Give me a little background on how the project began and on past challenges. The idea is to expose data, for which we know what the results of the analysis should be, [to] participants and [let them] make predictions that allow us to evaluate or assess the accuracy of the methods to analyze the data. The data varies from year to year. In 2007, we started off with data that had to do with reconstructing networks of either protein-protein or gene regulatory networks. Since [then] we explored other [types of] systems biology data. That includes not just big systems and the inference of networks but predictions of what would happen if perturbation occurred in a system. This year we have four challenges. One of the challenges is predicting binding specificity of peptide-antibody interaction. 
Basically we are asking, of this set of several thousand peptides, which ones are going to be recognized by typical antibodies in our blood streams. The second challenge is something similar. We are trying to find out the extent to which we can predict the binding of a transcription factor towards regulatory elements using protein-binding microarrays. [The] third is a systems genetics challenge and we are trying to understand the extent to which we can leverage genetic information and gene expression to predict phenotypes. In this particular case, we have a dataset from soybean. We have a lot of [soybean] microarrays and a lot of recombinant inbred lines, which are lines that are basically homozygous in all loci. What we are asking is, 'Can you predict the phenotype?,' which in this case is how susceptible to some pathogens these plants are. The last challenge is on network inference. We are asking for the prediction of the gene regulatory networks of three organisms and a fourth in silico network. These challenges are independent and people can participate independently. There are separate communities that work on protein and antibody interactions and on network inference so typically people that participate in one challenge do not participate in other challenges. So far, participation has been very encouraging. In the network inference challenge, we have 142 downloads, in the systems genetics challenge we have [about] 50 downloads, [about] 130 in the transcription factor-DNA motif challenge, and about a hundred or so in the epitope-antibody recognition challenge. Have you seen an increase in the number of entries since you began? The number of teams that participated in previous years has grown systematically. In DREAM 2 we had 36 teams, in DREAM 3 we had 40, and in DREAM 4 we had 53 teams.
We have been kind of putting some pressure on the community in the sense that as opposed to other challenges that occur every so often or every other year, we have been releasing challenges every year. We have also been offered data. In the first DREAM, we had to [ask researchers] to provide data. This year we practically didn't have to ask for any dataset because we had more datasets than we could use. Have winning entries from previous challenges been adopted by the larger research community? These things percolate in the community slowly. I think that we are giving the winning entries a forum to [publish] their algorithms and results in a publication. For example, in PLoS One there is an online collection of articles pertaining to DREAM 3 and we are creating an equivalent for DREAM 4. I would say that it's a little bit too soon to expect dramatic change because people tend to be very attached to the methods that they develop. If they see that their method is not working well they will try to improve it rather than adopt another one. [One thing that's] on our radar but we haven't had the time to do it is to create a repository of algorithms from where users could pick and choose what algorithms would work best for their data. I believe that will facilitate the dissemination of algorithms that are doing better specifically in our challenges. In the past there has been significant participation from researchers in academia. Has industry participation grown as well? Not in sufficient numbers. I think that most of the participants are mainly academics that come from all over the world. Have you made any changes to this year's DREAM? Yes, this year's challenges are different from last year's, for example. We are trying to create a variety of challenges [but] we try also to keep some continuity. 
For example, the network inference challenges have changed a little bit because before we had smaller networks of 10, 50, and 100 nodes at the most that we were probing; now we have networks in the hundreds of nodes. That is a considerable change because there are some algorithms that will not be able to run because they only run for small networks. But overall the nature of the questions we are asking is similar. How soon can teams start submitting entries? The deadline for submission is Sept. 20, so we will probably be open for submissions about two weeks earlier. There was some talk about whether you would release the names of teams that perform badly. What is your decision on that issue? We prefer not to release the names of the [groups that] don't do well because it somehow stigmatizes those groups. This should be a community effort that helps the community create better ways of analyzing data. It might not serve that purpose to [point out which groups] didn't do well. There are some reasons why we should. We should [let researchers know which] methods don't work. We try to describe those methods that don't seem to work very well in particular challenges in our overview articles without naming the specific researchers. This is the fifth year of the DREAM project. The original NIH funding was for five years. Will there be a DREAM 6? We are thinking of getting some funding. In general, we mostly used the funding for the curation of the website and for the conference. It's a very inexpensive operation. I think that we could continue without much funding from external sources if we continue to use the platforms from Columbia and the goodwill from data producers and the support from IBM.
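The aggregation effect described earlier in the interview, where combining several decent algorithms beats any one of them, is commonly implemented as simple rank averaging of each method's edge-confidence scores. Here is a minimal illustrative sketch with made-up toy scores (not DREAM data or the challenge's actual scoring code):

```python
def rank_average(score_lists):
    """score_lists: one dict per method, mapping edge -> confidence score.

    Returns all edges sorted by their average rank across methods (rank 1 =
    a method's most confident edge), so edges that several methods agree on
    rise to the top of the consensus list.
    """
    edges = set().union(*score_lists)
    avg_rank = {}
    for edge in edges:
        ranks = []
        for scores in score_lists:
            # Rank this method's edges by confidence, best first.
            ordered = sorted(scores, key=scores.get, reverse=True)
            # Edges a method did not score get the worst possible rank.
            ranks.append(ordered.index(edge) + 1 if edge in scores
                         else len(scores) + 1)
        avg_rank[edge] = sum(ranks) / len(ranks)
    return sorted(edges, key=avg_rank.get)

# Hypothetical scores from two inference methods:
m1 = {('A', 'B'): 0.9, ('A', 'C'): 0.2, ('B', 'C'): 0.5}
m2 = {('A', 'B'): 0.7, ('A', 'C'): 0.8, ('B', 'C'): 0.1}
print(rank_average([m1, m2])[0])  # ('A', 'B') tops the consensus ranking
```

The edge ('A', 'B') ranks first or second for both methods, so it wins the consensus even though neither method scored it highest in absolute terms; this is the sense in which the aggregate can outperform its individual members.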
Tests for 0.3.8

Please put all the apps you tested here and write whether they work or not. Note: If there is an updated version of an application available and this updated version works in ReactOS, please update the entry in the list accordingly!

|Works||There is no issue|
|Failed||This does not work|
|Run w/o result||Runs without fundamental functionality|
|Not tested||No test has been performed.|

Stuff in Downloader

|Firefox 1.5||Works||Works||Works under VBox 2.1.2. "Move the mouse to download" bug is still present. Youtube.com behaves the same way as it does under Firefox 2.0.|
|Firefox 2.0||Works||Works||Tested under VirtualBox 2.1.2. A vanilla install seems to work fine, though Youtube.com doesn't seem to display videos when Adobe Flash player is installed. Also, the "move the mouse to download" bug seems to still be present.|
|Thunderbird 2.0||Works||Works||Tested in QEmu, sent and received an email.|
|SeaMonkey||Failed||Not tested||Installer hangs after clicking next on the Quick Launch page of setup. Tested under VBox 2.1.2.|
|Mozilla ActiveX Control||Works||Works||Tested under VirtualBox 2.1.2. Generally seems to work, with a few caveats: The Mozilla ActiveX download window cannot be Cancel'd or closed, and sometimes the OS will bluescreen when the download progress bar completes. Also, the address bar in ReactOS Explorer cannot be used to enter a website address, but the ReactOS homepage can be navigated okay.|
|Off By One Browser||Works||Run w/o result||Tested under VirtualBox 2.1.2. Mostly works okay, although opening the "Start and Home Page" dialog box causes problems ranging from minor (Off By One freezing until the Start menu button is clicked on) to severe (Start menu exploding, followed by BSOD). Suffers from errors when closed, leading to the OS slowing down a whole lot.|
|mIRC||Works||Run w/o result||There are severe drawing issues (you can't see what you are typing).|
|Samba TNG||Works||Works||I was able to successfully download files from a Windows network share.|
|Miranda IM||Works||Failed||SSL is not supported, see bug 3686. Text in menus is messed up.|
|Putty||Works||Works||Has some very minor drawing issues.|
|Abiword||Works||Run w/o result||Tested under VirtualBox 2.1.2. Installs okay, but has severe drawing issues when running.|
|OpenOffice||Run w/o result||Not tested||Error 404|
|IrfanView||Works||Works||You need to copy mfc42.dll (VirtualBox 2.1.0)|
|IrfanView Plugins||Not tested||Not tested||Link issue apparently.|
|zeckensack's glide wrapper||Works||Works||Works in QEMU. Tested on Diablo 2 Shareware.|
|Microsoft XML 3||Not tested||Not tested|
|OLE viewer and Microsoft Foundation Classes Version 4||Not tested||Not tested|
|Visual Basic 3 runtime||Not tested||Not tested|
|Visual Basic 4 runtime||Not tested||Not tested|
|Visual Basic 5 runtime||Not tested||Not tested|
|Visual Basic 6 runtime||Not tested||Not tested|
|Visual Studio 6 runtime||Not tested||Not tested|
|Visual Studio 2005 runtime||Not tested||Not tested|
|Visual Studio 2005 runtime SP1||Not tested||Not tested|
|Visual Studio 2008 runtime||Works||Failed||Installation seems to work fine (except some drawing problems). The runtime library is installed in the winsxs folder. However, no application seems to be able to use it because of bug #4083. Tested in VMWare Server 1.|
|ReactOS Build Environment||Works||Failed||There are several errors in cmd that prevent it from running correctly; bison.exe requires MSVCP60.DLL|
|MinGW||Not tested||Not tested|
|FreeBASIC||Works||Works||Tested on VBox 2.1.2.|
|ScummVM||Works||Failed||Error: Unable to access application data directory (VirtualBox 2.1.0, QEMU). http://forums.scummvm.org/viewtopic.php?p=32952&sid=0dc0829e7a8f817dc5c2e0052ff1b836 Set APPDATA manually and it works: <set APPDATA="C:\Documents and Settings\my username\Application Data">, <echo %APPDATA%> to check.|
|Diablo 2 Shareware||Works||Works||Works in QEMU. VideoTest need not be run (though it dies prematurely); d2fix.exe needs to be applied, otherwise the game window does not show.|
|Tile World||Failed||Failed||Extraction failed with error: Warning occurred on one or more files (VirtualBox 2.1.0). Extraction works in QEMU, but the game does not run.|
|OpenTTD||Works||Works||Tested in QEMU. Some files are needed from an original installation of Transport Tycoon Deluxe (or from alternative graphics sets, neither of which is present in the default package).|
|LBreakout2||Works||Failed||Exception: ExceptionCode: c0000005, ExceptionAddress: ffffffff (VirtualBox 2.1.0). (See regression gap in LMarbles.) Note: Pigglesworth has a fix for this issue to be committed soon. LBreakout2 works fine when it's applied.|
|LGeneral||Works||Failed||Exception: ExceptionCode: c0000005, ExceptionAddress: ffffffff (VirtualBox 2.1.0). (See regression gap in LMarbles.) Note: Pigglesworth has a fix for this issue to be committed soon. LGeneral works fine when it's applied.|
|LMarbles||Works||Failed||Exception: ExceptionCode: c0000005, ExceptionAddress: ffffffff (VirtualBox 2.1.0). Regression gap: 37532-37780. Maybe the same for all the SDL apps. Note: Pigglesworth has a fix for this issue to be committed soon. LMarbles works fine when it's applied.|
|WinBoard||Works||Works||WinBoard runs, but it starts minimized. Right-click the program tab and maximize it - then it works 100%.|
|7-Zip||Works||Works||Tested under VirtualBox 2.1.2 and QEMU. Has some slight drawing issues, but seems to be able to add/extract files from an archive without issue.|
|uTorrent||Works||Run w/o result||Installs fine; registry settings (e.g. make default application) do not persist. Was able to open a torrent, but the application locks up with "Unable to locate timer in message queue" messages appearing regularly in the debug log. Nindaleth reports that he can download without issue. Here's how I do it: Install Colin's 0.3.8 prerelease. Download Firefox 1.5, download uTorrent from the fixed link in SVN, install uTorrent. Visit Pirate Bay or some other source of torrents, download some recent torrent (very high possibility of someone still seeding). Close Firefox, run uTorrent. Setting as default torrent app doesn't work; ignore. Set upload speed and continue. File->Add torrent, navigate to torrent (probably on desktop), open it, continue through any dialogs without changes and download...|
|Audio Grabber||Works||Failed||An exception occurs when starting the application. (VirtualBox 2.1.0)|
|Simple Direct Media Layer (SDL) Runtime||Works||Works|
|Simple Direct Media Layer (SDL) Mixer||Works||Run w/o result||ReactOS has no audio support as of now, so this cannot be tested.|
|DOSBox||Works||Run w/o result||It runs, but the DOSBox console closes automatically after a few seconds. Note: Pigglesworth has a fix for this issue to be committed soon. DOSBox works fine when it's applied.|

Stuff not on the Downloader list

|YSFlight||Not tested||Not tested|
|CPU-Z 1.49||Works||Works||Tested on QEMU. Everything works swiftly, not in the "well, it works" way of older versions.|
|Acrobat Reader 5.0.5/6||Works||Works||Issue when closing the application, but PDFs are opened without problems. Version 6 also works, but with some UI issues (installer, splash screen).|
|Foxit Reader 3.0||Failed||Run w/o result||Tested on QEMU. Setup fails, but the program is somehow executed at the same time. A PDF can be opened and displayed; the program crashes immediately after that.|
|QIP 2005 build 8081||Works||Run w/o result||Tested on QEMU. It is not able to connect.|

VMware Graphics Drivers

|VMware Server 1.0.7||Works||awesome as always|
|VMware Workstation 6.0.4||Not tested|
[20:08] * AuroraBorealis sighs
[20:08] <AuroraBorealis> for some reason when bazaar calls gpg it doesn't call pinentry and it just fails =(
[20:13] <AuroraBorealis> it seems that bzr explorer needs to use --no-tty
[20:13] <AuroraBorealis> for the gpg command
[21:02] <jelmer> AuroraBorealis: please file a bug :)
[21:06] <AuroraBorealis> k
[21:16] <AuroraBorealis> submitted. https://bugs.launchpad.net/bzr-explorer/+bug/847388
[21:17] <jelmer> AuroraBorealis: thanks
[21:17] <AuroraBorealis> i just commit using the regular terminal command and it works, so i think its just a --no-tty issue
[21:17] <AuroraBorealis> however i dont know where in the code the command gets called so i can't test it xD
[21:18] <jelmer> AuroraBorealis: it's in bzrlib/gpg.py
[21:27] <AuroraBorealis> it seems that the tty environment variable is not set in ubuntu
[21:27] <AuroraBorealis> and thats whats causing it to freak out
[21:28] <AuroraBorealis> not sure if thats bad or not
[21:39] <AuroraBorealis> it also appears that --no-tty fixes it, but i'm unsure of how to integrate that :3
[21:48] <AuroraBorealis> annnnnd i seem to of figured out a solution. but now how to i generate a patch or something :o
[21:49] <jelmer> AuroraBorealis: I'd recommend running the gpg related tests and fixing anything that breaks: ./bzr selftest --no-plugins gpg
[21:50] <AuroraBorealis> after i make my changes?
[22:02] <jelmer> AuroraBorealis: yep
[22:03] <AuroraBorealis> kk
[22:03] <jelmer> AuroraBorealis: well, ideally you should fix the tests to expect what you want to happen and then fix bzrlib.gpg
[22:03] <jelmer> but this works too, and makes more sense given you've already changed bzrlib.gpg.
[22:03] <AuroraBorealis> where are the tests located?
[22:03] <AuroraBorealis> and this is kinda hard to test as it requires bazaar explorer
[22:04] <AuroraBorealis> (as os.environ("TTY") has to not be set)
[22:04] <jelmer> AuroraBorealis: this shouldn't require bzr explorer - I just meant tests that make sure that --no-tty is specified
[22:06] <AuroraBorealis> yeah, i'm assuming these are unit tests. but the way to check that --no-tty is only specified correctly is to be using it where the environment variable for tty is not set, like in a GUI
[22:06] <AuroraBorealis> unless i'm misunderstanding something
[22:11] <jelmer> AuroraBorealis: sure, but in the unit test you can override the environment variable and see if the right parameters are specified
[22:11] <AuroraBorealis> ah ok.
[22:11] <jelmer> AuroraBorealis: see bzrlib.tests.TestCase.overrideEnv
[22:22] <AuroraBorealis> i cant seem to run this bazaar branch i checked out from launchpad because it can't import "shlex_split_unicode"
[22:26] <AuroraBorealis> any idea how to fix that jelmer so i can run the tests? xD
[22:35] <Noldorin_> hi jelmer
[22:39] <jelmer> hi Noldorin_
[22:39] <jelmer> AuroraBorealis: I'm not sure - that should only be necessary on Windows I think
[22:39] <AuroraBorealis> well the test cases are working i guess
[22:39] <AuroraBorealis> so i'm trying to figure how how i borked those :<
[22:46] <Noldorin_> jelmer, i am finding many bzr bugs these days...
[22:46] <Noldorin_> while researching this issue
[22:47] <jelmer> Noldorin_: what exactly?
[22:47] <Noldorin_> jelmer, for a start, https://bugs.launchpad.net/bugs/846122
[22:48] <jelmer> Noldorin_: is the history empty after that operation perhaps?
[22:48] <Noldorin_> what history?
[22:48] <jelmer> Noldorin_: what's the output of "bzr revno"
[22:49] <Noldorin_> jelmer, 46
[22:49] <jelmer> Noldorin_: that's odd - is this in a bzr-git tree?
[22:50] <Noldorin_> nope
[22:50] <jelmer> Noldorin_: bound branch?
[22:51] <Noldorin_> jelmer, maybe
[22:51] <Noldorin_> i forget
[22:57] <AuroraBorealis> is there a way to use 'print' when you are using these test cases? it seems that its not actually printing them (and causing other tests to fail)
[23:03] <jelmer> AuroraBorealis: I generally use bzrlibtrace.mutter
[23:03] <jelmer> AuroraBorealis: I generally use bzrlib.trace.mutter
[23:04] <jelmer> AuroraBorealis: that will be printed as part of the test output if a test fails
[23:05] <AuroraBorealis> also another question
[23:05] <AuroraBorealis> bzrlib.TestCase.overrideEnv says that the environment variable will be reset after each test
[23:06] <AuroraBorealis> is a test the entire class that extends TestCase or are they the individual methods?
[23:06] <lifeless> the method
[23:07] <lifeless> the class is a way to group related tests and (sometimes) test helpers that are specific to that group
[23:07] <AuroraBorealis> ok. so i guess i have no idea why the environment variable for 'tty' would be none
[23:12] <AuroraBorealis> as i thought it would be None if you were running like a user interface, but its none even if you invoke bzr from a command line
[23:12] <AuroraBorealis> so i have no idea how to distinguish when you should add the --no-tty switch or not
[23:37] <AuroraBorealis> well, it appears that just adding --no-tty works, so how do i submit this as a patch to the bug report i opened?
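The test style jelmer recommends in the log above (override the environment variable, then assert on the parameters that get built) can be sketched roughly like this. The names below are illustrative only, not the real bzrlib.gpg API, and the env-based switching is just one way the decision could be made:

```python
import os

def gpg_command(env=None, base_cmd=('gpg', '--clearsign')):
    """Build the gpg invocation, appending --no-tty when no terminal is
    attached (the GUI case that bit bzr explorer).

    Hypothetical helper for illustration -- not the actual bzrlib.gpg code.
    """
    env = os.environ if env is None else env
    cmd = list(base_cmd)
    if not env.get('TTY'):
        # No terminal available: tell gpg not to try to use one, so it
        # falls back to pinentry instead of failing.
        cmd.append('--no-tty')
    return cmd

# The overrideEnv-style check: fake the environment, then assert on the
# command line that gets built -- no bzr explorer needed.
assert '--no-tty' in gpg_command(env={})
assert '--no-tty' not in gpg_command(env={'TTY': '/dev/pts/0'})
```

In a real bzrlib test this would live in a TestCase method using overrideEnv so the fake environment is reset after each test, as lifeless explains above.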
There are 50 ImageSprites in the program. When I group them, I can adjust the height (width, etc.) in a cycle, for example. But how can I query the touch of a group member? Does each member need a separate "Touched" command? I can't use "when anyimage.touched" because "Component" is just a meaningless string.

Usually you define global lists for separate sprite types like bullets, targets, etc., depending on their intended use. In your Any Sprite Touched Event, you can apply an is-in-list test to the component, to determine if it is a bullet or a target, etc.

No common "Touched" command indicating which sprite is affected?

In the Blocks Editor there is a separate Any Component section, or look in the Right Click menu of a Sprite.TouchDown Event for the Make Generic option.

I tried, but the returned "component" is a meaningless string, not the component name.

This is an example of Generic component events, using Buttons instead of Sprites. You need to keep a map of component names or numbers. Example:

I think what Tetrimino wants to do is have the name of the Sprite returned, but the Block returns the underlying system record: So for example the above Block, when the User touches the Sprite Component named "ImageSprite3", returns something like: com.google.apinventor.components.runtime.imagesprite@440a1988 …which is no doubt correct but not developer friendly. Edit: This of course means that the developer cannot inject the next action as he can with the individual Sprite Touched Block: So, really, the Generic Block is not delivering what one would intuitively expect.

Err. I think this comes down to a misunderstanding of how the any component event handlers work. The component variable in the handler is exactly what it is: a reference to the component instance. It's not a component name. When you try to turn it into a string, it gives you an internal Java representation of the component (fully qualified class name + memory location of the object).
Why exactly would you need the component name versus manipulating the component directly, since you've already got it? For example, if you want to change the image of the sprite, you can use the set ImageSprite.Image of component … to … block to change its image. You plug the component parameter into the component input on the block and a text element representing the new image into the to socket. If you haven't already, you may want to take a look at Don't Repeat Yourself (DRY) with Any Component Blocks, which includes an example of manipulating Ball components in a snow globe app. It's not ImageSprites, but it's close enough that you may be able to adapt it to your code depending on what you're trying to do.

What if, on touch of Sprite 3, I want to do something to Sprites 2 and 4?

You could have all components in a list and find the corresponding component in that list...

That works with other, regular components… got an example with Sprites? I still feel the Generic Sprite Blocks should deliver in the same way as the individual Blocks.

Just use the regular event handler of Sprite3 in this case

…but that is the point Taifun - in Tetrimino's App, there are 50 sprites!

Someone still doesn't understand the problem. If I have a game with 20x50 fields, that is 1000 sprites, and I cannot write a procedure for every sprite. If the sprites are in a list, why is "component" not the number of the sprite in the list? This is how the Delphi programming language works. The string you just returned cannot be used for anything!!! "Component" should be the component name or serial number in the list! Not com.google.apinventor.components.runtime.imagesprite@440a1988! So useless and unnecessary!

You are absolutely wrong about that being useless. Maybe it feels useless to you because you don't know how to use it. @ewpatton explained how it works, and it makes total sense for ALL of the cases in which you need to identify a component generically.
You just have to find the way to translate or convert that basic piece of information into what you need for your particular case, like I showed you in the previous post. Also, out of curiosity, what are you going to do with the component name? You can’t use its name for referencing it anywhere! Perhaps a peek at a game that separates out its underlying object model from the user view might help you see how to set up your app. In the following app, how many Sprites or Balls do you see? I’m inclined to say that if you’ve got 1000 of anything in App Inventor then you may want to consider switching to another tool. App Inventor isn’t really well equipped to deal with scale because we don’t have a good mechanism to reference components by their name or any other way that doesn’t require some work by the developer. The typical way you would do something involving many components would be to put them into a list like @Italo demonstrated. Of course, this means you will end up with a list that’s 1000 items long (or 20 lists of 50 items, etc.). Components are objects, so you can do lookups using them such as by using the lookup in pairs block, or by comparing component references using the = (logical equal) block. Taking @ChrisWard’s example, I originally thought as @Taifun suggested where I would just implement the specific handler. If you wanted to leverage generic handlers, you can also compare against specific components like so: However, if you’re looking to generalize that–say for a given ImageSprite n you want to access ImageSprite n+1, then you’d want to do something like: We also had a paper recently where we discussed some of these generalizability issues and ways they could be addressed that may be an interesting read for some, although we haven’t decided on what the final implementation should look like:
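App Inventor blocks aren't text, but the pattern the thread converges on (keep all sprites in one master list, turn the opaque component reference into a position with an index lookup, then act on the neighbors) translates roughly into this Python sketch. All names here are illustrative stand-ins for blocks, not App Inventor code:

```python
# Illustrative translation of the "any sprite touched" pattern: the opaque
# component reference the generic event hands you is turned into a list
# position (App Inventor's "index in list" block), and neighbors n-1 and
# n+1 are then manipulated directly.
class Sprite:
    """Stand-in for an ImageSprite component."""
    def __init__(self, name):
        self.name = name
        self.image = None

# The master list built once at startup (App Inventor: a global list).
sprites = [Sprite('ImageSprite%d' % i) for i in range(1, 51)]

def any_sprite_touched(component):
    """component is the opaque reference, exactly as the generic event
    delivers it -- no name needed."""
    i = sprites.index(component)          # position in the master list
    # Act on the neighbors, guarding the edges of the list.
    if i > 0:
        sprites[i - 1].image = 'lit.png'
    if i < len(sprites) - 1:
        sprites[i + 1].image = 'lit.png'

any_sprite_touched(sprites[2])            # user touched ImageSprite3
```

After the call, the sprites on either side of ImageSprite3 have been changed and the touched sprite itself is untouched, which is exactly the "on touch of Sprite 3, do something to Sprites 2 and 4" behavior asked for above.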
How can I prevent another site from trying to phish my customers by cloning the look and feel of my site?

I have a site hosted on GoDaddy. Recently, another site that I don't control has copied the look and feel of my site. They look exactly like my site, except the domains are different. GoDaddy's staff said there is nothing they can do to prevent that site cloning the look and feel of my site. Is there anything I can do to stop another site from effectively cloning my site? How can I keep my customers from being duped by this?

In short, what you're asking is impossible. Short of you looking up the contact information for that domain (through WHOIS) and explaining to the site administrator that you'd rather they not do that, there's no way for you to tell another site not to redirect to you.

Have they copied your site and hosted it in their own account, or are they just pointing the DNS for the domain to your site? You can test this by changing one thing on your site and then seeing if the other site updates instantly. BTW, it would be much easier if you told us the two sites in question.

There is no technical solution for a social problem of trying to build a website that looks like yours, because no matter how you make the website look, every color, every font, every text and every picture that can be downloaded and displayed by your visitors can also be downloaded and copied by the imposter. There may be other than technical solutions to this problem and you should get some legal advice if you are serious about it. That having been said, I can suggest a few things that would make it harder and more expensive to look exactly like your site:

At GoDaddy you can purchase an SSL certificate and you can set up DNSSEC extensions for your domain. It will make your site look clearly more legitimate than the fake site.

You can use some commercial fonts on your website and contact the foundry that you bought them from if they are copied by the imposter.
For example, see Typekit by Adobe. See also Commercial foundries which allow @font-face embedding.

You can buy some stock photography or other graphics to be used on your website and contact the company that you bought it from if they are copied by the imposter.

You can have some part of your design changed frequently, to make it harder for the fake website to always look exactly the same.

You can find some legal advice on how to successfully sue the imposter for violating your copyright or trademarks.

And last but not least, you can actually inform your users about the problem. You can add a short but visible message to every page of your website advising visitors to watch out for the fake website, with a link to a more detailed explanation of the problem and ways to distinguish the genuine website from the fake one.

You didn't provide any actual links or say how exactly it is copied, or whether every change that you make to your website is instantly copied to the imposter website or not, so this is the most that I can recommend in those circumstances.

I love the concept of using someone else's sharks (lawyers) to solve my problems +1 (+1000 if I could)

Funny, I host with GoDaddy and my site too was being mirrored live. If this is the case for you (that another domain copies your code live), you can do two things to mess with them.

First, add this to your head section:

<script>
if (window.location.hostname !== "yoursite.com") {
    alert("DANGER! LEAVE THIS SITE IMMEDIATELY. This domain is attempting to deceive you. Visit the true version at yoursite.com");
    window.location = "http://yoursite.com";
}
</script>

Second, for good measure, I encrypted my HTML page. This way, if the person mirroring your site realizes you added the alert above, they can't just change "yoursite.com" to their own! Search the web for tools to encrypt HTML and JavaScript.

I suggest leaving GoDaddy. I have three other sites that get far more traffic, and I've never had this happen.
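The "change one thing and see if the other site updates" test suggested earlier can be automated by planting a unique canary string on your own site and checking whether the suspect copy picks it up. A minimal sketch, using only the standard library and placeholder URLs (not a hardened scraper):

```python
"""Detect whether a suspect domain is live-mirroring your site.

Plant a unique canary string (e.g. an HTML comment) on your homepage,
then check whether the suspect copy serves it too. URLs and the canary
value below are placeholders.
"""
import urllib.request

def fetch(url):
    # Plain GET; a real check might also set a browser-like User-Agent.
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode('utf-8', errors='replace')

def is_live_mirror(own_html, suspect_html, canary):
    """True if the freshly planted canary already shows up in the copy.

    If it does, the other domain is proxying or re-fetching your pages
    live; if not, they took a static snapshot at some point.
    """
    return canary in own_html and canary in suspect_html

# Usage (after adding e.g. <!-- canary-7f3a9 --> to your homepage):
#   own = fetch('https://yoursite.com')
#   sus = fetch('https://suspect-clone.example')
#   print(is_live_mirror(own, sus, 'canary-7f3a9'))
```

A live mirror is the case where the JavaScript hostname check in the answer above actually helps, since the mirror re-serves your markup verbatim; a static clone needs the legal and design countermeasures instead.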
I am sorry to hear that your site has been copied. If the duplicate site is registered or hosted with Go Daddy, you can email us at <EMAIL_ADDRESS> with the full details of the issue and our Abuse team can investigate what is occurring. If the duplicate site is not registered or hosted with Go Daddy, you will want to contact the registrar/host that they are using and see if they have any process in place for issues like this.
How can 2-indanone be prepared? Would adding phosgene to ortho-xylene lead to the formation of 2-indanone?

I don't think so. Also, you should avoid phosgene like the plague. It will kill you faster than that. How about just buying it?

This might help you: http://orgsyn.org/demo.aspx?prep=CV5P0647. If there is any confusion or you want the proper answer, please comment and I will write out the procedure clearly.

If you want to start from ortho-xylene 1, I would recommend dibromination using N-bromosuccinimide to obtain ortho-di(bromomethyl)benzene 2, followed by the addition of the 1,3-dithianide anion 3 (prepared from formaldehyde and 1,2-ethanedithiol, followed by treatment with NaH). While formaldehyde is electrophilic, 1,3-dithiane is nucleophilic if deprotonated (e.g. with NaH, forming the 1,3-dithianide ion) and can attack on the bromomethyl side of the molecule to give 4. This polarity inversion is called umpolung. A second treatment with NaH gives the cyclized product 5. Deprotection of the thioacetal gives 2-indanone 6. With this reversed-polarity approach no phosgene is needed for the synthesis.

Care would need to be taken in the NBS reaction. NBS is also a source of electrophilic bromine and will brominate activated aromatic rings in polar solvents (personal experience).

1,3-dithianide??

@Tetrahydrocannabinol Yes, it is the dithioacetal of formaldehyde. While formaldehyde is electrophilic, 1,3-dithiane is nucleophilic if deprotonated (e.g. with NaH, forming the 1,3-dithianide ion) and can attack on the bromomethyl side of the molecule. This polarity inversion is called umpolung. If you are not familiar with it, I suggest you look it up, because it is very interesting and useful.

If memory serves me correctly, the anion 3 of 1,3-dithiolane undergoes fragmentation. That is why 1,3-dithiane is used. If you need to start with ortho-xylene, then you should use fazekazs's approach.
I searched through SciFinder and found that the most commonly reported synthesis of 2-indanone involves oxidation of indene using a variety of conditions. The following Organic Syntheses prep highlights this approach, although it certainly uses some outdated methodology (distillation by aspirator vs. rotary evaporation, for example). Newer reports use a variety of oxidations, including Oxone or the Wacker process. In this example, indene 1 is added to a mixture of formic acid and hydrogen peroxide to afford the monoformate ester of indene 1,2-diol 2. The monoformate ester was hydrolyzed with sulfuric acid to give 2-indanone 3.

However, when I first saw your question, I wanted to propose a synthesis based on a Dieckmann condensation on diethyl phenylene-1,2-diacetate 4 followed by decarboxylation of the $\beta$-ketoester 5. However, I did not find this approach in the literature, and both the diacetate ester and its parent diacid are of comparable price to 2-indanone.

Another interesting idea that I did not find in the literature is a pericyclic approach based on photolysis or thermolysis of benzocyclobutene 5 in the presence of carbon monoxide. Following electrocyclic ring opening, intermediate 6 can react with CO in a [4+1] cycloaddition. An isocyanide could be used instead of CO, and hydrolysis of the imine would give 2-indanone. Alternatively, there may be a transition metal species that could mediate this transformation.
## Lab 5: Required Questions - Dictionaries Questions ##

# RQ1
def merge(dict1, dict2):
    """Merge two dictionaries. Returns a new dictionary that combines both.
    You may assume all keys are unique.

    >>> new = merge({1: 'one', 3: 'three', 5: 'five'}, {2: 'two', 4: 'four'})
    >>> new == {1: 'one', 2: 'two', 3: 'three', 4: 'four', 5: 'five'}
    True
    """
    # Copy dict1 first so neither input is mutated, as "returns a new
    # dictionary" promises.
    out = dict(dict1)
    for key in dict2:
        out[key] = dict2[key]
    return out

# RQ2
def counter(message):
    """Return a dictionary mapping each word in message to the number of
    times it appears in the input string.

    >>> x = counter('to be or not to be')
    >>> x['to']
    2
    >>> x['be']
    2
    >>> x['not']
    1
    >>> y = counter('run forrest run')
    >>> y['run']
    2
    >>> y['forrest']
    1
    """
    out = {}
    for word in message.split():
        if word not in out:
            out[word] = 1
        else:
            out[word] += 1
    return out

# RQ3
def replace_all(d, x, y):
    """Replace every value x in d with y, mutating d in place.

    >>> d = {'foo': 2, 'bar': 3, 'garply': 3, 'xyzzy': 99}
    >>> replace_all(d, 3, 'poof')
    >>> d == {'foo': 2, 'bar': 'poof', 'garply': 'poof', 'xyzzy': 99}
    True
    """
    # The doctest checks d itself and expects no return value, so mutate
    # d in place rather than building a new dictionary.
    for key in d:
        if d[key] == x:
            d[key] = y

# RQ4
def sumdicts(lst):
    """Take a list of dictionaries and return a single dictionary that
    contains all the key-value pairs. If the same key appears in more than
    one dictionary, the sum of its values across the list of dictionaries
    is returned as the value for that key.

    >>> d = sumdicts([{'a': 5, 'b': 10, 'c': 90, 'd': 19}, {'a': 45, 'b': 78}, {'a': 90, 'c': 10}])
    >>> d == {'b': 88, 'c': 100, 'a': 140, 'd': 19}
    True
    """
    sumout = {}
    # Run through the list of dicts
    for d in lst:
        # Run through each dict, adding to the running sum
        for key in d:
            if key not in sumout:
                sumout[key] = d[key]
            else:
                sumout[key] += d[key]
    return sumout

# RQ5
def middle_tweet(word, table):
    """Call the function construct_tweet() 5 times (see Interactive
    Worksheet) and return the one string whose length is right in the
    middle of the 5.

    Returns a string that is a random sentence of average length starting
    with word, and choosing successors from table.
    """
    import random

    def construct_tweet(word, table):
        """Return a string that is a random sentence starting with word,
        and choosing successors from table."""
        result = ' '
        while word not in ['.', '!', '?']:
            result += word + ' '
            word = random.choice(table[word])
        return result + word

    # Call construct_tweet 5 times, collecting the 5 sentences.
    current_list = [construct_tweet(word, table) for _ in range(5)]
    # Sort by length, then return the central one. This is the median
    # (which is a type of average) of the lengths; for 5 items the middle
    # index is len // 2, i.e. 2.
    current_list.sort(key=len)
    return current_list[len(current_list) // 2]

import doctest
if __name__ == "__main__":
    doctest.testmod(verbose=True)

# Importing some of the code from the lab
def shakespeare_tokens(path='shakespeare.txt', url='http://composingprograms.com/shakespeare.txt'):
    """Return the words of Shakespeare's plays as a list."""
    import os
    from urllib.request import urlopen
    if os.path.exists(path):
        return open(path, encoding='ascii').read().split()
    else:
        shakespeare = urlopen(url)
        return shakespeare.read().decode(encoding='ascii').split()

def build_successors_table(tokens):
    table = {}
    prev = '.'
    for word in tokens:
        if prev not in table:
            table[prev] = []
        table[prev] += [word]
        prev = word
    return table
Add Next Gen MCAS to risk level calculation

Addresses #1207. With the switchover to Next Gen MCAS scores, @alexsoble added a new assessment type. At that time, the risk calculation wasn't addressed, so a student's Next Gen MCAS scores weren't taken into account for the risk calculation.

I've made it so the risk calculation looks for the most recent Next Gen MCAS score. If the student has no Next Gen MCAS score for that topic, the calculation looks for the most recent MCAS (legacy) score. If the student has neither, a MissingStudentAssessment is returned. My assumption is that a student wouldn't have a legacy MCAS test taken after the Next Gen MCAS. (Is this valid @snoopyuri? Otherwise, I could compare the dates.) I also assumed that the risk level for Next Gen MCAS would be similar to the legacy MCAS. (Specifically NME=>3, PE=>2, ME=>1, EE=>0). Any thoughts @snoopyuri?

I've updated the reasons for the risk level to use both legacy and next gen MCAS performance levels. This minimized the additional work of tracking whether the MCAS score was legacy or next gen. I've also flattened the logic a bit to remove the McasRiskLevel and StarRiskLevel and centralized it in the Assessment class, mostly because I found it really confusing. At some point, we'll need to figure out what assessments we'll support and how we calculate risk from them in generic or district-specific ways.

Just read through it. Actually, it makes a lot of sense and will make it easier to customize in the future for us and other districts. Meeting tomorrow with the high school, so I should get a better sense of what their risk level calculation will look like.
Commit Summary:
- Updating risk levels to centralize logic and add Next Gen MCAS.
- Adding factory for next gen math assessment.
- Adding spec for case where a student has both next gen mcas and legacy mcas.
File Changes:
- M app/models/assessment.rb (25)
- M app/models/student_assessment.rb (11)
- M app/models/student_risk_level.rb (26)
- D app/risk_levels/mcas_risk_level.rb (18)
- D app/risk_levels/star_risk_level.rb (24)
- M spec/factories/student_assessments.rb (6)
- M spec/models/student_risk_level_spec.rb (19)

@alexsoble or @kevinrobinson Want to sanity check me on this?

And you would be correct @alexsoble. 😧 Now I remember thinking, huh, why does 0 give N/A. Skipped over it and convinced myself it was right. Fortunately, the code was working correctly; it was just a bad test case. It passes because I left out the ! on the lets for next_gen_mcas_math_ee and mcas_math_w. Fixed the test in the branch. Should I merge into master?

Deploy to prod (though this is just a test case).
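The assumed performance-level mapping discussed above (NME=>3, PE=>2, ME=>1, EE=>0) could be sketched as a simple lookup; the constant and method names here are invented for illustration and are not taken from the actual codebase:

```ruby
# Hypothetical sketch only: names are illustrative, not from studentinsights.
NEXT_GEN_MCAS_RISK = { 'NME' => 3, 'PE' => 2, 'ME' => 1, 'EE' => 0 }.freeze

def next_gen_mcas_risk_level(performance_level)
  # Returns nil for an unknown performance level rather than raising.
  NEXT_GEN_MCAS_RISK[performance_level]
end

puts next_gen_mcas_risk_level('NME')  # 3
```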
LifeScienceWeb Services: Integrated Analysis of Protein Structural Data

Charles Moad*, Randy Heiland*, Sean D. Mooney
*Pervasive Technology Labs
Center for Computational Biology and Bioinformatics, Department of Medical and Molecular Genetics, Indiana University, Indianapolis, Indiana 46202

Abstract

[Poster sections: Services Model; Visualization of Mutations on Protein Structures]

Visualization of protein structural data is an important aspect of protein research. Incorporation of genomic annotations into a protein structural context is a challenging problem, because genomic data is too large and dynamic to store on the client, and mapping to protein structures is often nontrivial. To overcome these difficulties we have developed a suite of SOAP-based Web services and extended the commonly used structural visualization tools UCSF Chimera and Delano Scientific PyMOL via plugins. The initial services focus on (1) displaying both polymorphism and disease associated mutation data mapped to protein structures from arbitrary genes and (2) structural and functional analysis of protein structures using residue environment vectors. With these tools, users can perform sequence and structure based alignments, visualize conserved residues in protein structures using BLAST, predict catalytic residues using an SVM, predict protein function from structure, and visualize mutation data in SWISS-PROT and dbSNP. The plugins are distributed to academics, government and nonprofit organizations under a restricted open source license. The Web services are easily accessible from most programming languages using a standard SOAP API. Our services feature secure communication over SSL and high performance multi-threaded execution. They are built upon a mature networking library, Twisted, that allows new services to be integrated easily. Services are self-described and documented automatically, enabling rapid application development.
The plugin extensions are developed completely in the Python programming language and are distributed at

Web services are an efficient way to provide genomic data in the context of protein structural visualization tools. Our goal is to define a set of bioinformatic web services that can be used to extend protein structural visualization tools, and other extensible computational biology desktop applications. We are currently focused on extending UCSF Chimera (http://www.cgl.ucsf.edu/chimera/) and Delano Scientific PyMOL (http://pymol.sourceforge.net). Our services use the SOAP protocol and are currently developed using open source Python-based projects.

We provide mapping between mutations and SNPs and protein structures. The mutations are mapped using Smith-Waterman based alignments. Swiss-Prot mutations and nonsynonymous SNPs in dbSNP are currently supported. See http://mutdb.org/ for a current list of the versions of each dataset we provide.

[Architecture diagram labels: LSW server; SOAP client; WSDLs; Twisted (twistedmatrix.com); pywebsvcs.sf.net client. (We will address service discovery in the future.)]

Software Plugin Extensions

The LSW Website contains developer tools and mailing lists, and we encourage other developers to extend their applications using our services. We have extended UCSF Chimera and Delano Scientific PyMOL to access our services. The three primary services we provide now are:
1. Disease associated mutation and SNP to protein structure mapping and visualization
2. Protein sequence and structure residue analysis with PSI-BLAST and S-BLEST
3. Catalytic residue prediction using a support vector machine (Youn, E., et al. submitted)

Installation

Plugin installation is easy and can be performed by a user without root privileges. Currently, all platforms supported by UCSF Chimera and PyMOL are supported, including UNIX platforms, Linux, Mac OS X and Windows XP. For either of the two supported clients (PyMOL or UCSF Chimera), simply follow the directions linked on the download page at http://www.lifescienceweb.org/. They will thereafter be available from the menu, as shown below.

Figure 1: Screen grab of the current services list from http://www.lifescienceweb.org/.

Services currently offered include:
• ClustalW alignments
• Mutation <-> PDB mapping

Automated Sequence and Structural Analysis of Protein Structures

Using PSI-BLAST and S-BLEST, we provide analysis of residue environments that match between protein structures in a queried database. Additionally, if the found environments represent similar structure or function classes, the environments that are most structurally associated with those environments are returned. This service is authenticated and SSL encrypted, and all coordinate data and analysis data are stored on our servers. Currently, users can query the ASTRAL 40 v1.69 and ASTRAL 95 v1.69 nonredundant domain datasets, as well as other commonly used nonredundant protein structure databases.

Figure 5: S-BLEST controller window shown using UCSF Chimera.
Figure 3: MutDB controller window, shown using PyMOL.

Controller features include (from the top):
• Tabbed selection of query type and controller options.
• Query entry text box and resulting hits from PDB shown below, with PDB ID, chain, residues, and TITLE of PDB.
On the right, the control box has (from top):
• Tabs for selecting hits in the database with matching environments (or significant sequence similarity using PSI-BLAST) or common functional annotations in the hits.
• A pull-down selection box showing the PDB IDs with matching environments and the Z-score between the best environments. Upon selection, the hit is downloaded and displayed in the visualization window (left).
• A button to retrieve a ClustalW alignment between the selected hit structure and the query.
• Once a PDB ID above is selected, the coordinates are downloaded and the mutations from Swiss-Prot (SP) and dbSNP (SNP) are retrieved. The database source, type, position, mutation and wildtype flag are displayed. Upon selection, the mutation is highlighted in the coordinate visualization window.
• The most significantly matched residue environments between the query and the hit. Displays the Z-score, the matched residues, the ranking of that match (overall for that query residue environment) and the Manhattan distance. When residues are selected from this list, the coordinates in the visualization window are aligned using the Chimera match command.
• Below the windows, a ClustalW alignment is shown.
• Status window that displays the number of mutations or PDB coordinates found.
• Mutation information window displays a link to the source (which opens in the browser), the position, and annotations that may be available, including PubMed ID (as a link), phenotype, and a link to MutDB.org.

Figure 2: Running our tools from the client application, shown using PyMOL.
Figure 4: MutDB structure visualization window showing a highlighted mutation using PyMOL.

• SVM based catalytic residue prediction
• Sequence conservation based on PSI-BLAST PSSM

Figure 6: S-BLEST controller window showing the function analysis tab using UCSF Chimera.

Updates

The annotations are currently updated every 2-3 months.
Internally, we provide services for annotating genes or coordinates not in the PDB usually through a collaboration. For information on how to do this please contact Sean Mooney, email@example.com. Acknowledgements CM and RH are funded through the IPCRES Initiative grant from the Lilly Endowment. SDM is funded from a grant from the Showalter Trust, an Indiana University Biomedical Research Grant and startup funds provided through INGEN. The Indiana Genomics Initiative (INGEN) is funded in part by the Lilly Endowment. The authors would like to thank the authors of UCSF Chimera and PyMOL for their help in extending their applications. You can download these tools from the following: • UCSF Chimera: http://www.cgl.ucsf.edu/chimera/ • Delano Scientific PyMOL: http://pymol.sourceforge.net Citations Dantzer J, Moad C, Heiland R, Mooney S. (2005) "MutDB services: interactive structural analysis of mutation data". Nucleic Acids Res., 33, W311-4. Peters B, Moad C, Youn E, Buffington K, Heiland R, Mooney S, “Identification of Similar Regions of Protein Structures Using Integrated Sequence and Structure Analysis Tools”. Submitted. Mooney, S.D., Liang, H.P., DeConde, R., Altman, R.B., Structural characterization of proteins using residue environments. Proteins, 2005. 61(4): p. 741-7.
I respectfully disagree, Neil... Shouldn't an app claiming to make DVD-Audio discs as a feature be able to handle multichannel files, a staple of that format? I might agree if we lacked that option and Steinberg wasn't selling it as a feature. But they are, so that's kind of moot.

Yesterday I tried dropping both interleaved and split multichannel files into the Montage, after the File editor failed to open them in sync. It was ugly and not great fun. I'm still not sure why "multichannel" montages exist, or at least how to use them (no help from the non-existent docs, and the help system doesn't make things easy to find).

Furthermore, while Cubase, Nuendo, Logic and ProTools are all great multichannel editors, most aren't very good at flexible deliveries (that's really what I do as a mastering engineer: deliver multiple files formatted for various consumer playback media and targets). They might be able to handle the processing and tasks by various kludges, but they lack useful features like meta-tagging, text/delivery documentation, clip/object based processing, autospacing, and advanced fade handling. Some are necessities, some nice-to-haves, but all are common in mastering DAWs.

One of WL7's unique and powerful features is its terrific file editor. It can open/munge any mono or interleaved stereo file you throw at it, even broken ones. It doesn't require any voodoo or magic to add more channels to its existing interleave-reading capabilities. But every mastering engineer who delivers files for DVD or gets sources from video houses REQUIRES some tool to edit/munge/tag interleaved multichannel sources, even if we process elsewhere. These grunt-level tools are distinct from DAWs, but already part of WL7's arsenal. All of this applies to analysis in spades: where is this functionality? Why is it missing?

Finally, some of the competition already does this fairly well. soundBlade is actually pretty good at some multichannel work, as is Peak on Mac (not sure about PC).
So by most measures, WL7 does need to improve its flexibility. What's lacking across the board is smart handling and reading of multichannel files in even conventional formats. It would be an improvement to state what kind of multichannel files WL7 prefers. It would be sufficient for it to cover the basics: AIFF and WAV interleaved and split files (in the most common/familiar .L/C/R and .1/2/3 configurations). And it would be ideal if we could accept most conventional formats and deliver to same, but I realize this might be something we get in a major revision down the line, 7.2 or later.

To me this looks more like an opportunity than a problem or bug. If WL7 handled interleaved multichannel files identically to stereo interleaved files, it would have a real, compelling advantage over the competition. If nothing else, it checks off a box/need when doing research on what to buy. At any rate, in many modern mastering rooms multichannel has been around for quite a while and shows no signs of going away as targets multiply. I see no functional benefits to using a multitrack DAW for mastering tasks, and they are incapable of doing many of the basic jobs that need doing.

I understand that many here work in a stereo-only world, delivering CDs and tracks for iTunes. At least a few of us have other unmet multichannel needs. Maybe Steinberg can win a few customers by addressing it, with the added sales and features benefiting all?
In my previous post in the series I talked about my experience with CAD and some platforms that are available for hobbyists. What I did not discuss was the history of CAD software and some of the key concepts that may differentiate one platform from another. However, I am not going to write the unabridged history of CAD; if you want to read that, I highly recommend going to Wikipedia and falling down the rabbit hole.

First, what do I mean when I say CAD? CAD, or Computer Aided Design, is a rather broad topic. It encompasses pretty much any software that can be used to aid in the process of designing something. This could range from PCB layout software (such as EAGLE or KiCAD) to a 2D drafting package (LibreCAD, AutoCAD, etc.) to a full 3D modeling suite (Inventor, Solidworks, Fusion 360, etc.). I prefer to think of each of these as ECAD, 2D CAD, and 3D CAD respectively. Based on the content of the first post in the series, it should be no surprise that when I say CAD I am referring to 3D CAD, or more specifically a 3D CAD modeling package.

It should also be noted that there is an entire other world of CAE, or Computer Aided Engineering, software out there. This consists of specialty automated design and simulation software. While CAD software can be classified under the CAE umbrella, I am not going to talk much about CAE software in this series.

Now that I've drawn a line in the sand between 3D CAD and other CAD flavors, I want to make another distinction between 3D CAD and 3D modeling. 3D modeling is certainly the broader term; any software that can be used to design a 3D shape can be considered 3D modeling software. But, as all squares are rectangles and not all rectangles squares, I do not consider all 3D modeling software to be 3D CAD software (in the engineering sense).
There are a lot of software packages out there that are wonderful for creating (designing, even) a 3D shape (Blender, Maya, and 3DSMax all come to mind), but they are insufficient (in my mind) for creating an engineered component. So, what sets a piece of 3D modeling software such as Blender apart from a 3D CAD package such as Inventor or Fusion 360? The key difference lies in how each represents a 3D shape. Blender, and programs like it, are designed specifically to create models that are represented as a mesh, while a 3D CAD program represents a model as a series of features.

Mesh Based Modeling

Having your model as a mesh gives you a lot of flexibility in creating smooth organic shapes. A mesh based modeling program includes a lot of features to increase or decrease the density of the mesh, and you are able to push/pull faces of the mesh to sculpt the model. In a model such as this the mesh is used to represent a series of surfaces, and as such, this is sometimes referred to as surface modeling. Now, a lot of CAD packages do include tools for creating surfaces and provide some level of being able to sculpt your model. These surfaces are usually defined by a Non-Uniform Rational B-Spline, or NURBS, surface. However, although these packages include tools for working with surfaces and meshes, they are not ideally suited for this task.

There is a ton of software available for editing mesh based models. Mesh based models are commonly stored in an STL or OBJ file (read more on these formats here and here). Both formats are extremely common in both 3D printing and video games, and as such, software packages to edit or create them may be of interest to many hobbyists. In the future I plan on using Blender for some other projects, but I feel that mesh based models (and the associated software packages) are not ideal for designing any sort of mechanical assembly.
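Since the STL format came up above, here is a minimal illustration of what its ASCII flavor looks like; the helper function is invented for this sketch and only writes a single triangle:

```python
# Toy sketch: serialize one triangle in the ASCII STL format. Real meshes
# contain many facets, and binary STL is more common for large files.
def triangle_to_stl(name, v1, v2, v3, normal=(0.0, 0.0, 1.0)):
    lines = ["solid {}".format(name),
             "  facet normal {:.6f} {:.6f} {:.6f}".format(*normal),
             "    outer loop"]
    for v in (v1, v2, v3):
        lines.append("      vertex {:.6f} {:.6f} {:.6f}".format(*v))
    lines += ["    endloop", "  endfacet", "endsolid {}".format(name)]
    return "\n".join(lines)

stl = triangle_to_stl("tri", (0, 0, 0), (1, 0, 0), (0, 1, 0))
print(stl.splitlines()[0])  # solid tri
```

The format is just a list of triangular facets with vertices and a normal, which is why meshes describe surfaces well but carry no design intent.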
Feature Based Modeling

Most engineering 3D CAD packages use a concept called feature based modeling (this is sometimes referred to as parametric modeling). In feature based modeling, you build your part one feature at a time. Each feature can be revisited to change or modify your design. There are typically many feature types used to build a model (such as extrudes/cuts, lofts, sweeps, and revolves) and features used to modify a part (such as chamfers or fillets).

Features that are used to define the shape of your model typically start with a sketch, or 2D profile, that defines the shape. In an extrude or cut, this 2D profile is extended linearly to add or remove material (think prism). A revolve takes the 2D profile and extends it radially around an axis (think torus). A sweep takes the profile and extends it along a user defined path, which is provided as a separate 2D (or 3D) profile. A loft is a special case that uses multiple 2D profiles to define a 3D shape; the profiles are interpolated between to generate a solid.

It is helpful to think of the feature types as coming in pairs. An extrude and a cut both take a 2D profile and extend it linearly, either to add material or to remove it. Similarly, a revolve, loft or sweep can either add or remove material. Some programs treat these pairs as separate feature types, while others treat them as the same feature type with a parameter that indicates whether material is being added or removed.

Additional features can be used to modify the shape of a part. This includes mirrors and patterns, where a feature is repeated; chamfers and fillets, which allow edges to be broken or rounded; and a myriad of other functions that modify the shape of the part.

In feature based modeling, each feature is typically defined by its edges and faces. This representation is called Boundary Representation, or B-Rep. These CAD documents are typically stored in formats that are proprietary to their platform.
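The feature-list idea above can be sketched in a few lines, using a toy 2D cell grid in place of a real B-Rep kernel (all names here are invented for illustration): each feature either adds or removes material, and the whole list is re-evaluated in order whenever a feature's parameters change.

```python
# Toy sketch of feature-based (parametric) modeling: a part is a list of
# features replayed in order; editing one feature rebuilds the whole model.
def evaluate(features, width, height):
    """Re-run the feature list in order to produce the final set of cells."""
    cells = set()
    for op, region in features:
        region_cells = {(x, y) for x in range(width) for y in range(height)
                        if region(x, y)}
        if op == 'add':       # extrude-like feature: add material
            cells |= region_cells
        elif op == 'remove':  # cut-like feature: remove material
            cells -= region_cells
    return cells

# Base "extrude": a 10x10 block; then a "cut": a 4x4 pocket in one corner.
features = [
    ('add',    lambda x, y: x < 10 and y < 10),
    ('remove', lambda x, y: x < 4 and y < 4),
]
part = evaluate(features, 10, 10)
print(len(part))  # 84 cells (100 - 16)

# Edit the cut's parameters and re-evaluate: the part updates everywhere.
features[1] = ('remove', lambda x, y: x < 2 and y < 2)
print(len(evaluate(features, 10, 10)))  # 96 cells (100 - 4)
```

The add/remove flag mirrors the extrude/cut pairing described above, and re-running the list is the parametric "rebuild" step that real CAD packages perform when you edit an earlier feature.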
However, there are standard formats that most CAD packages can read and write, such as STEP and IGES.

Other Modeling Schemes

Mesh and feature based modeling aren't the end-all-be-all of 3D CAD. There are many other ways of representing 3D data; you can read a lot more about other modeling schemes on the Wikipedia page on Solid Modeling. The one that you will encounter most often as a hobbyist is Constructive Solid Geometry (CSG). In CSG, primitive shapes are combined using boolean operations (intersect, difference, join) to create complex 3D solids. Most CAD packages have support for performing these boolean operations on a feature or mesh based model.
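The CSG idea can be illustrated by treating solids as point-membership tests and implementing the boolean operations directly (a toy sketch, not how production CAD kernels represent geometry):

```python
# Toy CSG: a solid is a predicate telling whether a point is inside it,
# and the boolean operations combine predicates.
def sphere(cx, cy, cz, r):
    """Primitive: points within distance r of (cx, cy, cz)."""
    return lambda x, y, z: (x - cx)**2 + (y - cy)**2 + (z - cz)**2 <= r**2

def box(x0, x1, y0, y1, z0, z1):
    """Primitive: axis-aligned box."""
    return lambda x, y, z: x0 <= x <= x1 and y0 <= y <= y1 and z0 <= z <= z1

def join(a, b):        # boolean union
    return lambda x, y, z: a(x, y, z) or b(x, y, z)

def intersect(a, b):   # boolean intersection
    return lambda x, y, z: a(x, y, z) and b(x, y, z)

def difference(a, b):  # boolean difference: a minus b
    return lambda x, y, z: a(x, y, z) and not b(x, y, z)

# A cube with a spherical pocket cut out of one corner:
solid = difference(box(0, 2, 0, 2, 0, 2), sphere(2, 2, 2, 1))
print(solid(1.0, 1.0, 1.0))  # True: deep inside the cube
print(solid(1.9, 1.9, 1.9))  # False: inside the spherical cut
```

The same three operations (join, intersect, difference) are what a CAD package applies when you combine features or meshes with booleans.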
Coredump after to_binary() => to_term() roundtrip

I am able to reproduce a core dump with the simple roundtrip() function below (see https://github.com/evnu/rustler_core_dump). The function takes a term, encodes it into a binary, re-encodes it into a term, and returns that term wrapped with a 1 within a tuple.

#[macro_use]
extern crate rustler;

use rustler::{NifEncoder, NifEnv, NifResult, NifTerm};

rustler_export_nifs! {
    "Elixir.RustlerCoreDump",
    [
        ("roundtrip", 1, roundtrip),
    ],
    None
}

fn roundtrip<'a>(env: NifEnv<'a>, args: &[NifTerm<'a>]) -> NifResult<NifTerm<'a>> {
    let original: NifTerm = args[0].decode()?;
    let binary = original.to_binary();
    let roundtripped: NifTerm = binary.to_term(env);
    Ok((1, roundtripped).encode(env))
}

When I use the following Elixir implementation to load the NIF, running test results in a core dump.

defmodule RustlerCoreDump do
  use Rustler, otp_app: :rustler_core_dump, crate: "rustler_core_dump"

  def roundtrip(term), do: throw(:nif_not_loaded)

  def test do
    reference = make_ref()
    IO.inspect(reference)
    {1, reference} = roundtrip(reference)
    IO.inspect(reference)
  end
end

Note that inspecting the resulting reference assigned after roundtrip seems to be crucial: I need to either IO.inspect(reference) or match with {1, ^reference} = roundtrip(reference) to reproduce this.

Running the Example

mix run -e RustlerCoreDump.test
==> rustler
Compiling 1 file (.yrl)
Compiling 1 file (.xrl)
Compiling 2 files (.erl)
Compiling 6 files (.ex)
Generated rustler app
==> rustler_core_dump
Compiling NIF crate :rustler_core_dump (native/rustler_core_dump)...
Finished release [optimized] target(s) in 0.0 secs
Compiling 1 file (.ex)
Generated rustler_core_dump app
#Reference<0.3640706538.2123366403.78090>
[1] 7004 segmentation fault (core dumped) mix run -e RustlerCoreDump.test

Version Information

rust (rustc -V):
rustc 1.26.0 (a77568041 2018-05-07)

Elixir (elixirc -v):
Erlang/OTP 20 [erts-9.3] [source] [64-bit] [smp:4:4] [ds:4:4:10] [async-threads:10] [hipe] [kernel-poll:false]
Elixir 1.6.4 (compiled with OTP 20)

mix.lock:
%{
  "rustler": {:git, "https://github.com/hansihe/rustler", "00bcc871cdacc70af35ed29daeb9e3f37cd3a1f4", [sparse: "rustler_mix"]},
}

I added some debugging output:

fn roundtrip<'a>(env: NifEnv<'a>, args: &[NifTerm<'a>]) -> NifResult<NifTerm<'a>> {
    let original: NifTerm = args[0].decode()?;
    eprintln!("{:?}", original);
    let binary = original.to_binary();
    let roundtripped: NifTerm = binary.to_term(env);
    eprintln!("{:?}", roundtripped);
    Ok((1, roundtripped).encode(env))
}

Resulting run:

mix run -e RustlerCoreDump.test
Compiling NIF crate :rustler_core_dump (native/rustler_core_dump)...
Compiling rustler_core_dump v0.1.0 (file:///home/mo/tools/rustler_core_dump/native/rustler_core_dump)
Finished dev [unoptimized + debuginfo] target(s) in 0.47 secs
#Reference<0.441875240.1587806209.52051>
#Ref<0.441875240.1587806209.52051> <cp/header:0x0000000000000000>

A similar bug was fixed with OTP 20.3.7:

OTP-15080, Application(s): erts
Fixed bug in enif_binary_to_term which could cause memory corruption for immediate terms (atoms, small integers, pids, ports, empty lists).

I can still trigger the segfault with the example above.

@evnu Could you test it again on latest master?

ah, you beat me to it

@hansihe The example does not segfault any more, thank you!
I needed to adapt the conversion into a Term, as to_binary() now returns an OwnedBinary:

let roundtripped: Term = binary.release(env).to_term(env);

Note that the resulting roundtripped does not equal the value put into the NIF:

eprintln!("{:?}", original == roundtripped);
#=> false

Converting it within Elixir with :erlang.binary_to_term/1 produces the original value again, though. I guess I would need to use env::binary_to_term() to convert back into the original term.
Fatal: unable to open config file: stat /mnt/external/restic/config: no such file or directory

I have two working backups using these docker containers: one fully working backup to Backblaze using S3 buckets, and one partially working backup of the same data to an external direct attached storage NAS. The containers are both running on the same QNAP NAS.

When I say "fully working" above, I mean that all three containers / container functions (backup, prune, and check) are starting and working without error. The partially working scenario is backing up without issue, but both the prune and check containers are having issues finding the repository config file.

Here is what the functioning backup logs to the external direct attached NAS look like:

Checking configured repository '/mnt/external/restic' ...
Repository found.
Executing backup on startup ...
Starting Backup at 2024-07-13 17:27:17
open repository
lock repository
using parent snapshot 5c9a1ce2
load index files
start scan on [/mnt/restic]
start backup on [/mnt/restic]
scan finished in 36.511s: 251343 files, 2.964 TiB
Files:       0 new,  2 changed, 251341 unmodified
Dirs:        0 new, 13 changed, 23196 unmodified
Data Blobs:  2 new
Tree Blobs: 13 new
Added to the repository: 11.877 MiB (5.870 MiB stored)
processed 251343 files, 2.964 TiB in 1:56
snapshot de015c2c saved
Backup successful
Finished backup at 2024-07-13 17:29:14 after 117 seconds
Scheduling backup job according to cron expression.
new cron: 0 30 7 * * *

This log, and the resulting backup, appears to clearly show that the repository (and therefore the config file) has been found.

Here is what the logs look like for the check container:

Checking configured repository '/mnt/external/restic' ...
Fatal: unable to open config file: stat /mnt/external/restic/config: no such file or directory
Is there a repository at the following location?
/mnt/external/restic
Could not access the configured repository.
Not trying to initialize because SKIP_INIT is set in your configuration.
Scheduling check job according to cron expression.
new cron: 0 15 9 * * *

Here is what the logs look like for the prune container:

Checking configured repository '/mnt/external/restic' ...
Fatal: unable to open config file: stat /mnt/external/restic/config: no such file or directory
Is there a repository at the following location?
/mnt/external/restic
Could not access the configured repository.
Not trying to initialize because SKIP_INIT is set in your configuration.
Scheduling prune job according to cron expression.
new cron: 0 0 8 * * *

Yet I can run both the check and prune jobs successfully, manually, from the command line.

Running the check job manually:
docker compose run --rm backup check

Running the prune job manually:
docker compose run --rm backup prune

Here is a screenshot showing the config file that cannot be found:

My docker compose file looks like this:

[rstrom@NASF8629F external-docker-backup]$ cat docker-compose.yaml
version: "3.3"
services:
  backup:
    image: mazzolino/restic
    hostname: restic-docker-backup-external
    # restart: unless-stopped
    restart: always
    environment:
      RUN_ON_STARTUP: "true"
      BACKUP_CRON: "0 30 7 * * *"
      RESTIC_REPOSITORY: /mnt/external/restic
      RESTIC_PASSWORD: <redacted>
      RESTIC_BACKUP_SOURCES: /mnt/restic
      RESTIC_BACKUP_ARGS: >-
        --tag qnap-rstrom-home-drive
        --exclude *.tmp
        --verbose
      TZ: America/Los_Angeles
    volumes:
      - /share/CACHEDEV1_DATA/homes/rstrom:/mnt/restic
      - /share/TerraMaster/qnaprestic:/mnt/external
  prune:
    image: mazzolino/restic
    hostname: restic-docker-prune-external
    restart: unless-stopped
    environment:
      SKIP_INIT: "true"
      RUN_ON_STARTUP: "false"
      PRUNE_CRON: "0 0 8 * * *"
      RESTIC_PRUNE_ARGS: >-
        --verbose
      RESTIC_REPOSITORY: /mnt/external/restic
      RESTIC_PASSWORD: <redacted>
      TZ: America/Los_Angeles
  check:
    image: mazzolino/restic
    hostname: restic-docker-check-external
    restart: unless-stopped
    environment:
      SKIP_INIT: "true"
      RUN_ON_STARTUP: "false"
      CHECK_CRON: "0 15 9 * * *"
      RESTIC_CHECK_ARGS: >-
        --read-data-subset=10%
        --verbose
      RESTIC_REPOSITORY: /mnt/external/restic
      RESTIC_PASSWORD: <redacted>
      TZ: America/Los_Angeles
[rstrom@NASF8629F external-docker-backup]$

This job was created by copying the known working Backblaze docker-compose.yaml file and then modifying it to point to the external direct attached storage. The backup to the external storage works fine. I have tested mounting the backup on the external storage and navigating the backup file structure. It is only the prune and check portions of this that are not working. I have been over the docker-compose.yaml file many, many times and I cannot find anything wrong with it. Can someone, anyone, please tell me if there is something wrong with my configuration or if there is some bug that is causing this issue? Thanks!

The docker compose file defines 3 different services, and you need to define the volume attachments for every service. So you need to add the volumes section to the prune and check sections as well.

> The docker compose file defines 3 different services, and you need to define the volume attachments for every service. So you need to add the volumes section to the prune and check sections as well.

I have made the modification and added the volumes section to the prune and check sections, and that appears to be working now. As I mentioned, I copied the basic docker-compose.yaml file from a functioning Backblaze backup that I have configured using this project. Everything about that backup, prune, and check is working without any errors for the Backblaze backup, and there are no entries for the volumes in the prune or check sections of the docker-compose.yaml file for that backup. I'm curious why there is a difference. Thanks!

check and prune both only need access to the repository. In the backblaze case, it's a remote repository, so no docker volume is needed.
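For reference, the fix described in the replies amounts to repeating the same repository volume mapping under the prune and check services (the path is taken from the compose file above; only the volumes lines are new, and the elided settings stand in for the existing ones):

```yaml
  prune:
    # ... image, hostname, restart, environment as before ...
    volumes:
      - /share/TerraMaster/qnaprestic:/mnt/external

  check:
    # ... image, hostname, restart, environment as before ...
    volumes:
      - /share/TerraMaster/qnaprestic:/mnt/external
```

Without these mappings, /mnt/external simply does not exist inside the prune and check containers, which is why restic reports that the config file cannot be found even though the backup service sees it.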
07-08-2012 09:47 PM
Can someone help me resolve the following issue? We have a PCIe bit file which gets loaded into the system, and when we restart to get the PCIe enumeration, the bit file hangs. But the same bit file actually works fine in two other systems.

07-08-2012 10:36 PM
Can you add more information as to what you mean by "the bit file hangs" during restart, so that others can help you? Do you mean your PC hangs during restart (boot), or the FPGA is not detected, or something else? Also, what do you mean by "two other systems"? My assumption is that you mean two other FPGAs/boards, and not two other PCs or two other PCIe systems (like two different bridges or switches). Can you clarify this as well?

Can you give us more information on your PCIe setup? Is this PCIe Gen1 or Gen2? How wide is the lane (x1, x4, etc.)? What device is this for? What core version? Is this your own/custom design, or is it the example design Xilinx provided? Have you tried the example design that comes with the core (the PIO design, generated along with the core, which resides in the example design folder within the core directory)? If they're two different FPGAs or boards, are they the exact same board design (termination, routing, etc.) and FPGA type (speed grade, silicon version, etc.)?

The first thing I would suggest is to check that you meet timing on the design; pay close attention to any unconstrained clock paths, if there are any. Double-check your constraints and make sure all the constraints provided in the example design are in your current .ucf. You may have enough variance in PVT to make it not work. Also, it would be great if you can do a quick ChipScope capture and figure out which ltssm_state your core is stuck on. In normal operation, this ltssm_state should indicate that you are in state L0 (more information about this signal is in the PCIe core User Guide).
Also, add the RXstatus signal from the transceivers and check if there's any issue with the link (more information about this signal can be found in the device transceiver User Guide).

07-08-2012 10:53 PM
The PC hangs when I load that particular bit file on the same board on PC1 and PC2. The same bit file and same board work fine on PC3 and PC4. The board is a custom board, and we have a working PCIe bit file which works on all the PCs. The bit file that has the hang issue has some other functions added on top of the PCIe design.

07-08-2012 10:59 PM
When you added more functions in the bad bit file, did you increase the number of BARs or change the size of the BARs? If so, how big were they in the working one and how big in the non-working ones? Check with ChipScope for the ltssm_state signal as I suggested in the previous post; that would help us know where it gets stuck.

07-08-2012 11:19 PM
I will look through it in ChipScope Pro. Thank you for the response. I got to know that extending the PERST# signal can solve this kind of issue; I want to know whether it does.

07-09-2012 12:16 AM
When you extend the PERST signal, you may introduce a new problem where it may miss enumeration altogether, but it may be a good test to do. From case history, I know a small number of PCs assert the PERST signal multiple times during boot and sometimes cause issues with our transceivers. You may want to check how many times your PC asserts this PERST signal; if it's asserted multiple times during boot, you may want to check if you can implement logic that would "filter" this reset (or put caps on the reset line to help filter them out).

07-12-2012 09:21 PM
PERST is usually connected to the system reset of our PCIe core. It's a reset signal coming from the PCIe slot. The information about this signal should be in the core User Guide and also in the PCI Express specification.
So we've been using IE10 for a while now, the GPOs for which were created from a 2012 R2 reference machine and thus should be fine for IE11. My work machine is one of the few Windows 8.1 machines in the business and I have no problems receiving all of the GPO settings, and life is wonderful (well, for IE anyway :)).

Part of the company now has a requirement to upgrade their browsers to IE11 on their Windows 7 (x64 Pro, if it makes any difference) machines, but that's where we are running into problems. While downloads from Internet sites seem to be fine, I can't download anything from our Intranet sites! This is the same GPO as is applying to my machine, and the same files download on my machine without issue. The same problem is replicated across the other Windows 7 / IE11 machines we've been testing with, and I am about to build a new Windows 8.1 / IE11 machine and check the results on there. Has anyone else run into any problems with IE11 and Windows 7 at all?

What error messages are you getting? What errors are your event logs showing?

Thanks for the quick response. There's nothing in the event log about this, and no error is shown. IE will appear to allow the download: the download box even shows, and I get the Open, Save and Save As buttons, but then when I click any of these the download box vanishes and the file is never opened or downloaded.

OK, I've spotted something new: it seems not to be related to Intranet vs. Internet sites, but encrypted vs. non-encrypted. Downloads on encrypted sites fail, but non-encrypted sites are fine.

Bump. Anyone had any experience with IE 10 on Windows 7 failing to download files from HTTPS sites?

Whereas the machines on which IE11 works show this download dialog box:

Anyone any ideas? Please?

So, further testing shows that this issue potentially stems from IE10 somehow.
After removing IE11 from a machine which is failing, I found that IE10 had the same save-file dialogue in the middle of the screen (though the download would work), whereas on machines which haven't had an issue, the file save dialogue in IE10 is the box at the bottom of the screen. The failing machine I have access to has never had IE9; it went straight from 8 to 10, but then so did the working machine!

FYI, following a lot of Microsoft diagnostics and a discussion with them, they have said that there is a corruption in the WIM file we've been using to deploy Windows for some time now. Microsoft have said that they cannot (or possibly will not) fix this issue, and our solution is to re-image the machines which need IE11! Should anyone else find they have this same issue, my recommendation would be to stop using the build image you are using as soon as you can.

May 19, 2015 at 4:18 UTC
Our company's (x32 Windows 7 Pro) machines are having the exact same issue (I am almost suspicious that you work in my company, just kidding). We also went from 8 to 10 based on our corporate policies. We have multiple users affected by this. It's frustrating that all Microsoft can say is to reimage machines. Best of luck to you. Feel free to PM me if you figure something out. I'll do the same for you.
dispatch_group_create(3)    BSD Library Functions Manual    dispatch_group_create(3)

NAME
     dispatch_group_create, dispatch_group_async, dispatch_group_wait,
     dispatch_group_notify -- group blocks submitted to queues

SYNOPSIS
     long
     dispatch_group_wait(dispatch_group_t group, dispatch_time_t timeout);

     void
     dispatch_group_notify(dispatch_group_t group, dispatch_queue_t queue,
         void (^block)(void));

     void
     dispatch_group_notify_f(dispatch_group_t group, dispatch_queue_t queue,
         void *context, void (*function)(void *));

     void
     dispatch_group_async(dispatch_group_t group, dispatch_queue_t queue,
         void (^block)(void));

     void
     dispatch_group_async_f(dispatch_group_t group, dispatch_queue_t queue,
         void *context, void (*function)(void *));

DESCRIPTION
     A dispatch group is an association of one or more blocks submitted to
     dispatch queues for asynchronous invocation. Applications may use
     dispatch groups to wait for the completion of blocks associated with the
     group.

     The dispatch_group_create() function returns a new and empty dispatch
     group.

     The dispatch_group_enter() and dispatch_group_leave() functions update
     the number of blocks running within a group.

     The dispatch_group_wait() function waits until all blocks associated
     with the group have completed, or until the specified timeout has
     elapsed. If the group becomes empty within the specified amount of time,
     the function will return zero, indicating success. Otherwise, a non-zero
     return code will be returned. When DISPATCH_TIME_FOREVER is passed as
     the timeout, calls to this function will wait an unlimited amount of
     time until the group becomes empty, and the return value is always zero.

     The dispatch_group_notify() function provides asynchronous notification
     of the completion of the blocks associated with the group by submitting
     the block to the specified queue once all blocks associated with the
     group have completed. The system holds a reference to the dispatch group
     while an asynchronous notification is pending; therefore it is valid to
     release the group after setting a notification block. The group will be
     empty at the time the notification block is submitted to the target
     queue. The group may either be released with dispatch_release() or
     reused for additional operations.

     The dispatch_group_async() convenience function behaves like so:

          void
          dispatch_group_async(dispatch_group_t group,
              dispatch_queue_t queue, dispatch_block_t block)
          {
                  dispatch_retain(group);
                  dispatch_group_enter(group);
                  dispatch_async(queue, ^{
                          block();
                          dispatch_group_leave(group);
                          dispatch_release(group);
                  });
          }

RETURN VALUES
     The dispatch_group_create() function returns NULL on failure and
     non-NULL on success.

     The dispatch_group_wait() function returns zero upon success and
     non-zero after the timeout expires. If the timeout is
     DISPATCH_TIME_FOREVER, then dispatch_group_wait() waits forever and
     always returns zero.

MEMORY MODEL
     Dispatch groups are retained and released via calls to dispatch_retain()
     and dispatch_release().

FUNDAMENTALS
     The dispatch_group_async() and dispatch_group_notify() functions are
     wrappers around dispatch_group_async_f() and dispatch_group_notify_f(),
     respectively.

CAVEATS
     In order to ensure deterministic behavior, it is recommended to call
     dispatch_group_wait() only once all blocks have been submitted to the
     group. If it is later determined that new blocks should be run, it is
     recommended not to reuse an already-running group, but to create a new
     group.

     dispatch_group_wait() returns as soon as there are exactly zero enqueued
     or running blocks associated with a group (more precisely, as soon as
     every dispatch_group_enter() call has been balanced by a
     dispatch_group_leave() call). If one thread waits for a group while
     another thread submits new blocks to the group, then the count of
     associated blocks might momentarily reach zero before all blocks have
     been submitted. If this happens, dispatch_group_wait() will return too
     early: some blocks associated with the group have finished, but some
     have not yet been submitted or run.

     However, as a special case, a block associated with a group may submit
     new blocks associated with its own group. In this case, the behavior is
     deterministic: a waiting thread will not wake up until the newly
     submitted blocks have also finished.

     All of the foregoing also applies to dispatch_group_notify(), with
     "block to be submitted" substituted for "waiting thread".

SEE ALSO
     dispatch(3), dispatch_async(3), dispatch_object(3),
     dispatch_queue_create(3), dispatch_semaphore_create(3), dispatch_time(3)

Darwin                           May 1, 2009                           Darwin
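The enter/leave counting that the man page describes is easy to model outside of libdispatch. Below is a rough Python analogue (not part of the man page; class and method names are invented for illustration): a dispatch group is essentially a counter plus a wait primitive, and the `async_call` method mirrors the `dispatch_group_async()` wrapper shown above.

```python
import threading

class Group:
    """Rough analogue of a dispatch group: a counter plus a condition
    variable. enter() increments the count, leave() decrements it, and
    wait() blocks until the count reaches zero, much as
    dispatch_group_wait() does (including the early-return caveat)."""

    def __init__(self):
        self._count = 0
        self._cond = threading.Condition()

    def enter(self):
        with self._cond:
            self._count += 1

    def leave(self):
        with self._cond:
            self._count -= 1
            if self._count == 0:
                self._cond.notify_all()

    def async_call(self, fn):
        # Mirrors the dispatch_group_async() wrapper: enter the group
        # before scheduling the work, leave it when the work finishes.
        self.enter()

        def run():
            try:
                fn()
            finally:
                self.leave()

        threading.Thread(target=run).start()

    def wait(self, timeout=None):
        # True if the group emptied in time (cf. a zero return from
        # dispatch_group_wait), False on timeout (cf. non-zero).
        with self._cond:
            return self._cond.wait_for(lambda: self._count == 0, timeout)
```

The CAVEATS section maps directly onto this model: if another thread calls `async_call` while a waiter is blocked, `_count` can momentarily hit zero before all work has been scheduled, which is exactly the early-wakeup case the man page warns about.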
I'm creating multiple stories per epic. For each story I need to set the Epic and Team, which means having to filter through all the epics/teams to get the one I want. This is causing extra work, and I am wondering if there is a better way of going about this. I'm using the "Create" button from the top banner.

Hi @Sholom R, if you're open to solutions from the Atlassian Marketplace, this would work very elegantly in the app that my team is working on, JXL for Jira. JXL is a full-fledged spreadsheet/table view for your issues that allows viewing and inline-editing all your issue fields. You can also inline-create new issues, which, in combination with JXL's hierarchy and grouping capabilities, allows "pre-setting" both the epic link and the team for you. Here's how this looks in action: Note that the new story is created within the WORK-150 epic and has the team (BVB Dortmund) set. (In this case, the team is in a multi-select custom field, but it works with any field.) You can also pre-set multiple fields through nested grouping. Hope this helps,

Hi @Sholom R, you may want to try out the newly released Move and Organize for Jira app, which provides a dynamic view of your Jira projects in a tree hierarchy. It allows you to very efficiently keep adding multiple tasks under a specific place in the hierarchy using just the keyboard and the Enter key. Automatic assignment of new tasks is not a current feature, but could be considered. It allows you to visually overview and also edit issues on the fly (create, edit, recursively move, delete, etc.). Zoom and pan the tree view quickly and smoothly using keyboard shortcuts and mouse drag-and-drop support that enables you to work very efficiently, also with larger projects! Feel free to reach out to me directly if you have any questions or would like to provide suggestions! With best regards,

Disclosure: I am a representative of the company offering this solution

Hello @Sholom R
Welcome to the community.
If you have an Agile board (Scrum or Kanban) where the issues are displayed, and that board has the Backlog screen enabled, and on the Backlog screen the Epics pane is displayed, then you can click the Create issue in Epic link. The Epic will be automatically selected when the Create Issue dialog is opened. There is also a Create Issue option at the bottom of the Backlog issue list. If you click that the issue will also be automatically added to the highlighted Epic. Is the Team value set at the Epic level? If the Team value is set at the Epic level and the same value needs to be set for all the child issues, that could be done automatically after the issue is created using an Automation Rule.
What could I improve on my monkey model?

So I made this monkey model, and I am no expert on modeling, so I was wondering if you guys could take a look and give me feedback! If you want to look at the file, download it at: https://drive.google.com/file/d/0ByrMQl4A3FTwcUVSalF2WXZ5Slk/view?usp=sharing Thanks!

Recalculate normals in Edit mode for the body part of the mesh. See http://blender.stackexchange.com/questions/3606/why-are-some-faces-in-my-mesh-darker. Aside from that, I think this is not really a question about using Blender, which is out of scope for Blender.SE. Blender Artists is a more appropriate place for this subjective question stated without criteria.

Where did you come up with that interesting monkey face? Perhaps you can ask a question such as "How can I get a rounder back for the monkey?" Then it would not be so subjective.

@atomicbezierslinger - isn't that just the monkey face that ships with Blender? http://blenderartists.org/forum/showthread.php?274119-The-Blender-Monkey

@Baronz Is it? I thought it was more Lancelot Link Secret Chimp featuring Mata Hari.

oooh.. i love it. but sadly this isn't in the scope of our site! come to this chat room and i'd be happy to help you: http://chat.stackexchange.com/rooms/8888/the-renderfarm%5C

@atomicbezierslinger Thank you, yes, it is the monkey that comes with Blender, but I made the rest!

@atomicbezierslinger LOL :D

JUST SOME TIPS: The body-part topology of your model is decent, though the hands' and legs' topology is too dense. Try to keep it as simple as it can be (start to model fingers using a circle with only 8 vertices). Do the same for the toes (see my topology suggestion below). Always start with the simplest shapes and then slowly add more geometry. Your question reminds me of this one: How to retopologize my model? I wish I had more time right now to retopologize this model for you and show you some simple solutions, but I'm quite busy right now :(.
Remember that simple and smart topology is key to making the model bend well while animating (a simple finger-topology bend example here: How do rigs relate to weights?)

Monkeys do not have feet like that. How would they hold a banana? They have beautiful hairy feet.

Male monkeys don't have feet like that, I agree. Please notice that I've presented FEMALE monkey feet. Modern female monkeys shave their feet regularly nowadays.

Oh, it's a Show Monkey, very High Gloss.
What is the TEAS test difficulty level?

I was curious what the TEAS test difficulty level is in practice. I tried to answer this question by comparing my results with "how to deal with this situation" research papers, but I found that "how to deal with this" is not a universal problem and leads not to the truth. Is the TEAS test impossible? The response by the research paper is that most T.E. is "intended for the TEAS developers", but what matters is the TEAS: in this case it is tested beyond your ability, and the reason to use it is given. Also, my question is "whether T.E. is reasonable" or less reasonable. Will "the test is not reasonable" be good enough? Thanks ahead of time for answering this question.

I tested and found that "how to deal with this situation" (and also what type of problem results in these kinds of things) results in the TEAS test, but using the value of the rating indicates for which stage of your problem you are positive and more negative. So I guess you expect the test to show "what any test leads to".

2. In my experience the TEAS results are very rough, and they include questions not to ask, how to sit down, how to proceed and how to help out. For example, it can be fine to sit for some 15 minutes, to take what you want to teach (this is probably over 1-10 per hour, so you may need to pay a consulting fee for 5 hours a day). I now have 12 TEAS question types. I have two questions. One of them is "What is my problem in your situation?" For example, I have an exam that your exam says involves identifying the problem in two ways: the test itself and the problem itself. The question is how you are being shown this exam, which is not the exam title. Were you shown to take the problem for all the different exam questions? These are many more questions than the exam title shows. But why is that?
I have seen many questions with the titles "Problem", "How is it?", "Which test results do you feel are the most effective?" and "What did you achieve?". How are you perceiving and showing this exam? Or using my own measurements? Or do you use different measurement tools for various exam questions? I have tried to learn how to use some of the measurements when I use measurement tools. I even have a set of measurements with which I can visually compare these categories.

2b. To answer a further 2b: the standard version of TEAS would have answered 14 "How are you feeling? Who taught you what type of problem you want?". I would have a problem in my 2b because, as someone who works on exams, you will already know whose kind of problems to work with, and with what type of questions you have.

What is the TEAS test difficulty level?

Teasure strength (TE)
At least +5% (with a score of +5 or lower) or +10%+s, except negative and negative.

Teasure strength (te)
Strength above +10%, or -5 to 10% (+10 to +5, but on the other hand TE = minimum 2:1)
-10 to -10% (+10 to +10, but on the other hand TE = minimum 3:1)

Teacheptic difficulty
The most common difficulty is some 5:1. A score of above +10 points does not make it into the severity of your problem. They suggest that a high or higher score implies a poor sense of, or ability to, move your body with your internal muscles. Also, a high score will be used as the sign of serious illness, and your functioning might be affected by head trauma. Also, the above score could mean physical medical problems, medical problems that are sensitive to your environment.

Standardized rating scales: Interpretation
The TEAS (TEAS-HOS) Standardized Rating Scale and the TEAS-A (TEAS-A-SE) Basic scale are two child-specific scales designed to check how well a child knows how to correctly manage his or her child's natural body postures for their newborn.
A "teasure strength" score of +5 points on the TEAS-HOS scale that represents child-on-child social interaction and/or peer interaction with his or her parent is given as an example of a pre-infant. Using this score, the child may feel impaired or not be able to sit up and/or can barely touch their body. Within this score, the child may perform tasks like picking up dirt or scraping, or looking in the direction of a tree, cleaning the grave, setting out clothes and

What is the TEAS test difficulty level?

A very simple as well as straightforward one.

1. I would've actually done this once, but the point appears to be in actually designing the test and developing the problem implementation.

2. What I call the E-test. If you check, your code looks the same. If not, then you should look into JIT testing. When it comes to E-tests, developers are generally better off with JIT than with the E-tests. In this case both the E-test and the E-diff test are used as well. You would have to do more in the questions than this.

3. If you build your solution, you have the following situation: what about the E-diff test in the second question in section 1? Check your code!

4. If you finish the test before you find the current problem as you went to the second part, then you can suggest the way to design the test. Here is the key part to better understand: in E-diff we can see that the test should provide both the solution and the error code. Based on this, I guess we would have to draw the expected area. However, if you build the test before you build/publish the same with the E-diff, then you would always be at the right point.

5. One thing to be very careful about is if you are writing a wrong test that isn't your own. It may also be that if you have a non-E-test test, then you can create more valid tests by starting with the E-isolation test, and later use the E-diff test.
Right before the test you have to start with the wrong definition of your E-diff test, especially if you are more concerned with C/C++ types than with JIT-style.

6. After you get very into the wrong issue, the point is that you need to compare the values of the right
Yes, Google and YouTube are getting into the games industry. And why not? They're already in a slew of other online service industries, so why not one more? But this one is special because it's using web-based video. Not only that, but it takes otherwise fairly non-interactive content and gives it a massive interactive component. What on Google Earth am I talking about? A patent that recently came to light for a:

WEB-BASED SYSTEM FOR GENERATION OF INTERACTIVE GAMES BASED ON DIGITAL VIDEOS

In a nutshell of technical lingo it is:

Systems and methods are provided for adding and displaying interactive annotations for existing online hosted videos. A graphical annotation interface allows the creation of annotations and association of the annotations with a video. Annotations may be of different types and have different functionality, such as altering the appearance and/or behavior of an existing video, e.g. by supplementing it with text, allowing linking to other videos or web pages, or pausing playback of the video. Authentication of a user desiring to perform annotation of a video may be performed in various manners, such as by checking a uniform resource locator (URL) against an existing list, checking a user identifier against an access list, and the like. As a result of authentication, a user is accorded the appropriate annotation abilities, such as full annotation, no annotation, or annotation restricted to a particular temporal or spatial portion of the video.

How are they going to implement it? I believe this to be the heart of the matter:

A video may have associated with it one or more annotations, which modify the appearance and/or behavior of a video as it was originally submitted to an online video hosting site. Some examples of annotations are graphical text box annotations, which display text at certain locations and certain times of the video, and pause annotations, which halt playback of the video at a specified time within the video. Some annotations, e.g. a graphical annotation (such as a text box annotation) comprising a link to a particular portion of a target video, are associated with a time of the target video, which can be either the video with which the annotation is associated, or a separate video. Selecting such annotations causes playback of the target video to begin at the associated time. Such annotations can be used to construct interactive games using videos, such as a game in which clicking on different portions of a video leads to different outcomes.

So Google and YouTube are going to turn the service into something like those DVD-based games where something happened on screen and you would then click one of several buttons to choose your own path through the story. These types of games were generally accompanied by some sort of board game, etc. But with YouTube it looks like you will be able to shoot an entire story out of sequence and then have annotations that first pause the video and then, depending on which annotation is clicked, forward or reverse the video to another specific clip that ends in yet another annotation with multiple options. You could even have an entire story play through and then have the viewers choose their own ending.

This is going to be great for all sorts of video marketing, especially if the technology is licensable and usable outside of the Google sites, or even if the videos are fully embeddable and retain their interactivity. This is not only an ingenious way to create interaction with online video, but it also plays into the video game madness that has been consuming the world and pushing the games industry into the upper echelons of entertainment (I think it was $27B in the US in 2008). This could certainly mean a whole new way to utilize YouTube and online video, not only for fun but for profit. Bonus!
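To make the branching mechanism concrete, here is a hypothetical sketch in Python (the patent specifies no code; all names here are invented for illustration) of a pause annotation whose options jump playback to different target times, which is exactly how a "choose your own ending" video would be wired up:

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    """A pause annotation: playback halts at `at_time`, and each option
    links to a (target_video_id, seek_time) pair to resume from."""
    at_time: float
    options: dict = field(default_factory=dict)

@dataclass
class Video:
    video_id: str
    annotations: list = field(default_factory=list)

def choose(annotation, label):
    # Clicking an option tells the player which clip/time to jump to.
    return annotation.options[label]

# A two-ending story built from a single out-of-sequence video:
story = Video("story", annotations=[
    Annotation(at_time=30.0, options={
        "open the door": ("story", 45.0),  # jump forward to one branch
        "walk away": ("story", 90.0),      # jump to the other ending
    }),
])
```

Since an option's target can name a separate video, the same structure also covers games that hop between multiple uploaded clips rather than seeking within one.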
It seems I'm uniquely positioned, being in both the online video and the video game worlds, so I may have the best view possible of the entire range of possibilities of Google's patent. Perhaps, just this once, I'll keep some of my thoughts to myself and use them to my own benefit…
1994.12.15 05:34 "TIFF File Naming Conventions/Standards?", by Clark Brady

How many different file naming schemes have you run into? If you're like me, every time you turn around you have to adopt/adapt to a new convention or a variation of a convention. Ever try to view multiple single-page images that should be viewed as one document? It can be impossible without the creator application or without opening each image individually. (Not cool!)

Since the naming convention tends to indicate sorted page order, it would be nice to standardize. However, since it's difficult to force standards, it would be better to identify the naming convention and page through as if a multi-page TIFF were used. The tendency to use (brain-dead, lowest common denominator) eight-dot-three DOS file names limits the possible conventions; however, there are still too many choices. Alternatives could include:

1) ########.tif
2) HHHHHHHH.tif
3) ZZZZZZZZ.tif
4) ########.###
5) HHHHHHHH.HHH
6) ZZZZZZZZ.ZZZ
7) batchnum.###
8) batchnum.HHH
9) batchnum.ZZZ

where
  # = decimal, 0-9
  H = hexadecimal, 0-F
  Z = alphadecimal?, 0-Z

Then there's the problem of grouping files together - normally taken care of by placing all related files into a single directory and then ordering the directories using yet another scheme.... Most often the information is stored in a proprietary database.

The following questions come to mind when considering this topic:

Q1) Do any standards/recommendations exist? These might be public or company specific.
Q2) What other examples exist for naming schemes?
Q3) What group or organization could help set and communicate standards? Would it make sense to propose a parallel document to the TIFF standard?

I would like to propose a discussion on TIFF file naming standards that would cover these and similar related questions. I'd be happy to consolidate responses if that makes sense.
I would also be happy to do some development on a common API that could help eliminate this type of problem. A framework might include functions for: number of pages, filename of next/prior page, scheme identity. The framework might also include a definition for a document description file (this would be to documents what tags are to TIFF...).

Brain Bender.... Ever want to move files from one location/directory to another... how can this be done? Brute force of course, but shouldn't an alternative exist? How about the case where a single page of a multi-file image (on WORM or CD) needs to be replaced? This is another area to investigate. Alternatives include:

- Complete file description - server, volume, directories
- Local file description - volume, directories
- Abstracted file description - pseudo-volume, directories

What other alternatives exist? Hope this is enough to start some discussions.... Thanks for reading this long message!

Clark Brady - Eli Lilly & Company - 317-277-1769
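As a rough illustration of the framework floated above (scheme identity plus a next-page function), here is a hypothetical Python sketch. The scheme names and matching rules are invented for illustration and cover only a few of the conventions listed; they are not taken from any standard:

```python
# Sketch of a "common API" for page-numbered image filenames:
# identify the naming scheme of a DOS 8.3 name, then compute the
# filename of the next page under that scheme.
import re

SCHEMES = {
    # scheme name: (regex capturing the page-counter part, counter base)
    "decimal":   (re.compile(r"^(\d{1,8})\.tif$", re.I), 10),    # ########.tif
    "hex":       (re.compile(r"^([0-9A-F]{1,8})\.tif$", re.I), 16),  # HHHHHHHH.tif
    "batch-ext": (re.compile(r"^(?:.{1,8})\.(\d{1,3})$"), 10),   # batchnum.###
}

def scheme_identity(filename):
    """Return the name of the first scheme that matches, else None."""
    for name, (pattern, _base) in SCHEMES.items():
        if pattern.match(filename):
            return name
    return None

def next_page(filename):
    """Filename of the next page under the identified scheme."""
    name = scheme_identity(filename)
    if name is None:
        raise ValueError("unrecognized naming scheme: " + filename)
    pattern, base = SCHEMES[name]
    m = pattern.match(filename)
    digits = m.group(1)
    # Increment the counter, preserving its zero-padded width.
    succ = format(int(digits, base) + 1, "X" if base == 16 else "d")
    succ = succ.zfill(len(digits))
    start, end = m.span(1)
    return filename[:start] + succ + filename[end:]
```

A fuller framework in this spirit would also need prior-page and page-count functions, and some policy for ambiguous names (a purely numeric name matches both the decimal and hex schemes; the sketch simply takes the first match).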
ClaimMaster provides a number of ways to customize the software and store/configure settings for various features. The majority of settings are accessible from the "Preferences" menu. To configure ClaimMaster's preferences, perform the following steps:

- In Word 2007 or later, from the ClaimMaster Ribbon, click on the Preferences, Extra Tools, Help menu, then the Preferences menu. The Preferences dialog will appear. From here, you can set various preferences for ClaimMaster.

Below are the available options in the Preferences dialog:

- Configures section headers for identifying document sections.
- Configures claim viewer settings.
- Here you can enable docking of the reporting/patent drafting windows and also specify their location when docked (e.g., right/left custom task pane in Word) or floating next to the Word document. If docking is disabled, reporting windows will float on top of the currently open Word document.
- Note: you can also disable resizing of form contents, which may be helpful to prevent wrong control sizes from getting selected when you have a high-DPI monitor with a large scaling factor set.
- If this option is checked, ClaimMaster will minimize Word during processing operations. This is often effective for improving ClaimMaster's processing speed on slower connections, usually when it is installed on a terminal server or Citrix.
- If this option is enabled (the default), boilerplate entries will be loaded and stored during the initial start of ClaimMaster. If it's disabled, ClaimMaster will not load boilerplate entries automatically, which helps in certain environments where users do not have sufficient permissions to modify and save the ClaimMaster template, which causes warnings when users quit Word.
- If enabled (the default), Microsoft equations will be linearized (if shown in "Professional" format) in the document during claim parsing to avoid parsing issues. They'll be turned back to "Professional" view if their format was changed for parsing.
- If enabled, when amendments are converted from Track Changes to regular underline/strike-through formatting, any deleted or inserted leading/trailing spaces for each selected word will also be included in the conversion.
- If enabled, ClaimMaster will detect claim #1 in the claim set when it's placed out of order with the other claims in the set (e.g., at the end of the claim set). By default, detection of such out-of-order claims is disabled to minimize false positives due to incorrect claim detection in more complex documents where some non-claim sections resemble claims.
- If the "Track usage" checkbox is enabled, ClaimMaster will keep track of which features are used. By clicking on "Statistics", you'll be able to review your usage statistics over time.
- Sets the font size for the reporting windows.

Patent Proofreading Settings

- Specifies the default format for the results when individual proofreading tools are executed. By default, reports are opened in the Word task pane, but you can also open them as stand-alone HTML/PDF/Word reports.
- If you want to share the HTML report with others, select the "Compact format" option for HTML reports, as shown below. The generated report will be compressed into a single HTML file that can be shared with others.
- Configures claim rules that are used for checking claims for errors.
- Also lets you configure quick keyword rules for claims.
- Also lets you configure quick keyword rules for the document.
- Configures default report settings.
- Configures part-checking rules.
- Configures antecedent basis checking preferences.
- If enabled, ClaimMaster will not report vague claim dependencies, such as "A device for performing all steps as specified in claim 1." Such a claim arguably could be interpreted either as an independent claim or as a claim dependent from claim 1.

Patent Drafting Settings

- Specifies patent drafting preferences (in the Pro+Drafting version).
- Specifies auto-Summary templates (in the Pro and Pro+Shells versions).
- If this option is checked, ClaimMaster will paste claim summaries and trees directly into the open Word document.
- Specifies OpenAI's GPT API settings (in the Pro+Drafting version).

Settings for Shells, Templates, and Biblio Data Automation

- Configures settings for Word shells.
- Configures Boilerplate settings.
- Launches the Shell creation wizard.
- Configures attorney/firm/custom replacement fields.
- Configures OA Summary settings.
- Configures Biblio Data settings.
- Specifies whether to open the Office Action browser/Shell generation tool in the Task Pane.
- Specifies whether to open Office Actions in PDF format inside Microsoft Word, which will include OCR for the PDF documents in the latest version of Word. If unchecked, PDFs will be OCRed using ClaimMaster's own OCR utility.
If you'd like to test out or learn more about how to use Giotto, you can use the Binder Tutorial that we have created. For more information on how this works and what you can do with Giotto, please go to this link. Binder utilizes JupyterHub, repo2docker, and BinderHub to create a Docker image built off of a GitHub repository. Simply click this button and a Docker image will be generated. Wait for your Docker image to be built (if you want to see how this is done, you can click Build logs to display the running script). Once the image has been created, you will be redirected to a Jupyter notebook landing page. From there, you can navigate into any of the pre-made notebook tutorials and test them out. Navigate into the Notebooks folder and run whichever notebook you like! Alternatively, navigate into any of the pre-made scripts for a quick look at how some of these steps come together. If you run into any errors or have any questions about how the functions and scripts work, feel free to raise an issue with this repository. If you want to save any of your progress to work on locally, you can download files from the Jupyter notebook directory by clicking the box next to the file name(s) to select them and clicking the Download button on the top left.

The first time you build this Binder, it may take a while. For some more information on why your session might be taking longer, please refer to this link. Below are some common messages that you might see when loading Binder. They are normal! Just give the Binder some more time to load:

Your session is taking longer than usual to start! Check the log messages below to see what is happening.
Launch attempt 1 failed, retrying...
Launch attempt 2 failed, retrying...

You will have access to 1-2 GB of RAM. If you go over 2 GB of RAM, the kernel may be restarted. Because we have set up this repo so that you can import pre-processing scripts, you'll be able to start with any notebook. If your kernel restarts, just launch the Binder again!

If you are inactive for 10 minutes, the session will shut down. Otherwise, you'll have up to 6 hours of usage or 1 CPU-hour for more intensive runs. Any changes that you make will not be saved (please do not attempt to push your work back to this repository). If you would like to save your progress, please refer to #6 in the Instructions section. Alternatively, if you would like to work entirely locally, you can fork and clone the repository.

About the Tutorial

This Binder is modeled after the code tutorials in the HOWTO section of the Giotto website. The goal was to go through the Giotto pipelines with both RNA expression and image (Visium) data. This Binder should provide a good overview for using Giotto to its fullest potential.

About the Data Used

If you want to do some more exploration with the data we used, you can find more information here. Be sure to visit the Giotto Binder Tutorial for more information.
Should I make up my own HTTP status codes? (a la Twitter 420: Enhance Your Calm)

I'm currently implementing an HTTP API, my first ever. I've been spending a lot of time looking at the Wikipedia page for HTTP status codes, because I'm determined to implement the right codes for the right situations. Listed on that page is a code with number 420, which is a custom code that Twitter used to use for rate limiting. There is already a code for rate limiting, though. It's 429. This led me to wonder why they would set a custom one when there is already a code for that use case. Is that just being cute? And if so, which circumstances would make it acceptable to return a different status code, and what problems, if any, may clients have with it? I read somewhere that Mozilla doesn't implement the joke 418: I'm a teapot response, which makes me think that clients choose which status codes they implement. If that's true, then I can imagine Twitter's funny little "enhance your calm" code being problematic. Unless I'm mistaken, and we can appropriate any code number to mean whatever we like, with only convention dictating that 404 means not found and 429 means take it easy.

The whole of the Internet is built on conventions. We call them RFCs. While nobody will come and arrest you if you violate an RFC, you do run the risk that your service will not interoperate with the rest of the world. And if that happens, you run the risk of your startup not getting any customers, your business getting bad press, your stockholders revolting, your being laid off permanently, etc. HTTP status codes have their own IANA registry, each one traceable back to the RFC (or in one case, I-D) that defined it. In the particular case of Twitter's strange 420 status code versus the standard 429 status code defined in RFC 6585, the most likely explanation is that the latter was only recently defined; the RFC dates to April 2012.

We see that Twitter only uses 420 in the now-deprecated version 1 of its API; the current API version 1.1 actually uses the 429 status code. So it's clear that Twitter needed a status code for this and defined their own; once a standard one was available, they switched to it. Best practice, of course, is to stick as closely to the standards as possible. When you read RFCs, you will almost always find words like "MUST" and "SHOULD"; these have specific meanings when you are building your application, which you can find in RFC 2119.

This question delves into the issue a bit. But the thing is, while you can technically create any status code you want, creating a status code outside the traditional scope of status code meanings only makes your API more obtuse and arcane to others. Unless that is the point, and the API you are creating is so utterly amazing that everyone will gladly change their coding to follow your lead, so what does it matter anyway, right? It boils down to this: any standard can be broken, but if you break it, what do you gain or lose by doing so? In general, in cases where you could do something different from what the standards prescribe, it is best to adhere to the standards unless there is a very strong and compelling reason to veer away from them. In the case of Twitter's 420: Enhance Your Calm, they created a response code that clearly speaks to a unique situation they faced: slowing down requests without denying service.
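To make the standards-first advice concrete, here is a minimal sketch (not from the answer above; the class name and fixed-window logic are illustrative) of a server-side check that answers rate-limited requests with the standard 429 status from RFC 6585 plus a Retry-After header, rather than a custom code like 420:

```python
import time


class RateLimiter:
    """Tiny fixed-window rate limiter: at most `limit` requests per `window` seconds."""

    def __init__(self, limit=5, window=60.0):
        self.limit = limit
        self.window = window
        self.calls = []  # timestamps of accepted requests

    def check(self, now=None):
        """Return (status_code, headers) for a request arriving at `now`."""
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window.
        self.calls = [t for t in self.calls if now - t < self.window]
        if len(self.calls) >= self.limit:
            retry_after = int(self.window - (now - self.calls[0])) + 1
            # 429 Too Many Requests (RFC 6585) with the standard Retry-After
            # header, instead of a non-standard code like Twitter's old 420.
            return 429, {"Retry-After": str(retry_after)}
        self.calls.append(now)
        return 200, {}


rl = RateLimiter(limit=2, window=10.0)
codes = [rl.check(now=t)[0] for t in (0.0, 1.0, 2.0, 12.0)]
print(codes)  # [200, 200, 429, 200] -- third request falls inside a full window
```

Any standards-aware HTTP client (or proxy) already knows what 429 and Retry-After mean, which is exactly the interoperability argument the answer makes.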
Rails: uniq vs. distinct

Can someone briefly explain the difference in use between the methods uniq and distinct? I've seen both used in similar contexts, but the difference isn't quite clear to me.

Rails relations act like arrays, so .uniq produces the same result as .distinct, but:
.distinct is a SQL query method
.uniq is an array method
Note: in Rails 5+, Relation#uniq is deprecated and it is recommended to use Relation#distinct instead. See http://edgeguides.rubyonrails.org/5_0_release_notes.html#active-record-deprecations
Hint: using .includes before calling .uniq/.distinct can slow down or speed up your app, because uniq won't spawn an additional SQL query while distinct will. But both results will be the same. Example:
users = User.includes(:posts)
puts users # First sql query for includes
users.uniq # No sql query! (here you speed up your app)
users.distinct # Second distinct sql query! (here you slow down your app)
This can be useful for building a performant application. Hint: the same applies to .size vs .count, .present? vs .exists?, and map vs pluck.

Thanks for this answer. I have a question: doesn't it depend on the number of rows returned by the db? If you have a massive number of results vs a small number of rows, then the db might perform better than Ruby?

@GLaDOS yes, a Rails app is like a building: on the first floor we have SQL, on the second and others we have business logic, and on the last floor we have a view for users. 1) If the user doesn't need some data, we should not lift that data from the first floor to the last, meaning we shouldn't fetch it on the SQL layer only to discard it on the Rails one (so use the .limit(25) method instead of .first(25)). 2) Also, if we have missed any data from the first floor, it is not efficient to run back down to grab additional data from SQL; in that case use .includes(:comments). Etc.

Rails 5.1 has removed the uniq method from ActiveRecord::Relation in favor of the distinct method. If you use uniq on a query, it will just convert the ActiveRecord::Relation to the Array class. You cannot continue the query chain after adding uniq (i.e., you cannot do User.active.uniq.subscribed; it will throw the error undefined method 'subscribed' for Array). If your DB is large and you want to fetch only the required distinct entries, it is better to use the distinct method on the ActiveRecord::Relation query.

Thank you for this answer. Documentation, please?

If you want to see the PR or discussion in Rails about it: https://github.com/rails/rails/pull/20198

From the documentation: uniq(value = true) is an alias for ActiveRecord::QueryMethods#distinct.

apidock.com isn't Rails documentation, and it's no longer maintained. A better source is https://api.rubyonrails.org

It doesn't exactly answer your question, but what I know is: if we consider the ActiveRecord context, then uniq is just an alias for distinct, and both work by removing duplicates from the query result set (up to one level, you could say). In the Array context, uniq is powerful enough to remove duplicates even when the elements are nested. For example, with arr = [["first"], ["second"], ["first"]], arr.uniq gives [["first"], ["second"]]. So even if the elements are arrays, it will go deep and remove duplicates. Hope this helps in some way.

One additional difference to note: .distinct returns an ActiveRecord_Relation; .uniq returns an Array.
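The same SQL-versus-in-memory trade-off exists outside Rails. As a rough sketch using Python's stdlib sqlite3 (a stand-in for ActiveRecord, not Rails itself), SELECT DISTINCT deduplicates in the database like .distinct, while deduplicating a fully fetched list mirrors .uniq on an already-materialized Array:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)",
                 [("alice",), ("bob",), ("alice",)])

# Database-side dedup: analogous to Rails' .distinct -- the SQL engine does
# the work, and only unique rows are returned to the client.
db_side = [row[0] for row in conn.execute("SELECT DISTINCT name FROM users")]

# Client-side dedup: analogous to .uniq on an Array -- every row is fetched
# first, then duplicates are removed in memory.
all_rows = [row[0] for row in conn.execute("SELECT name FROM users")]
client_side = list(dict.fromkeys(all_rows))  # preserves first-seen order

print(sorted(db_side))   # ['alice', 'bob']
print(client_side)       # ['alice', 'bob']
```

Which side wins depends on row counts, exactly as the comment above suggests: with millions of rows, shipping them all to the client just to deduplicate is the expensive path.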
#!/usr/bin/env python
# -*- coding: utf-8 -*-
'''Instantaneous event coding'''

import numpy as np

from .base import BaseTaskTransformer

__all__ = ['BeatTransformer']


class BeatTransformer(BaseTaskTransformer):
    '''Task transformation for beat tracking

    Attributes
    ----------
    name : str
        The name of this transformer

    sr : number > 0
        The audio sampling rate

    hop_length : int > 0
        The hop length for annotation frames
    '''
    def __init__(self, name='beat', sr=22050, hop_length=512):
        super(BeatTransformer, self).__init__(name=name,
                                              namespace='beat',
                                              sr=sr,
                                              hop_length=hop_length)

        # Use the builtin `bool` here: `np.bool` is a deprecated alias for
        # the builtin type and was removed in NumPy >= 1.24.
        self.register('beat', [None], bool)
        self.register('downbeat', [None], bool)
        self.register('mask_downbeat', [1], bool)

    def transform_annotation(self, ann, duration):
        '''Apply the beat transformer

        Parameters
        ----------
        ann : jams.Annotation
            The input annotation

        duration : number > 0
            The duration of the audio

        Returns
        -------
        data : dict
            data['beat'] : np.ndarray, shape=(n, 1)
                Binary indicator of beat/non-beat
            data['downbeat'] : np.ndarray, shape=(n, 1)
                Binary indicator of downbeat/non-downbeat
            mask_downbeat : bool
                True if downbeat annotations are present
        '''
        mask_downbeat = False

        intervals, values = ann.data.to_interval_values()
        values = np.asarray(values)

        # Beat events are the interval onsets; every beat gets label 1
        beat_events = intervals[:, 0]
        beat_labels = np.ones((len(beat_events), 1))

        # Downbeats are the beats annotated with metrical position 1
        idx = (values == 1)
        if np.any(idx):
            downbeat_events = beat_events[idx]
            downbeat_labels = np.ones((len(downbeat_events), 1))
            mask_downbeat = True
        else:
            downbeat_events = np.zeros(0)
            downbeat_labels = np.zeros((0, 1))

        target_beat = self.encode_events(duration, beat_events, beat_labels)
        target_downbeat = self.encode_events(duration,
                                             downbeat_events,
                                             downbeat_labels)

        return {'beat': target_beat,
                'downbeat': target_downbeat,
                'mask_downbeat': mask_downbeat}
© 2008 Daniel Collins © 2008 Philip Kent

kexec-loader is a boot loader which loads a Linux kernel, then displays a GRUB-like menu so you can select a kernel to boot using kexec. It is designed for systems where the BIOS does not support booting from some devices (e.g. your kernel is on a USB memory key which the BIOS does not support). If you have downloaded one of the pre-built disk images, you do not need to read the rest of this section, and you can skip to Section 2. A pre-made disk image is probably more suitable for the end user in most cases, as the provided image supports IDE, SATA and USB and is pre-compiled. You can obtain it from the project's webpage.

A summary of the new features in this release of kexec-loader is below. For a more detailed changelog, please see the ChangeLog file.

To build kexec-loader, you require the following tools. kexec-tools is included and built as part of the build process. In order to build kexec-loader, you first need a Linux kernel with support for the hardware you want to boot from. This kernel will need to fit onto the device that will be used to boot, with sufficient space left for other files that will need to be on the device (such as kexec-loader itself and the boot loader). Once you have your kernel, simply run "make" from the root of the kexec-loader source to build the kexec-loader binary. This will include downloading and building kexec-tools. It is recommended that you build a uclibc toolchain prior to this and build kexec-loader with it, as this will make the resulting file smaller. To use a uclibc toolchain, do "HOST=i386-linux-uclibc make" instead of make. Then run mkinitramfs.sh from the root directory of the distribution to build an initramfs containing kexec-loader. Once you have a kernel and initramfs, you can then build a disk image. There are two ways to assemble the disk, which are below. Regardless of the method you choose, you will need a bootloader such as syslinux.

Build your image using your favourite method (such as using dd to produce an empty floppy image, formatting it, then loop-mounting it), put the kernel and initramfs (if any) on the disk along with any boot loader configuration, install the boot loader, and your disk is prepared.

kexec-loader uses a GRUB-like configuration file to find what menu options it needs to show. If you have downloaded one of the pre-made disk images, a sample configuration is stored on the disk that might be able to boot a Debian system. kexec-loader will attempt to find a configuration in the following places, in order, unless you choose to use grub (see below): if you specify a device in a variable called kexec_config on the boot line (e.g. kexec_config=/dev/hda), then the named device will be put at the top of the list. If loading from that one fails, it will then go through the list. The configuration must be called kexec-loader.conf and stored in the root of the filesystem, and it must contain at least one section starting "title" OR have a grub_root to load a grub config file, or else it will have no kernels to load. You can put whitespace between the directives and their variables. Empty lines and lines beginning with a hash (#) are ignored.

As of kexec-loader 1.3, reading a grub configuration file is supported. The entries in the grub configuration file will be placed below all other entries. To use grub, use the relevant directives. The directives are as follows: for kernel mykernel.bin foo=bar in GRUB, you would put the equivalent kexec-loader directive. Where parameters are shown in <angled brackets>, they are required for that directive. Ones in [square brackets] are optional. For mount and rootfs, if you do specify a fstype, it should be specified as ext2:/dev/hda1, for example. Normally kexec-loader can detect the filesystem type; it supports autodetection for the following types. Specifying no filesystem or "auto" will result in auto detection.
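Putting the directives above together, a kexec-loader.conf might look like the following. This is a hypothetical sketch: the kernel path, device name, and cmdline arguments are examples only, and it uses only directives named in this document (title, rootfs, kernel):

```
# Hypothetical kexec-loader.conf -- device names and paths are examples only.
# At least one "title" section is required.

title Debian GNU/Linux
rootfs ext2:/dev/hda1
kernel /vmlinuz root=/dev/hda1 ro
```

Per the rules above, the fstype prefix on rootfs (ext2:) is optional; leaving it off, or using "auto", makes kexec-loader autodetect the filesystem.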
For manual specification, enter the name of any filesystem that your kernel supports.

To navigate the kexec-loader menu, use your arrow keys to select an item and press enter to boot it. A number of functions are available; press L to list the detected devices and their filesystems and R to re-read the configuration file. Press enter to leave the device screen or an error screen. After asking a kernel to boot, it may take a few seconds for kexec-loader to load the kernel and switch to it. After switching, your new kernel will start and the boot process will happen as it would normally.

kexec-loader has a built-in shell. To access it, press "s" on the menu screen. This shell lets you set up a kernel to boot once kexec-loader has started. It therefore has the same directives as mentioned above, except "default" and "title". There are two further directives used to control kexec-loader, rather than set up a kernel to boot. They are: To scroll through the command history, use the up and down keys. A maximum of 32 commands are remembered (upon exceeding this, the oldest command will be removed from the history to make space). Commands that are longer than the screen can hold will scroll, similar to the 'nano' text editor. Please note that this shell will be improved in subsequent releases.

Once kexec-loader has started, all debugging messages, as well as Linux kernel messages, are sent to a separate debug console. By default the debug console is /dev/tty2; this can be changed at boot via the Linux cmdline. For example, kexec_debug=/dev/ttyS0 writes all debug messages and kernel messages to the first serial port.

For information on kexec-loader, including current development and latest releases, please see its website at http://www.solemnwarning.net. A mailing list is available for kexec-loader; subscription information and archives can be found at http://www.solemnwarning.net/kexec-loader/list.
The developer of kexec-loader, Daniel Collins (aka solemnwarning), can often be found on other discussion sites, such as the "Microsuck" forums and on freenode IRC. To contact solemnwarning, you can use the following e-mail address: email@example.com.
Guitar chords. Tempo (BPM): ♩ = 93 beats per minute

Intro: H | F#sus4 | G#m | Eadd9 } ×2 H I walked through the door with you, F#sus4 The air was cold. G#m But something ‘bout it felt Eadd9 Like home somehow. H And I left my scarf there F#sus4 At your sister’s house, G#m And you’ve still got it E In your drawer even now. Instrumental: H | F#sus4 | G#m | Eadd9 H Oh, your sweet disposition F#sus4 And my wide eyed gaze, G#m We’re singing in the car, E Getting lost upstate. H Autumn leaves falling down F#sus4 Like pieces into place G#m And I can picture it E After all these days. H And I know it’s long gone, F#sus4 And that magic’s not here no more. G#m And I might be okay, F#sus4 E F# E/G# E But I’m not fine at all. Chorus: H ‘Cause there we are again F#add11 On that little town street. G#m You almost ran the red Eadd9 ‘Cause you were looking over at me. H Wind in my hair, I was there, F#add11 I remember it G#m E E5- E All too well. H Photo album on the counter, F#sus4 Your cheeks were turning red. G#m You used to be a little kid with glasses Eadd9 In a twin size bed. H And your mother’s telling stories F#sus4 ‘Bout you on the tee-ball team. G#m You tell me ‘bout your past, Eadd9 Thinking your future was me. H And I know it’s long gone F#add11 And there was nothing else I could do. G#m And I forget about you long enough F# To forget why E F#7 E/G# E/H I needed to. Chorus: H ‘Cause there we are again F#add11 In the middle of the night. G#m We’re dancing ‘round the kitchen Eadd9 In the refrigerator light. H Down the stairs, I was there, F#add11 I remember it G#m E E5- E All too well. H Yeah! Instrumental: H | F#add11 | G#m | Eadd9 F#7 | F#7 H And maybe we got lost in translation, F#add11 Maybe I asked for too much. G#m But maybe this thing was a masterpiece Eadd9 ‘Til you tore it all up. H Running scared, I was there, F#add11 G#m Eadd9 I remember it all too well. H Hey, you call me up again F#add11 Just to break me like a promise.
G#m So casually cruel Eadd9 In the name of being honest. H I’m a crumpled up F#add11 Piece of paper lying here G#m ‘Cause I remember it all, Eadd9 H All, all too well. Instrumental: H | F#sus4 | G#m | Eadd9 H Time won’t fly, F#sus4 It’s like I’m paralyzed by it. G#m I’d like to be my old self again, Eadd9 But I’m still trying to find it H After plaid shirt days and nights F#sus4 When you made me your own. G#m Now you mail back my things, Eadd9 And I walk home alone. H But you keep my old scarf F#sus4 From that very first week, G#m ‘Cause it reminds you of innocence Eadd9 And smells like me. H You can’t get rid of it F#add11 ‘Cause you remember it G#m Eadd9 All too well. Yeah. Chorus: H ‘Cause there we are again F#add11 When I loved you so, G#m Back before you lost Eadd9 The one real thing You’ve ever known. H It was rare, I was there, F#add11 I remember it G#m Eadd9 All too well. H Wind in my hair, F#add11 You were there, you remember it all. G#m Down the stairs, you were there, Eadd9 You remember it all. H It was rare, I was there, F#sus4 I remember it F# G#m E6 Emaj9 E All too well.
How to make my JFrame components dynamically scale when resizing the window, without hard-coding position and size values? I'm learning Java through university and I've been taught the basics of making a Java program and designing GUIs. Maximizing a window after running my program makes all the JFrame components stay in place while grey fills the rest of the space. Here's an example of how it looks: JFrame window normally, Maximized window before "fix". After failing to find a solution I came up with a band-aid solution, which is to get the component locations and just move them with hard-coded values when the JFrame is maximized. This was not an elegant solution, and every JFrame in my Java course project kept growing in its number of on-screen elements. Is there any piece of code to make my components move and resize automatically and dynamically? Here's what I've tried so far. First I obtained the positions of components through 2D points: Point managementLoginBtnLocation, empLogLocation, logoLocation, customerBtnLocation, welcomeLblLocation, contactBtnLocation, aboutBtnLocation, mainMenuBtnLocation; //Constructor and rest of code... public final void getOriginalComponentLocations() { managementLoginBtnLocation = managementLoginBtn.getLocation(); empLogLocation = empLoginBtn.getLocation(); logoLocation = shopLogo.getLocation(); customerBtnLocation = customerBtn.getLocation(); welcomeLblLocation = welcomeLbl.getLocation(); contactBtnLocation = contactBtn.getLocation(); aboutBtnLocation = aboutBtn.getLocation(); mainMenuBtnLocation = mainMenuBtn.getLocation(); } //This method is called within the constructor. I implemented the ComponentListener interface and added a component listener to my JFrame. Then I made it so that when the JFrame's size changes, it changes the size of the JLabel used for background art. And if the label's width is greater than 800 (the default I used while designing), it moves the components and doubles their size and font size.
When the jframe is minimized the label will go back to the default size so I made a method to revert the font sizes, because I found the component sizes and locations reset automatically. public void componentResized(ComponentEvent e) { //Resizing the background label and setting its icon to a resized version of its current icon. backgroundMainArt.setSize(this.getWidth() - 16, this.getHeight() - 21); ImageIcon icon = new ImageIcon("C:\\Program Files\\OMOClothingStore\\Resources\\Main menu\\main menu background art.jpg"); Image img = icon.getImage(); Image newImage = img.getScaledInstance(backgroundMainArt.getWidth(), backgroundMainArt.getHeight(), Image.SCALE_FAST); icon = new ImageIcon(newImage); backgroundMainArt.setIcon(icon); if(backgroundMainArt.getWidth() > 800) //When the size of the label is greater than default { //I move the components, enlarge the buttons and zoom the font size moveComponents(); enlargeBtns(); zoomBtnsFontSize(); } else //When the label is back to its original size { //I revert the font sizes as button sizes and positions reset automatically revertBtnsFontSize(); setLogoIconAndBackgroundArtAndWelcomeLbl(); } } public void moveComponents() { moveLogo(); moveManagementLoginBtn(); moveEmployeeLoginBtn(); moveCustomerBtn(); moveWelcomeLbl(); moveContactInfoBtn(); moveAboutBtn(); moveMainMenuBtn(); } public void moveLogo() { ImageIcon logoIcon = new ImageIcon("C:\\Program Files\\OMOClothingStore\\Resources\\Shared resources\\OMO Clothing Store logo.png"); Image logoImg = logoIcon.getImage(); Image newLogoImage = logoImg.getScaledInstance(250, 250, Image.SCALE_DEFAULT); logoIcon = new ImageIcon(newLogoImage); shopLogo.setIcon(logoIcon); Point newLogoLocation = new Point(); newLogoLocation.x = (logoLocation.x * 2) + 200; newLogoLocation.y = (logoLocation.y * 2) + 30; shopLogo.setLocation(newLogoLocation); } //The rest of the "moveX" methods follow the same pattern as moveLogo() public void enlargeBtns() { managementLoginBtn.setSize(410, 94); 
empLoginBtn.setSize(410, 94); customerBtn.setSize(410, 94); } public void zoomBtnsFontSize() { customerBtn.setFont(sizeBtn.getFont()); //sizeBtn is a JButton that has a font size of 24. I found that just creating a new Font object with bigger size here made the font way larger for some reason. empLoginBtn.setFont(sizeBtn.getFont()); managementLoginBtn.setFont(sizeBtn.getFont()); } public void revertBtnsFontSize() { empLoginBtn.setFont(new Font("Segoe UI", Font.PLAIN, 14)); managementLoginBtn.setFont(new Font("Segoe UI", Font.PLAIN, 14)); customerBtn.setFont(new Font("Segoe UI", Font.PLAIN, 14)); } I split the moving of the components into many methods inside other methods because I found it easier to keep up with. This worked. Here's how it looks when running the JFrame: Maximized window after "fix". But moving on to other JFrames, they are more intricate and have many more components - extra buttons, panels with other components in them, menu bars, etc. Is there a better approach to fixing this? Or do I just remove the ability to resize and move on? Make use of multiple/compound layout managers @MadProgrammer can you point me to a simple guide about layout managers? We didn't learn that in uni and I got really confused when looking it up myself Laying Out Components Within a Container
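Layout managers are the right Swing answer, but the idea underneath them is just proportional arithmetic. As a language-neutral illustration in Python (a hypothetical helper, not part of any Swing or AWT API), this is roughly what a layout computes for each component from the current window size, instead of relying on hard-coded positions:

```python
def scale_bounds(design_bounds, design_size, window_size):
    """Scale a component's (x, y, w, h), originally laid out for a window of
    `design_size`, proportionally to the current `window_size` -- the kind of
    arithmetic a layout manager performs on every resize."""
    sx = window_size[0] / design_size[0]
    sy = window_size[1] / design_size[1]
    x, y, w, h = design_bounds
    return (round(x * sx), round(y * sy), round(w * sx), round(h * sy))


# A button designed at (100, 50, 200, 40) in an 800x600 window doubles
# cleanly when the window doubles:
print(scale_bounds((100, 50, 200, 40), (800, 600), (1600, 1200)))
# (200, 100, 400, 80)
```

Letting a layout manager (GridBagLayout, BorderLayout, nested panels) own this computation is what removes the per-component `moveX()`/`enlargeBtns()` methods from the question.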
Java vs. C++

Today, the majority of families own a home computer vastly more powerful than the giant mainframes of years gone by, and computer hardware keeps evolving rapidly with no end in sight.

How much easier is Java compared to C++? Saying that one language is easier than another is not really correct; it depends on preference and personal ability. Some students who take both Java and C/C++ classes find Java harder at times, others C++.

A detailed comparison of the techniques used in Java and C++ to implement leak-free and exception-safe resource management covers memory management, finalizers, destructors, and finally blocks, with examples.

C programming vs. Java programming: C is function-oriented while Java is object-oriented; C's basic programming unit is the function, while Java's is the class (an abstract data type). Source-code portability is possible in C with discipline and is built into Java. Compiled C code is not portable (you recompile for each architecture), whereas Java bytecode is "write once, run anywhere". C's security model is limited compared to Java's.

Design aims: the differences between the programming languages C++ and Java can be traced to their heritage, as they have different design goals. Commonly compared aspects include language complexity, ease of use, code readability, need for rules, code optimizability, reverse engineering, safety, diagnostics, the sandbox security model, metadata, design considerations, and portability.

The language of Java: Java, in its simplest definition, is a dynamic computer platform that can run a program to accomplish a task, and it runs in all sorts of things in the average person's life. Java and .NET overlap in many markets, and each will inevitably form definitive niches that will be hard to break until newer model-based platforms appear. Architecture is the main ingredient in building a successful system.
Apache Camel: do not trigger route if previous route run is not complete I have such a situation: Apache Camel route is triggered by timer route executes massive lengthy task and it is possible for timer to trigger route again while previous run is still underway. I would like my route NOT to be re-triggered while massive task is underway. That is, timer may issue event but it should somehow not lead to trigger route. When massive task is finished, it should be possible for timer to start route again. What would be the best way to achieve this behaviour? What do you want to happen if the route is still running when the timer fires? Should the timer firing be ignored, so that the route will run on the next timer firing, or should the route wait for the previous invocation to finish and then run immediately after? @JimNicholson I would like to have ignored Well, my first reflex would be to use the timer's period option without the fixedRate option (i.e. set the fixedRate option to false): So, declaring: from("timer:myTask?[other_options]&fixedRate=false") .to("direct:lengthyProcessingRoute") should wait for the task to complete before triggering the timer again. For instance, declaring a route like (fixedRate is false by default): from("timer:sender?delay=5s&period=3s") .log("Ping!") .delay(5000) .log("Ping2!"); will always give the output of: 2016-08-26 12:36:48.130 INFO 5775 --- [ timer://sender] route1 : Ping! 2016-08-26 12:36:53.133 INFO 5775 --- [ timer://sender] route1 : Ping2! 2016-08-26 12:36:53.135 INFO 5775 --- [ timer://sender] route1 : Ping! 2016-08-26 12:36:58.138 INFO 5775 --- [ timer://sender] route1 : Ping2! However, this will only work if your lengthy processing route is synchronous in nature. If it's not, then you would have to do something similar to what JimNicholson is suggesting in his answer. What do you mean by synchronous in nature? 
You mean if I had something like:

    from("timer:sender?period=3s&fixedRate=false")
        .process(processor)
        .split(body())
        .parallelProcessing()
        .to("direct:to-route")
        .aggregate(aggregator)
        .end();

it won't wait for the whole route to aggregate before firing the next timer?

@monicamillad I think your route is still synchronous due to the aggregate (although the end may need to be moved to before the aggregate step). But if you had a processor that just scheduled the lengthy task on a background thread without waiting for completion, then the route would finish processing the current trigger, and the next one could fire before the background processing was done.

This is a DSL version that has worked for me:

    private static final AtomicBoolean readyToProcess = new AtomicBoolean(true);

    public static boolean readyToProcess() {
        boolean readyToProcess = AlarmRouteBuilder.readyToProcess.get();
        if (readyToProcess) {
            AlarmRouteBuilder.readyToProcess.set(false);
        }
        return readyToProcess;
    }

    @Override
    public void configure() throws Exception {
        from("timer://alarm-poll?period=5s").routeId("alarm-poll")
            .log(LoggingLevel.INFO, "Start Timer")
            .filter(method(AlarmRouteBuilder.class, "readyToProcess"))
                .to("direct:alarm-dostuff")
            .end()
            .log(LoggingLevel.INFO, "End Timer");

        from("direct:alarm-dostuff").routeId("alarm-dostuff")
            // .process(exchange -> readyToProcess.set(false))
            .process(exchange -> doStuff())
            .process(exchange -> readyToProcess.set(true));
    }

I would create a bean to hold the running/finished state of the route, with methods to set the state and a method to test it. Then I would do something like this:

    <route>
        <from uri="timer:..."/>
        <filter>
            <method ref="routeStateBean" method="isStopped"/>
            <to uri="bean:routeStateBean?method=routeStarted"/>
            ....
            <to uri="bean:routeStateBean?method=routeStopped"/>
        </filter>
    </route>

Could you please provide Java code for the above XML?
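The readyToProcess() guard in the DSL answer above reads the flag and then sets it in two separate steps, which leaves a narrow window in which two timer firings could both pass the filter. Here is a minimal sketch of the same guard with the check-and-set made atomic (plain Java with Camel stripped away; RouteGuard, tryAcquire, and release are hypothetical names):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of the guard-flag pattern from the answer above, with the
// check-then-set collapsed into a single atomic compareAndSet so two
// concurrent timer firings can never both pass the filter.
public class RouteGuard {
    private final AtomicBoolean ready = new AtomicBoolean(true);

    // Returns true for the first caller only; subsequent callers get
    // false until release() is invoked.
    public boolean tryAcquire() {
        return ready.compareAndSet(true, false);
    }

    // Re-arms the guard once the lengthy task has finished.
    public void release() {
        ready.set(true);
    }
}
```

In a Camel route, tryAcquire() would back the filter(...) predicate and release() would be called at the end of the processing route, ideally from an onCompletion block so the flag is re-armed even when the route fails.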
STACK_EXCHANGE
I'm working in Dynamics AX 2009. I have a project to display the number of days a vendor is late in supplying goods. The PurchTable form contains most of the fields I require, so I created my form by duplicating the PurchTable form. I need to move the StringEdit control 'PurchLine_ItemId', located on the PurchTable form in [Group:Line], [Tab:TabLine], [TabPage:TabLineOverview], [Grid:LineSpec] (the 2nd grid), to the main TabHeaderOverview grid at the top of the form. I also want to move the 'Delivery date' and 'Confirmed' date fields from the form's 'DeliveryGroup' to the top-most Overview grid. However, when I move the 'PurchLine_ItemId' field, which appears under the label "Item number", into the Overview grid, it is no longer populated with the item number. I'm fairly certain the same thing will happen when I attempt to move the two date fields. Can someone please suggest what steps I need to take so that the 'PurchLine_ItemId' field populates as it does on the PurchTable form in [Grid:LineSpec]? Do I need to write a display method, and why does the ItemId field suddenly stop populating? Any suggestions for this control, as well as for moving the two date controls, would be much appreciated. Thank you in advance.

First question: what happens when you have several lines, each with a different delivery date? Second question: why do you need to move the ItemId field? This field belongs to the line, not to the header.

Hello Harish, before I duplicated the PurchTable form I created a simple form containing the following fields: 'Vendor account', 'Vendor Name', 'Purchase order', 'Item number', 'Total Price of Quantity Received', 'Delivery date', 'Confirmed date', and 'Received Date'. I will need to add one more field to display the number of "Days Late", which will reflect the difference InventTrans.DatePhysical - PurchLine.DeliveryDate.
The field 'Total Price of Qty Received' is a display method, where I attempt to acquire the 'Qty' from InventTrans where InventTrans.ItemId equals PurchLine.ItemId and InventTrans.TransType equals the enum value 3 (Purch). Then I pass this quantity to PurchLine.calcLineAmount:

    display AmountCur total_Price_Qty(PurchLine _purchLine)
    {
        InventTrans inventTrans;
        Qty qty;

        select Qty from inventTrans
            where inventTrans.ItemId    == _purchLine.ItemId
               && inventTrans.TransType == InventTransType::Purch; // enum value 3
        qty = inventTrans.Qty;

        return _purchLine.calcLineAmount(qty);
    }

I would be satisfied using my created form if I knew how, and which tables, to query in order to populate these fields as the PurchTable form has them populated. How can I determine which tables to query? Are there existing queries in the AOT which the PurchTable form uses to populate its fields?
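For the planned "Days Late" field, one option is a display method on the PurchLine data source. Below is a hypothetical X++ sketch; the join via PurchLine.InventTransId and the use of DatePhysical as the receipt date are assumptions based on the question, not verified against a live AX 2009 AOT:

```x++
// Hypothetical sketch: days a purchase line was received late.
// Assumes PurchLine.InventTransId matches InventTrans.InventTransId
// and that DatePhysical holds the physical receipt date.
display Integer daysLate(PurchLine _purchLine)
{
    InventTrans inventTrans;

    select firstOnly DatePhysical from inventTrans
        where inventTrans.InventTransId == _purchLine.InventTransId;

    if (!inventTrans.DatePhysical)
        return 0;   // not yet received

    // Date subtraction in X++ yields a number of days.
    return inventTrans.DatePhysical - _purchLine.DeliveryDate;
}
```

A display method like this is evaluated per record by the form, which also explains why a bound control such as PurchLine_ItemId goes blank when moved to a grid tied to a different data source: the control only populates when its grid's data source actually carries that field.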
OPCFW_CODE
Performance problem with Ubuntu file server

I'm hoping someone can point to a bottleneck for me. See the attached diagram. The Ubuntu server is attached to a bridge, remote from the rest of the network. Performance all over the network is good, but when I download files from the server to the MacBook, I get 2MB/s (megabytes/s). That makes moving large files unworkable. I've been looking at where the slowdown could occur.

It's not the disk (a RAID 5 array built with mdadm) on the server. I ran some I/O tests to check reading and writing from it and got very respectable scores. First, writing to /data (which is on the RAID array):

    paul@server:/data/tmp$ dd if=/dev/zero of=testfile bs=8k count=1000000 ; sync
    1000000+0 records in
    1000000+0 records out
    8192000000 bytes (8.2 GB, 7.6 GiB) copied, 48.6398 s, 168 MB/s

Now reading from a file on the RAID array:

    paul@server:/data/tmp$ dd if=/data/tmp/anothertestfile of=/dev/null
    21653847+1 records in
    21653847+1 records out
    11086770047 bytes (11 GB, 10 GiB) copied, 55.1461 s, 201 MB/s

So it's not the disk. What about the network? Well, here is the speedtest from the server to the internet:

    paul@server:/data/tmp$ speedtest-cli
    Retrieving speedtest.net configuration...
    Retrieving speedtest.net server list...
    Testing from Vocus New Zealand (<IP_ADDRESS>)...
    Selecting best server based on latency...
    Hosted by Vocusgroup NZ (Auckland) [1.37 km]: 5.67 ms
    Testing download speed........................................
    Download: 202.89 Mbit/s
    Testing upload speed..................................................
    Upload: 172.08 Mbit/s

Having my server at the end of a WiFi connection getting 200Mbit/s is good enough for my purposes. The Mac gets 1Gbit/s to the internet because it's wired.

I have tested downloading files from the server to the Mac using HTTPS, SFTP (file transfer over SSH), and SMB. All give the same result: about 2MB per second. I have also tested an Apple TV which is wired to the firewall; it gets 2MB/s from the server.
Weirdly, when I connected into my home network from outside using WireGuard and downloaded a large file directly from the server, I got 35MB/s, which is pretty good. Also, when I upload large files to the server from the MacBook, I typically see much faster results: 20-30MB/s. So writing is much faster than reading. Any ideas where the slowdown could occur? Thx. Paul

[EDIT: adding some more stats]

During a download from the server, running at about 2MB/s, here are some perf metrics from the server. Bottom line: the server is very lightly loaded, but still serving files slowly.

IFTOP (2s / 10s / 40s averages):

    server.local => MacBook-Pro.local    13.6Mb  14.5Mb  16.2Mb
                 <=                       660Kb   653Kb   738Kb

TOP:

    top - 08:57:41 up 9 days, 14:17, 2 users, load average: 0.16, 0.12, 0.09
    Tasks: 208 total, 1 running, 156 sleeping, 0 stopped, 1 zombie
    %Cpu(s): 0.4 us, 0.5 sy, 0.0 ni, 97.7 id, 1.1 wa, 0.0 hi, 0.3 si, 0.0 st
    KiB Mem :  1841464 total,    82652 free,   550804 used,  1208008 buff/cache
    KiB Swap:  2097148 total,  1373180 free,   723968 used.  1023428 avail Mem

      PID USER   PR NI    VIRT    RES   SHR S %CPU %MEM    TIME+ COMMAND
     1089 avahi  20  0   48412   2644  1216 S  1.0  0.1  40:45.24 avahi-daemon
    25952 paul   20  0 1641140  20908 16224 S  1.0  1.1   0:11.53 smbd
       22 root   20  0       0      0     0 S  0.3  0.0   2:55.45 ksoftirqd/2
      533 root   20  0       0      0     0 S  0.3  0.0  10:45.52 rc0
     1464 mysql  20  0 2685192 207700  4976 S  0.3 11.3 478:43.85 mysqld

Did you check whether the firmware on your router and bridge is the latest version? Did you check whether the network card on your Ubuntu server is set to 1000Mb full duplex, or do you use WLAN as well? Is the download speed from the Ubuntu server any different when the Mac is connected by network cable instead of WLAN?

Good questions! Router and bridge are fully up to date (both are Netgear equipment). The Ubuntu server is at 1000Mb. "ethtool eth0" reports "Duplex: Full" and "Speed: 1000Mb/s". There doesn't seem to be a difference between WLAN and LAN for downloads from the server. On WiFi vs LAN for downloads from the server: I have tried this with a Mac and an Apple TV. All four combinations give the same slow results.
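Since HTTPS, SFTP, and SMB all show the same 2MB/s, a useful next step is a raw TCP throughput test that bypasses the file-sharing protocols entirely. A hypothetical sketch with iperf3 (server.local stands in for the server's actual name), plus a small helper showing that the 16.2Mb figure iftop reports matches the observed 2MB/s:

```shell
# Raw TCP test, bypassing SMB/SFTP entirely (hypothetical sketch).
# On the server:   iperf3 -s
# On the MacBook:  iperf3 -c server.local        # client -> server (upload)
#                  iperf3 -c server.local -R     # server -> client (download)
# If the -R run also reports ~16 Mbit/s, the bottleneck is the network
# path in that direction, not the file-sharing protocol.

# Helper: convert a bits-per-second figure to MB/s for comparison.
bits_to_MBps() {
    awk -v b="$1" 'BEGIN { printf "%.1f", b / 8 / 1000000 }'
}

bits_to_MBps 16200000   # the 16.2Mb iftop shows is about 2.0 MB/s
```

Comparing the plain and -R runs would also confirm the asymmetry already observed (uploads at 20-30MB/s vs downloads at 2MB/s).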
STACK_EXCHANGE