import inspect
import mimetypes
import os
import traceback

from whodat.handler import *
from whodat.http import *


class WSGIApplication:
    """WSGI application interface."""

    def __init__(self, debug, controllers=None, error_handler=None,
                 extensions=None, static_url=None, static_dir=None):
        """Set attributes, inspect controllers to find Handlers and
        initialize extensions."""
        self._debug = debug
        self._error_handler = error_handler() if error_handler else ErrorHandler()
        self._extensions = extensions or []
        self._static_url = static_url
        self._static_dir = static_dir
        self._routes = {}
        for controller in controllers or []:
            for name, obj in inspect.getmembers(controller):
                if isinstance(obj, Handler):
                    self.add_handler(obj)
        for extension in self._extensions:
            extension(self)

    def add_handler(self, handler):
        """Add a Handler to this application."""
        self._routes[handler._url_regex] = handler

    def handle_request(self, request):
        """Return an HTTPResponse, or redirect the request by appending a
        slash to its path."""
        try:
            # First pass: try to match the path as given.
            for url_regex, handler in self._routes.items():
                match = url_regex.match(request.path)
                if match is not None:
                    for extension in self._extensions:
                        extension.process_request(request)
                    response = handler(request, *match.groups())
                    for extension in self._extensions:
                        extension.process_response(request, response)
                    return response
            # Second pass: see whether appending a slash would match.
            for url_regex, handler in self._routes.items():
                match = url_regex.match(request.path + '/')
                if match is not None:
                    return HTTPRedirect(request.path + '/')
            # In debug mode, serve static files directly.
            if (self._debug and self._static_url and self._static_dir
                    and request.path.startswith(self._static_url)):
                static_filename = os.path.join(
                    self._static_dir, request.path[len(self._static_url):])
                try:
                    with open(static_filename, 'rb') as static_file:
                        content_type, charset = mimetypes.guess_type(static_filename)
                        return HTTPResponse(static_file.read(),
                                            content_type=content_type,
                                            charset=charset)
                except OSError:
                    pass
            raise HTTPNotFound()
        except Exception as error:
            if not isinstance(error, HTTPError):
                if self._debug:
                    traceback.print_exc()
                error = HTTPInternalServerError()
            return self._error_handler(error)

    def __call__(self, environ, start_response):
        """WSGI interface."""
        return self.handle_request(HTTPRequest(environ))(environ, start_response)
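The `__call__` method above follows the standard WSGI calling convention: the application is invoked with `environ` and `start_response` and returns an iterable of byte strings. A minimal, self-contained sketch of that convention (independent of the whodat framework above, using only the standard library):

```python
from wsgiref.util import setup_testing_defaults

def app(environ, start_response):
    # A WSGI application: report status/headers, then return body bytes.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello from ' + environ['PATH_INFO'].encode()]

# Drive the app by hand, the way a WSGI server would.
environ = {}
setup_testing_defaults(environ)   # fills in PATH_INFO='/', etc.
captured = {}

def start_response(status, headers):
    captured['status'] = status
    captured['headers'] = headers

body = b''.join(app(environ, start_response))
print(captured['status'], body)   # 200 OK b'hello from /'
```

A real server such as `wsgiref.simple_server` performs exactly this call sequence for each request, which is why `WSGIApplication.__call__` can hand the environ to `HTTPRequest` and return the response object's own WSGI callable.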
A number of people (Gareth and Mark from Adobe – thanks guys) indicated that the first blog on this topic was correct, but incomplete. Here is the rest of the story…

Q: What is an orchestration?

A: An orchestration is really a LiveCycle process that runs as if it were just Java code. You design an orchestration by dragging a series of steps into a "short-lived" process and joining those steps together using lines. Each step is really a method call on an object, and the process engine simply follows the lines and executes each method call in the correct order. It's basically visual programming. You can invoke an orchestration from Java or C# or other code, via SOAP, from another orchestration, etc. When you call it, it's almost identical to calling a real Java program, except that:

- The logic of the code is visible graphically
- It can be maintained and modified by non-programmers
- It's much easier and quicker to change
- It runs a teeny bit slower than if you'd written the code in Java

There is a sample orchestration for rendering a Form Guide as a step inside of LiveCycle Workspace – you can refer to that if you want a closer look at both an orchestration, and also an orchestration rendering a Form Guide.

Q: What if I'm using the feature where you can switch between the Form Guide and a PDF?

A: Well, things do get a little more complicated. In this case, a few extra things happen:

- After loading, the SWF file checks whether the minimum version of Reader/Acrobat is available.
- If so, it creates a new, hidden DHTML iframe.
- Into the iframe, it loads a URL that points back to the LC Forms servlet.
- When invoked, the servlet in turn invokes the LC Forms Render process. It supplies different parameters, this time requesting a PDF to be returned rather than SWF.
- The PDF is returned to the hidden iframe.
- The Form Guide enables a button that allows the end user to toggle between the Form Guide and the PDF view.
- When the user clicks the PDF toggle button, the Form Guide extracts the current state of the XML contained in the Form Guide and dynamically injects it into the PDF.
- The Form Guide then hides itself and displays the iframe containing the PDF.
- When the user toggles back, the current value of the XML is obtained from the PDF and injected back into the Form Guide, and the hide/show happens in reverse.
- Like the generation of the Form Guide SWF itself, the PDF will also be cached by LC Forms – on subsequent invocations, the PDF will be obtained from the cache.

Please click on the "Comments" link for some excellent additional material – a big thank you to John for contributing.
Netlink sockets are the method that the Linux kernel uses to pass routing, interface and other miscellaneous networking information around, both within the kernel and between the kernel and userspace. It replaces the old ioctl(2) based method and is far superior - in fact, as soon as the kernel receives a networking ioctl, it is converted to a netlink message before being shipped off for further processing.

The netlink protocol uses a special type of socket(2) to communicate with the Linux kernel. This socket is called a "netlink socket", surprisingly enough, and can be created by specifying AF_NETLINK as the first argument to a socket(2) call. The socket type (second argument) can be either SOCK_DGRAM or SOCK_RAW - it makes absolutely no difference! The third argument (netlink family) specifies which part of the Linux networking stack you want to modify; for example, NETLINK_ROUTE can be specified to modify the routing table (including interfaces), or NETLINK_ARPD to manipulate the ARP table. A full list of available netlink families is found in netlink(7). NETLINK_ROUTE is the most commonly used netlink family, as it is used to add, delete and modify routes in the kernel's routing table, and can also be used to add, delete and modify the interfaces on the machine. Some of the basic netlink principles are documented in RFC 3549.

There is somewhat of a lack of easy-to-read documentation on how to program using netlink sockets, but the information is all there in the end. As a start, try the netlink(3), netlink(7), rtnetlink(3) and rtnetlink(7) manpages, which provide a very technical description of the netlink protocol; all the information you need to write a program using netlink is contained in these manpages... should be easy from here, right?
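To make the description above concrete, here is a minimal sketch in Python (which exposes AF_NETLINK on Linux) that opens a NETLINK_ROUTE socket and asks the kernel to dump its interface list. This assumes a Linux host; the numeric constants are spelled out by hand because the standard library does not export all of them:

```python
import socket
import struct

NETLINK_ROUTE = 0            # rtnetlink: routes, links, addresses
RTM_GETLINK = 18             # "dump the network interfaces" request
NLM_F_REQUEST, NLM_F_DUMP = 0x1, 0x300

# Socket type makes no difference for netlink: SOCK_RAW == SOCK_DGRAM here.
s = socket.socket(socket.AF_NETLINK, socket.SOCK_RAW, NETLINK_ROUTE)
s.bind((0, 0))               # pid 0: the kernel assigns our port id

# Request = struct nlmsghdr (16 bytes) followed by struct ifinfomsg (16 bytes)
ifinfomsg = struct.pack('BxHiII', socket.AF_UNSPEC, 0, 0, 0, 0)
nlmsghdr = struct.pack('IHHII', 16 + len(ifinfomsg), RTM_GETLINK,
                       NLM_F_REQUEST | NLM_F_DUMP, 1, 0)
s.send(nlmsghdr + ifinfomsg)

# The kernel answers with one RTM_NEWLINK (type 16) message per interface.
data = s.recv(65536)
msg_len, msg_type = struct.unpack_from('IH', data)
print(msg_type)
s.close()
```

Parsing the interface attributes out of the reply requires walking the aligned attribute (rtattr) stream; that bookkeeping is exactly what libnetlink, described below, takes care of.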
The iproute2 package is the base implementation of the netlink interface; it replaces all the old Linux networking utilities (ifconfig, route, etc.) with a single binary called ip, which performs all of these tasks using the netlink interface. I highly recommend using this package as a reference when coding netlink-related applications. In particular, iproute2 contains a netlink library (libnetlink) which deals with much of the low-level protocol interaction between your application and the kernel. Unfortunately, the library is not separately packaged and you'll have to spend some time extracting it from the iproute2 package before it is useful.

Coming soon - some basic examples of how to program using libnetlink -- talk to MattBrown if you want them and they're not here yet!

I don't know why. They might have been drunk at the time -- PerryLorier

The reason why is that much of the system parameters are moving this way and they were just too lazy to convert the other ones too, I suspect -- IanMcDonald
Sorry for asking - I searched everywhere but I still can't find a solution to what I need to do. Basically, I already have an HTML form ready for use as a contact page. It doesn't need Ajax validation and the like. On submit, it just needs to validate the data; if there are errors, list the errors together with the contact page (with the inputted values already showing and ready to be edited). If everything is okay, show a confirmation page listing everything the user typed, with a final "Submit" button below (also using custom HTML). If clicked, it will actually send the email and show a "Thanks" screen (also using custom HTML).

The problem is, the HTML form layout is pretty unique and is impossible to recreate using just YAML in the header frontmatter. That's why I need to use my HTML with as little modification as possible. I don't need the form to be customizable with the Admin. It can be a hard-coded HTML form (and I'd prefer it that way). Also, I need a "confirmation" page, which Grav Forms doesn't provide, as far as I know.

My last resort will be to create a custom plugin (hopefully that would fix my problem - if it doesn't, then I'm really screwed), but if I can avoid creating a custom plugin, that'd be really great, as I don't really have the time to learn the Grav API (and brush up my PHP skills). Is there any method I can take to achieve this? If I need to write my own plugin, then I guess I have no choice. Really sorry, I looked everywhere but I can't find an answer. I'm just desperate. Thanks in advance!

Did you already have a closer look at the Twig templates that come with the Grav Form plugin? These templates offer a high degree of flexibility, far beyond what you can do with frontmatter alone. And, of course, welcome to Grav!

Thanks for the reply! Unfortunately, yes, I did, but I'm sorry to say that I don't really understand it.
It doesn't seem to offer the possibility to show a "confirmation" page, and it seems to offer too much (so much that I don't understand how to customize it) when all I need is really simple - to use an actual HTML form as the template itself. Is there a possibility for me to use the HTML form itself as the template with minimal HTML tweaking? (I can edit HTML - it's just the Form plugin that I can't wrap my head around.) Sorry for being annoying. Also, thanks for the warm welcome! I find Grav vastly superior to other CMSes, and I want to use it for all my CMS needs even in future projects. I really love the blueprints feature; it's just the Form plugin that I can't really figure out - all I want is to use my premade HTML, and not render the form from the frontmatter YAML at all…

I'm currently not able to look at the Form plugin code, so I can just give some hints: it is really worth it to dive deeper into the functionality of Twig. Basically, you have everything at hand that you normally use with HTML, such as divs, paragraphs, classes and so on. But, as a BIG addition: all the internal variables that Grav or your plugin uses. So, it is much easier to achieve what you want than with plain HTML! I would recommend taking the templates from the Forms plugin to learn Twig, then figuring out what you don't need, and eventually adding things that are not available.

I'll quote Ricardo on a similar question on the Discord channel: "All you need is name="data[variablename]" - if the variable name matches the form.md then it will submit." So, you need to add name="data[variablename]" to the fields in your form and it should do the trick.
XPath expression for containing element of a given text node

I am coding some Perl to use XPath to locate a particular td element within a table that looks similar to this:

<table>
  <tr>
    <td>...</td>
    <td><font color="white" face="verdana, Helvetica, Arial" size="2">Showing <b>1</b>-<b>100</b> of <b>200</b> total</font></td>
    <td>...</td>
    <td>...</td>
  </tr>
</table>

What I want is to find a td element that has a font/text() node that contains the string Showing. A direct comparison works fine:

//td[font/text()="Showing "]

but I want to use the contains() XPath function so that the match is more flexible. I have tried

//td[contains(font/text(), "Showing ")]

but this raises the error

XPath failed due to: A sequence of more than one item is not allowed as the first argument of contains()

and I have managed to achieve what I want with

//td[font/text()[contains(., "Showing")]]

but this is very ugly and I am hoping for something more concise. Please can someone improve on this for me, or perhaps confirm that this is the best and most concise way?

More concise? You have an XPath that says "all tds that have a font whose text contains Showing" - which is exactly your problem statement! - and you want it to be more concise?!

@AakashM: Yes I do. Just because it is one way to express what I want doesn't mean it's the cleanest. I don't have to write //td[font/text()[.="Showing "]] so I thought there may be a way using contains instead of =.

So why do you think the expression is "very ugly"? It is the one I would use - and this means that, at least to my knowledge, there isn't a more elegant one. XPath 2.0 allows one to specify a more elegant expression that selects the same nodes. Do you want an XPath 2.0 answer? Correction: even in XPath 2.0, the one you are using is the most elegant.
It isn't meaningful to replace contains() with a regex, and the expression //td[some $t in font/text() satisfies contains($t, 'Showing')] compared to yours seems "stupidly verbose" to me.

Try this: //td[contains(font/text()[1], 'Showing ')]

Thanks Kirill. That works, but only if it is the first text node that matches. I want to find a td where any font/text() contains the given string.

@Borodin You mean like this: //td[contains(font/text(), 'Showing ')] ?

@Borodin, according to the XPath spec, if an argument is not of type string, it is first converted to a string using the string() function, and the result of that conversion is then evaluated. So your XPath engine doesn't work properly.

@KirillPolishchuk: Borodin is right - the XPath expression in this answer fails to select a td where contains(font/text()[2], 'Showing') is true(), but contains(font/text()[1], 'Showing') is false().
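For readers following along in Python rather than Perl, the accepted expression can be checked with lxml (assuming it is installed). Note that lxml implements XPath 1.0, where contains() on a node-set quietly uses only the first node rather than raising the 2.0 "sequence of more than one item" error quoted above, so the nested-predicate form remains the robust choice:

```python
from lxml import etree

html = ('<table><tr><td>x</td>'
        '<td><font size="2">Showing <b>1</b>-<b>100</b> of '
        '<b>200</b> total</font></td></tr></table>')
root = etree.fromstring(html)

# Nested predicate: match a td if ANY text node of font contains "Showing"
hits = root.xpath('//td[font/text()[contains(., "Showing")]]')
print(len(hits))   # 1
```

The XPath 2.0 quantified form (`some $t in font/text() satisfies ...`) is not available in XPath 1.0 engines like lxml's, which is another reason the nested predicate is the practical answer.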
The .NET Framework offers a number of services and classes that help developers write secure code and allow administrators to tailor the permissions granted to code for access to protected resources. In addition, the security model supports role-based security and cryptography.

With the advent of the .NET Framework 4, major changes were made to the code access security structure. Security policy no longer applies to applications: all desktop applications, including those launched from a network share as well as those on the local computer, run as full-trust applications. Partially trusted applications must run within a sandbox, which determines their grant set. The permission system is still in use, but it is now superseded by the security transparency rules.

Changes in the security model of the .NET Framework

With the .NET Framework 4.5 we have a two-tier security structure for managed applications. A Windows security container is present for running Windows Store apps, and this ensures limited access to resources. Managed applications run fully trusted within the container. As far as the developer is concerned, nothing needs to be done to elevate privileges from the perspective of CAS, or Code Access Security.

Major Security Concepts

The .NET Framework provides security transparency, role-based security and code access security to handle security concerns around mobile code and to let components determine a user's level of authorization. These security mechanisms use a consistent, uncomplicated model, so that developers familiar with code access security can conveniently use role-based security, and vice versa.
Role-based security and code access security are implemented by means of a common infrastructure made available through the common language runtime. Let us discuss some of the major security concepts before we elaborate on role-based and code access security.

- Security permissions: The runtime uses objects known as permissions to impose restrictions on managed code. Code is allowed to perform only those operations for which it has permission.
- Type safety: Type-safe code accesses only those memory locations it is authorized to access. For instance, type-safe code cannot read values from the private fields of another object; it can access types only in allowable, well-defined ways.
- Principal: A principal represents the identity and role of a user and acts on the user's behalf. Role-based security in the .NET Framework supports three kinds of principals: Windows principals, generic principals and custom principals.
- Authentication: The process of discovering and verifying a principal's identity by examining the user's credentials and validating them against some authority. Code can directly use the information obtained during authentication.
- Authorization: The process of determining whether a principal is permitted to carry out a requested action.

Today, highly networked computer systems are continuously exposed to code that originates from a variety of possibly unknown sources. Code can be contained in documents, attached to email, or downloaded from the Internet. Unfortunately, many computer users have firsthand experience of the effects of malicious mobile code such as worms and viruses, which can damage or destroy data, costing money and time.
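The principal and authorization concepts above can be sketched generically. The following is a small, language-agnostic illustration in Python, not the .NET API; the class and role names are invented for the example:

```python
class Principal:
    """A principal: an identity plus the roles it holds."""
    def __init__(self, identity, roles):
        self.identity = identity
        self.roles = set(roles)

def requires_role(role):
    """Decorator performing the authorization check before the action."""
    def decorator(fn):
        def wrapper(principal, *args, **kwargs):
            if role not in principal.roles:
                raise PermissionError(
                    f"{principal.identity} lacks role {role!r}")
            return fn(principal, *args, **kwargs)
        return wrapper
    return decorator

@requires_role("manager")
def approve_payment(principal, amount):
    return f"approved {amount}"

alice = Principal("alice", ["manager"])   # authenticated elsewhere
bob = Principal("bob", ["clerk"])

print(approve_payment(alice, 100))        # approved 100
# approve_payment(bob, 100) would raise PermissionError
```

Authentication (establishing who the principal is) happens before this code runs; the decorator models only the authorization step, the per-action role check that role-based security frameworks perform.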
The .NET Framework offers a security mechanism known as code access security to help protect computer systems from malicious mobile code, to allow code of unknown origin to run safely, and to help prevent trusted code from intentionally or accidentally compromising security. Code access security permits code to be trusted to varying degrees, depending on where the code originates and on other aspects of its identity. It also enforces these varying levels of trust on code, thereby minimizing the amount of code that must be fully trusted in order to run. Code access security reduces the likelihood that your code will be misused by malicious or error-filled code, and it reduces your liability, because you can specify the set of operations your code should be allowed to perform.

Roles are often used in business or financial applications to enforce policy. Role-based security in the .NET Framework supports authorization by making information about the principal, constructed from an associated identity, available to the current thread. The security support provided here is flexible and extensible enough to meet the requirements of a wide range of applications. Interoperation with existing authentication infrastructures, such as COM+ 1.0 Services, is possible, as is the creation of a custom authentication system. Role-based security is particularly well suited to ASP.NET web applications, which are processed primarily on the server, but it works on both the server and the client.

The security model of the .NET application development framework enables secure coding through effective defense strategies. This gives developers a great degree of flexibility without compromising productivity. Expert .NET developer teams in India can leverage the benefits of this framework and help build applications for you. We provide .NET development services.
If you would like to discuss this with one of our lead developers, please get in touch with us at Mindfire Solutions.
Here are some discoveries that fascinate me this week.

- Drug discovery
- Computational biology
- Career paths of programmers: on the ladder, on the ground, and the third way
- Other gems

ADME properties of antibodies

I found an interesting review, recommended by colleagues, about the absorption, distribution, metabolism, and excretion (ADME) properties of biologics, focusing especially on the learnings from small molecules: Thomayant Prueksaritanont and Cuyue Tang, AAPS Journal, 2012. A related paper, Jain et al., PNAS, 2017, examines biophysical properties of clinical-stage antibodies (which I like to imagine as 'survivors' of discovery and development programs) and provides a complementary view on this topic.

On the antibody side, many antibodies follow the structure of immunoglobulin G (IgG), the most common type of antibody found in circulation and extracellular fluids. IgG consists of both heavy and light chains. Besides IgG, there are four other heavy-chain isotypes, known as IgA, IgD, IgE, and IgM. For the light chain, there are two isotypes: $\kappa$ and $\lambda$. Here, isotype (class) refers to the unique constant-region segments of the immunoglobulin gene, which form the fragment crystallizable (Fc) region and the lower segment of the fragment antigen-binding (Fab) portion of an antibody. More about isotypes can be found on Wikipedia.

There are four subclasses of IgG (IgG1-4). IgG1 (crosses the placenta, activates complement, high affinity to the Fc receptor on phagocytic cells), IgG2 (does not cross the placenta, moderate complement activation, but extremely low Fc-receptor affinity), and IgG4 (crosses the placenta, no complement activation, intermediate affinity) are often chosen as antibody formats. Many variants and formats of antibodies are available nowadays, which are reviewed by Spiess et al. IgG alone can protect the body from infection through the activities of its antigen-binding region.
However, the immune functions of IgGs are largely mediated by proteins and receptors that bind to the Fc region of IgG, known collectively as Fc receptors. Fc receptors can be classified into three classes by the antibody type they bind: Fc-gamma receptors (binding IgG antibodies), Fc-alpha receptors (binding IgA antibodies), and Fc-epsilon receptors (binding IgE antibodies). The Fc-gamma receptors (Fc$\gamma$R) include many classical membrane-bound surface receptors, as well as atypical intracellular receptors and cytoplasmic glycoproteins. A particular atypical Fc$\gamma$R, the neonatal Fc receptor (FcRn), is especially interesting, among other reasons because it strongly influences IgG biology, including the stability and PK/PD profiles of IgG-format antibodies and albumin. It acts as a recycling or transcytosis receptor that maintains IgG in the circulation and transports it across cellular barriers. It is also an immune receptor, interacting with and assisting antigen presentation of peptides derived from IgG immune complexes. Two interesting reviews on the biology of FcRn are Roopenian and Akilesh, Nature Reviews Immunology, 2007, and Pyzik et al., Front. Immunol., 2019. I summarized my learning notes in another post.

Applied bioinformatics for the identification of regulatory elements

Wasserman and Sandelin, Nature Reviews Genetics, 2004 is a classic paper in the field of bioinformatics and genomics. It introduced key concepts such as the position weight matrix (PWM), also known as the position-specific scoring matrix (PSSM); phylogenetic footprinting (identification of conserved regulatory elements by comparing genomic sequences between related species); and the combinatorial interaction of transcription factors via cis-regulatory module (CRM) analysis. These and other concepts introduced in the paper are fundamental to many tools that we use today to characterize elementary features in genomes.
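As a toy illustration of the PWM idea (the aligned sites, pseudocount and background below are my own invented example, not the paper's Box 1): count each base per column of the aligned sites, convert counts to probabilities with a small pseudocount, and take log-odds against a background frequency:

```python
import math

# Invented toy alignment of binding sites, one column per position
sites = ["TACGAT", "TATAAT", "TATAAT", "GATACT", "TATGAT", "TATGTT"]
bases = "ACGT"
n = len(sites)
pseudo = 0.8        # total pseudocount, split evenly over the 4 bases
bg = 0.25           # uniform background frequency

pwm = []
for pos in range(len(sites[0])):
    column = [s[pos] for s in sites]
    row = {}
    for b in bases:
        p = (column.count(b) + pseudo / 4) / (n + pseudo)
        row[b] = math.log2(p / bg)    # log-odds score for base b
    pwm.append(row)

def score(seq):
    """Score a candidate site: sum of per-position log-odds."""
    return sum(pwm[i][b] for i, b in enumerate(seq))

print(round(score("TATAAT"), 2), round(score("GCGCGC"), 2))
```

A sequence resembling the aligned sites (here the consensus-like "TATAAT") scores high, while an unrelated sequence scores strongly negative, which is exactly how PWM scanning flags candidate regulatory elements.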
In Box 1 of the paper, the authors give an example of how to derive a PWM from a set of aligned sequences. Unfortunately, there seem to be a few typos in the tables, which obscure the interpretation.

Simple graphical user interfaces with guietta and Python

alfiopuglisi/guietta is a tool for making simple graphical user interfaces (GUIs) with Python. Discovered via Hacker News.

Hill tail-index estimator and Hill equation

The Hill tail-index estimator was proposed by Bruce M. Hill as a simple, general method for inference about the tail behaviour of a distribution (Hill et al.). It does not assume any global form for the distribution function, only the form of the behaviour in the tail where inference is desired. It is one way to characterize heavy-tailed distributions, alongside the Pickands and ratio estimators of the tail index. Do not confuse it with the Hill coefficient of the Hill equation in biochemistry, which is attributed to the English physiologist Archibald Vivian (A.V.) Hill. As the Hill coefficient increases, the saturation curve becomes steeper.

Career paths of programmers: on the ladder, on the ground, and the third way

From guruevi, an apparently good manager:

It's the (good) evolution of any technical manager - you've got too much work to delve deep into the code and its dependencies. I haven't learned many new programming languages either in the last few years; I just farm the work out to my minions, write in pseudocode and once in a while I will read the documentation of some new framework or library. The rest of my time is spent dealing with superiors and customers, mostly managing expectations and shielding my team from the ire of some micromanager.

From jd, a programmer:

I consider programming enjoyable, not a chore. Couldn't give a damn if I'm still coding into my 90s. I like solving problems and that's a category of problem. Spend my time away from keyboards solving different problems for other organizations.
Archaeology, history, maths - I don't care; it's all called problem solving and all fun.

From TechyImmigrant, a hardware architect with programming skills:

In my job, people regard me as a hardware architect more than a coder (of SystemVerilog RTL, Python and C, mostly). This comes from years of coding, during which I developed some important circuits for my employer with cunning designs. Then they promote you and want you to write documents describing things to be coded by others. I find that problematic, because all my most cunning designs were arrived at iteratively: coding up solutions, identifying problems and then refining the solution until it worked - for being coded, for its size and efficiency, for debugability and testability on the lab bench and in high-volume manufacturing, and for remaining secure while remaining testable. So I still code RTL and Python and C when coming up with my designs, document them, and then throw the code and documents over to the rest of the team to beat into submission, test, and kick into shape for mass production.

Apparently these are the three typical career paths of programmers: becoming a professional manager (which I call 'on the ladder'), continuing to code and solve problems ('on the ground'), and trying to stay between the two (one foot on the ladder and the other on the ground). I can imagine that these are also the career paths scientists and engineers can choose when they work in industry. Each and every one of us is asked to make a decision based on circumstances, capabilities and wishes.

Other gems

- I read this week the book Man's Search for Meaning by Viktor Frankl. The conclusion I draw is that we do not ask life for a meaning; rather, life asks us for a meaning. Given the circumstances, we have to make active choices and live a life that we think worth living.
- Math book recommendations for teenagers, a discussion on Hacker News.
- Design Thinking for data products, by Norbert Wirth and Martin Szugat.
- Wir haben das Hier und Jetzt. Und das ist alles, was wirklich zählt (We have Here and Now. This is all that really matters), an interview with Prof. Dr. Andrew Gloster (in German).
The RefApp 2.x openmrs-module-patient-flags module is not worth integrating into OMRS 3.x as-is. This is mainly because the Uganda EMR team reported that the RefApp 2.x version of this module is slow. The patient-flags module defines a mechanism which evaluates flags live: when a patient is brought up, the system runs scripts against the patient's data to determine the flags. However, this approach may not be scalable, as it may be too slow to display more than one or two flags.

The FHIR Flag resource can be used to store data related to flags. The data stored should include, at least, the data in the FHIR Flag model. This would allow us to build out how patient flags are created through different mechanisms. The patient flags are already created as part of the CDS scripts in SQL, so importing those flags is ideal. The flag module should store only unexpired flags.

I proposed this as a project for GSoC 2023 so that a GSoC student can update this module. cc @dkayiwa @ibacher @grace @suruchi

Wow, this is a good project. I would love to work on this project alongside anyone contributing to it.

As the IDI and soldev team, we envision enhancing patient flags in Call for Life systems across different implementations in Africa. However, we realized that capturing flags using flag endpoints was not something the community embraced. That being so, we can discuss further how we can achieve this. cc @pwargulak

Thanks @sharif for the mention. It's an interesting project. In Connect for Life, we are using patient flags in one of the implementations, in a production environment, and we did notice the performance issues @jnsereko mentions - we have a flag which takes a couple of seconds to calculate. @jnsereko, how about evaluating these flags in some sort of background daemon thread and storing the precomputed results? That way, the patient dashboard loads the already stored computed results.
We would then need to keep it in sync with the obs or any other data which could potentially affect the computed results. To give an example, we could spawn a background thread to recompute for a patient (or a group of patients, depending on the type of flags) on form submission. Or simply do this as AOP around the ObsService or any other relevant services, for data submitted outside of forms. This can be refined further, but I just wanted to start by throwing in some rough ideas on possible next steps.

Thanks @dkayiwa. I would personally go with option 1, spawn a background thread to recompute... I think we shall need to:

- Enable asynchronous processing in the 2.x patient flags module.
- Modify the existing service class that handles flag evaluation by adding an asynchronous function that accepts a patient or group of patients as parameters, and move the flag re-evaluation logic there.
- In the controllers, just call that asynchronous function and pass the desired patient or group of patients.
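The "background thread recompute" idea discussed above, reduced to a minimal, framework-free illustration (in Python for brevity; the real module would be Java inside OpenMRS, and the patient data and rules here are invented): flag evaluation runs off the request path and its results land in a cache that the dashboard can read instantly.

```python
import queue
import threading

flag_cache = {}              # patient_id -> list of flag names
work = queue.Queue()         # recompute requests from form submissions

# Invented placeholder flag rules; the real ones are the (slow) CDS scripts.
RULES = {
    "low-bmi": lambda p: p["bmi"] < 18.5,
    "overdue-visit": lambda p: p["days_since_visit"] > 90,
}

def evaluate_flags(patient):
    return [name for name, rule in RULES.items() if rule(patient)]

def worker():
    """Background daemon: precompute flags so page loads never wait."""
    while True:
        patient = work.get()
        if patient is None:          # shutdown sentinel
            break
        flag_cache[patient["id"]] = evaluate_flags(patient)
        work.task_done()

threading.Thread(target=worker, daemon=True).start()

# On form submission: enqueue a recompute instead of evaluating inline.
work.put({"id": 7, "bmi": 17.0, "days_since_visit": 120})
work.join()                          # wait only for this demonstration
print(flag_cache[7])                 # ['low-bmi', 'overdue-visit']
```

The dashboard then reads `flag_cache` (a database table in practice) rather than re-running the rules, which is what makes the stale-data concern above real: any write that can change a rule's outcome must enqueue a recompute.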
On Tue, Sep 11, 2012 at 6:00 PM, Joe Niederberger <email@example.com> wrote:
> Why not just address the challenge: give us a detailed mathematical account of the "process" (Devlin) that multiplication is that is not flat-out circular.
>
> Note that I haven't used the word scaling, or property, or model, or any of these complications. Just give a straightforward account of the process (Devlin's word - and he used it explicitly) of multiplication of any two real numbers that's not circular and shows how it produces an answer.
>
> "Repeated addition" refers to something better thought of as a computational procedure.
> Math that is computable is all of the math that the vast majority of students will ever need.

On your challenge further above: Define "answer". That is, in your challenge I detect the hidden "challenge" to compute that which cannot be computed. In other words, your claim of circularity assumes that a produced answer counts as an answer only if it is the result of a computable process - that a model counts only if it is computable.

On your claim that we can ban almost all of the real numbers and everything will be just fine: I reiterate everything I said in my earlier post in this thread: "Are you saying that the 'proper' response to being confronted with mathematics that is, or is based on, that which is continuous or non-computable is to deny it? Are you saying that we should accept only mathematics that is, or is based on, that which is discrete or computable? If so, then you are denying essentially all of precalculus and calculus, and then just about everything that comes afterwards in analysis and topology, and even a good chunk of abstract algebra - in other words, most of abstract mathematics in general. That is, you are saying that the only 'real' mathematics is discrete mathematics or computable mathematics. This is a very, very extreme position.
Perhaps this denial of all of mathematics that is or is based on that which is continuous or non-computable is an example of what Devlin had in mind when he suggested that irreparable harm is being caused to kids by teaching them that repeated addition is what multiplication *is* - it creates later on such a cognitive dissonance at the subconscious or unconscious level, when the person finds out otherwise, that the person has no choice but to come up with such extreme answers to the problem."

Regardless, to meet your challenge above that scaling cannot be a model for the multiplication of arbitrary real numbers - given that I utterly reject your assumption that a model must be computable, and I note that your assumption denies all of geometry based on real numbers because you deny almost all real numbers: here is a real-number-based geometric construction using similar triangles that gives the point (product) ab on the real number line, given the point 1 and arbitrary points (arbitrary real numbers) a and b on the real number line.

Take a standard x-axis and y-axis in the Cartesian plane and, for the sake of simplicity, name the points on these lines according to the fact that each is a real number line. Then: connect 1 on the x-axis to b on the y-axis, and, parallel to that drawn line segment, connect a on the x-axis to the y-axis; this point on the y-axis is ab. That is, in terms of distance from 0 (magnitude, or absolute value): ab is to b as a is to 1 - written in terms of ratios or proportions, ab:b as a:1. And via commutativity we have the other way as an alternative: connect point 1 on the x-axis to point a on the y-axis, and, parallel to that drawn line segment, connect b on the x-axis to the y-axis; this point on the y-axis is ab. That is: ab is to a as b is to 1 - written as ab:a as b:1.
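A quick coordinate check of this construction (my own verification, using the same points) shows the parallel-line condition forces the intercept to be exactly ab:

```latex
% The segment from (1,0) to (0,b) has slope:
\frac{b - 0}{0 - 1} = -b
% The parallel line through (a,0) is therefore:
y = -b\,(x - a)
% which meets the y-axis (x = 0) at:
y = -b\,(0 - a) = ab
% Equivalently, by similar triangles (for b \neq 0):
\frac{ab}{b} = \frac{a}{1}
```

So the drawn point really is the product, with no appeal to repeated addition anywhere in the argument.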
We make things harder than they need to be. We take a basic concept and then diverge into a sea of complexity. This human behavior burrows deep into our thinking, implementations, and explanations. This drive towards complexity makes us ineffective at the core of our profession. Complexity creates new problems as you work to solve the original problem. It breeds bugs. It makes our jobs harder.

The web development and software professions are based upon basic fundamentals and principles. Over the decades, we took these basics and turned them into a profession of complexity. We filled books and lectures with design patterns, heuristics, and best practices. But we failed to agree on the approach. As a result, complexity bred more complexity. In our quest to build perfectly reusable and maintainable code, we failed. Instead, we made it exponentially harder to do our work.

- Reading the code and quickly understanding what is going on and why is harder.
- Sniffing out the root cause of some unexpected behavior is harder.
- Validating all the paths through your codebase is harder.
- Maintaining the codebase is harder.
- Adding a new feature is harder.
- Onboarding new people is harder.
- Finding new talent to join your team is harder.

We made our profession harder. As a result, it takes more time, not less, and we introduce more problems, not fewer. All of this complexity weighs me down. I have over 30 years of experience. Imagine the impact it has on someone with only a few years of experience, or someone brand new. It's too much. Wake up! Harder is the opposite of what we need. Think and do simple. We need simplicity.

How Did We Get Here?

Part of my journey has been (and still is) to develop individuals and teams. In that process, I've come to discover these underlying reasons for how we got to where we are now:

- We want respect.
- We want to be valued and indispensable.
- We think we are supposed to do it this way because those we respect do.

All of the above are human nature.
But it's our approach that is failing us.

Simplicity is Better

- Make things so simple that anyone can step into your shoes and do your job. Empower others.
- Make it tell us and you what's going on and why.
- Build smaller, more focused pieces that have one behavior.

Shifting your focus to simplicity immediately impacts the quality of your code. It drives down costs, bugs, and time. It makes it easier for all of us to do our work. In my experience, this approach makes you more valuable, marketable, and indispensable. What do you think? Let's discuss it.
The data are from a couple of years ago, but English remains by a long way the number one language of international publishing. One challenge with compiling this list is that some countries, for example the USA, do not have an official language (despite various attempts to introduce one). In these cases, the de facto languages of the country have been counted, which naturally raises questions of its own. In other places, measuring what counts as a de facto language can be a challenge. This list shows the influence of European colonial histories on the world's linguistic map. Languages with a large number of native speakers like Japanese and Mandarin have not travelled far beyond Asia, unlike the Western European languages that colonists spread to all corners of the world.

An interesting way of looking at the development of international languages is via the internet; one source aggregates information from various sources to create a list of the internet's top languages by number of users. This information, based on number of native speakers, comes from Ethnologue, a widely respected encyclopaedia of the world's living languages. At the time of the last update, there were 7,105 living languages, but that figure is likely to have decreased since then as languages become extinct. Since Ethnologue started counting in 1950, they calculate that 6 languages have gone extinct each year.

Compared to the list of total native speakers, this one is relatively easy to measure. Of the languages listed, Arabic provides the greatest challenge because the spoken languages that fall under the umbrella of Arabic are not all mutually intelligible. However, Standard Arabic is used as the written language in countries where dialects of Arabic are spoken. The Economist's Johnson blog has an interesting article on the languages of Wikipedia. But this doesn't quite illustrate how central English is to the web.
A recent piece of research by a pair of Googlers shows the percentage of links going to and from websites in various languages. Unsurprisingly, many more links are pointed at English-language sites from other languages than the other way around. Encyclopaedia Britannica suggests the following list, which seemingly combines native speakers with non-native speakers. English remains the world's most important international language by most measures except for the number of total native speakers, but measuring this can be difficult in itself. A report published by the British Council estimates that around 2 billion people will be learning English at any one time during the next decade. But even then, more than two thirds of the world will not speak English.

So, with that in mind, here are some statistics. Even organisations that set out to present information as neutrally as possible are dependent on censuses and surveys for their information. For example, the EU's Eurobarometer reports rely on asking people in which languages apart from their mother tongue they can hold a conversation. Self-assessment is notoriously tricky, and people will hold themselves to different standards, especially when it comes to learning languages. A more informal way of measuring the languages of the internet would be to look at the number of articles in each language on Wikipedia. When looking at the numbers, it is important to remember that the exact figures are difficult to measure. For example, when the US government asks about languages in a census, they state an approximate accuracy of 90%. In a country of (approximately) 312 million inhabitants, that's a margin of error of (approximately) 31 million people. When it comes to measuring in India, home to two of the listed languages and over a billion people, the challenges are multiplied.
The rankings are based on internet penetration per country and seem to assign a generous value-added dollop of users to English for the many people who are assumed to speak English as a second language and use it for browsing the web. UNESCO measures the number of books published by each country per year. Correlating the data for various countries, the British Council gives the following figures. The tables give an interesting view of the world's most spoken and written languages. Three of the world's ten most widely spoken mother tongues are barely used for publishing, the internet or international communication: fields where English is dominant. Two of these are Indian languages, and educated speakers would be expected to speak English fluently as a second language.

What is the world's most spoken language? The answer depends on how you choose to measure. Some statistics are harder to measure than others; all should be taken with a pinch of salt. For example, estimates for how many people speak a language as a second language are often put forward by people with some political or financial interest in making a certain language appear more or less important. This can skew their estimates. Estimating the number of second language speakers is extremely difficult, and even the best estimates involve a fair amount of guesswork. For example, the British Council suggests in The Future of English? that around 1.5 billion people in total speak English - a figure you will often hear, but one that is 500 million more than the estimate used by Encyclopaedia Britannica. Half a billion people is a large discrepancy.
Over the past month or so I have started increasing the time dedicated to my project and my general interest in the technology industry. A couple of months ago I met one of my neighbors on my block who is deeply entrenched in the Baltimore tech community, and he recommended I check out a few events and resources. One of the events was Baltimore Innovation Week, a week filled with tech-focused events that ended with Startup Weekend Baltimore, an event concurrently hosted in numerous cities around the world. Attending my first event last Tuesday, I was quickly thrown into the world of tech startups, programming, and the people who actively participate in both. The event was a great way to learn about what's popular and about common resources, and it helped me better focus where I should be spending my time and effort. As the week went on, I was able to participate in multiple events that provided varying perspectives on the tech startup industry and a forum for varying levels of expertise to impart wisdom about their past experiences and recommendations. This is a good time to point out how different the tech startup world is from my current work environment, where openly sharing information is taboo and everyone must fend for themselves. It was rather refreshing to see a group of people willingly share ideas and engage one another for a focused cause. All these events led up to Startup Weekend Baltimore, a three-day event where the Baltimore startup community came together and collaborated around pitched ideas for 54 hours, to be presented in front of a panel of judges and graded on various metrics. I was unfortunately unable to attend the first night due to a wedding dinner, but showed up early Saturday morning. The groups had already formed and I was rather at a loss about where to begin and whom to engage.
Fortunately, my neighbor was one of the organizers, so I quickly linked up with him and, after about 30 minutes of meandering, found a team consisting of a couple of LocalUp guys. Over the weekend we worked on an idea one of the guys had and matured it for Sunday's presentation. The concept, called TeamPassword, focused on developing an application that allowed small businesses to manage their company's individual and shared account passwords through an easy and intuitive interface. One of the members was clearly suited as the "developer": he went off and focused on developing a Chrome extension and web application (built on Ruby on Rails), while the rest of us (only two) worked on the presentation, market research, pitch, business model, etc. Sunday rolled around and we presented our idea along with 14 other teams. We believed we had a solid product and were able to demonstrate a working application along with a solid presentation and market feasibility. After all was said and done, we ended up in first place, receiving a set of prizes that further enabled the project to grow and materialize into a functional business. Here's a little write-up by TechnicallyBaltimore about the event. Where TeamPassword goes and my level of participation are still yet to be determined, but the experiences I gained from this past week will be an invaluable resource as I continue to press on and become further involved in the tech community.
Thu, 17 Aug, 14:00 - 15:20

- On Concurrent Multi-Party Quantum Computation. Vipul Goyal (NTT Research & Carnegie Mellon University); Xiao Liang (NTT Research); Giulio Malavolta (Max Planck Institute for Security and Privacy)
Abstract: Recently, significant progress has been made toward quantumly secure multi-party computation (MPC) in the stand-alone setting. In sharp contrast, the picture of concurrently secure MPC (or even 2PC), for both classical and quantum functionalities, still remains unclear. Quantum information behaves in a fundamentally different way, making the job of the adversary harder and easier at the same time. Thus, it is unclear if the positive or negative results from the classical setting still apply. This work initiates a systematic study of concurrent secure computation in the quantum setting. We obtain a mix of positive and negative results. We first show that, assuming the existence of post-quantum one-way functions (PQ-OWFs), concurrently secure 2PC (and thus MPC) for quantum functionalities is impossible. Next, we focus on the bounded-concurrent setting, where we obtain simulation-sound zero-knowledge arguments for both NP and QMA, assuming PQ-OWFs. This is obtained by a new design of simulation-sound gadget that is compatible with the quantum rewinding strategy recently developed by Ananth, Chung, and La Placa [CRYPTO'21] for bounded-concurrent post-quantum ZK. Moreover, we show that our technique is general enough that it also leads to quantum-secure bounded-concurrent coin-flipping protocols, and eventually general-purpose 2PC and MPC, for both classical and quantum functionalities. All these constructions can be based on the quantum hardness of Learning with Errors.
- Fiat-Shamir for Proofs Lacks a Proof Even in the Presence of Shared Entanglement. Frédéric Dupuis (Université de Montréal); Philippe Lamontagne (National Research Council Canada); Louis Salvail (Université de Montréal)
Abstract: We explore the cryptographic power of arbitrary shared physical resources. The most general such resource is access to a fresh entangled quantum state at the outset of each protocol execution. We call this the Common Reference Quantum State (CRQS) model, in analogy to the well-known Common Reference String (CRS). The CRQS model is a natural generalization of the CRS model but appears to be more powerful: in the two-party setting, a CRQS can sometimes exhibit properties associated with a Random Oracle queried once, by measuring a maximally entangled state in one of many mutually unbiased bases. We formalize this notion as a Weak One-Time Random Oracle (WOTRO), where we only ask of the m-bit output to have some randomness when conditioned on the n-bit input. We show that when n − m ∈ ω(lg n), any protocol for WOTRO in the CRQS model can be attacked by an (inefficient) adversary. Moreover, our adversary is efficiently simulatable, which rules out the possibility of proving the computational security of a scheme by a fully black-box reduction to a cryptographic game assumption. On the other hand, we introduce a non-game quantum assumption for hash functions that implies WOTRO in the CRQS model (where the CRQS consists only of EPR pairs). We first build a statistically secure WOTRO protocol where m = n, then hash the output. The impossibility of WOTRO has the following consequences. First, we show the fully black-box impossibility of a quantum Fiat-Shamir transform, extending the impossibility result of Bitansky et al. (TCC '13) to the CRQS model.
Second, we show a fully black-box impossibility result for a strengthened version of quantum lightning (Zhandry, Eurocrypt '19) where quantum bolts have an additional parameter that cannot be changed without generating new bolts. Our results also apply to 2-message protocols in the plain model.

- Oblivious Transfer from Zero-Knowledge Proofs, or How to Achieve Round-Optimal Quantum Oblivious Transfer and Zero-Knowledge Proofs on Quantum States. Léo Colisson (Centrum Wiskunde & Informatica, QuSoft, Netherlands); Garazi Muguruza (University of Amsterdam, QuSoft, Netherlands); Florian Speelman (University of Amsterdam, QuSoft, Netherlands)
Abstract: We provide a generic construction to turn any classical Zero-Knowledge (ZK) protocol into a composable (quantum) oblivious transfer (OT) protocol, mostly lifting the round-complexity properties and security guarantees (plain-model/statistical security/unstructured functions…) of the ZK protocol to the resulting OT protocol. Such a construction is unlikely to exist classically, as Cryptomania is believed to be different from Minicrypt. In particular, by instantiating our construction using Non-Interactive ZK (NIZK), we provide the first round-optimal (2-message) quantum OT protocol secure in the random oracle model, and round-optimal extensions to string and k-out-of-n OT. At the heart of our construction lies a new method that allows us to prove properties on a received quantum state without revealing additional information on it, even in a non-interactive way and/or with statistical guarantees when using an appropriate classical ZK protocol. We can notably prove that a state has been partially measured (with arbitrary constraints on the set of measured qubits), without revealing any additional information on this set.
This notion can be seen as an analog of ZK for quantum states, and we expect it to be of independent interest as it extends complexity theory to quantum languages, as illustrated by the two new complexity classes we introduce, ZKstatesQIP and ZKstatesQMA.

- Single-qubit loss-tolerant quantum position verification protocol secure against entangled attackers. Llorenc Escola Farras (QuSoft); Florian Speelman (University of Amsterdam, QuSoft)
Abstract: We give a tight characterization of the relation between loss-tolerance and error rate of the most popular protocol for quantum position verification (QPV), which is based on BB84 states, and generalizations of this protocol. Combining it with classical information, we show for the first time a fault-tolerant protocol that is secure against attackers who pre-share a linear amount of entanglement (in the classical information), arbitrarily slow quantum information, and that tolerates a certain amount of photon loss. We also extend this analysis to the case of more than two bases, showing even stronger loss-tolerance for that case. Finally, we show that our techniques can be applied to improve the analysis of one-sided device-independent QKD protocols.
package apidiff.internal.analysis.description;

import apidiff.internal.util.UtilTools;

/**
 * Builds the HTML message fragments that describe each kind of API change
 * (removal, visibility change, rename, move, deprecation, addition, etc.).
 */
public class TemplateDescription {

    protected String messageRemoveTemplate(final String typeStruture, final String nameStruture, final String path) {
        String message = "";
        message += "<br>" + typeStruture + " <code>" + nameStruture + "</code>";
        message += "<br>removed from <code>" + path + "</code>";
        message += "<br>";
        return message;
    }

    protected String messageVisibilityTemplate(final String typeStruture, final String nameStruture,
            final String typePath, final String path, final String visibility1, final String visibility2) {
        String message = "";
        message += "<br> " + typeStruture + " <code>" + nameStruture + "</code>";
        message += "<br> changed visibility from <code>" + visibility1 + "</code> to <code>" + visibility2 + "</code>";
        message += "<br>in <code>" + path + "</code>";
        message += "<br>";
        return message;
    }

    protected String messageChangeDefaultValueTemplate(final String typeStruture, final String nameStruture,
            final String typePath, final String path, final String value1, final String value2) {
        String message = "";
        message += "<b>Category Default Value:</b>";
        message += "<br>" + UtilTools.downCaseFirstLetter(typeStruture) + " <code>" + nameStruture + "</code>";
        message += "<br>changed default value from " + value1 + " to " + value2;
        message += "<br>in " + typePath + " <code>" + path + "</code>";
        message += "<br>";
        return message;
    }

    protected String messageFinalTemplate(final String typeStruture, final String nameStruture,
            final String typePath, final String path, final Boolean gain) {
        String message = "";
        message += UtilTools.downCaseFirstLetter(typeStruture) + " <code>" + nameStruture + "</code>";
        message += gain ? "<br>received the modifier <code>final</code>"
                        : "<br>lost the modifier <code>final</code>";
        message += "<br>in <code>" + path + "</code>";
        message += "<br>";
        return message;
    }

    protected String messageStaticTemplate(final String typeStruture, final String nameStruture,
            final String typePath, final String path, final Boolean gain) {
        String message = "";
        message += UtilTools.downCaseFirstLetter(typeStruture) + " <code>" + nameStruture + "</code>";
        message += gain ? "<br>received the modifier <code>static</code>"
                        : "<br>lost the modifier <code>static</code>";
        message += "<br>in " + typePath + " <code>" + path + "</code>";
        message += "<br>";
        return message;
    }

    protected String messageReturnTypeTemplate(final String typeStruture, final String nameStruture,
            final String typePath, final String path) {
        String message = "";
        message += "<br>" + UtilTools.downCaseFirstLetter(typeStruture) + " <code>" + nameStruture + "</code>";
        message += "<br>changed the return type";
        message += "<br>in <code>" + path + "</code>";
        message += "<br>";
        return message;
    }

    protected String messageParameterTemplate(final String typeStruture, final String nameStrutureAfter,
            final String nameStrutureBefore, final String typePath, final String path) {
        String message = "";
        message += "<b>Category " + UtilTools.upperCaseFirstLetter(typeStruture) + " Parameters</b>:";
        message += "<br><code>" + nameStrutureBefore + "</code>";
        message += "<br>changed the list of parameters";
        message += "<br>to <code>" + nameStrutureAfter + "</code>";
        message += "<br>in " + typePath + " <code>" + path + "</code>";
        message += "<br>";
        return message;
    }

    protected String messageMoveTemplate(final String typeStruture, final String fullName,
            final String pathBefore, final String pathAfter) {
        String message = "";
        message += UtilTools.downCaseFirstLetter(typeStruture) + " <code>" + fullName + "</code>";
        message += "<br>moved from <code>" + pathBefore + "</code>";
        message += "<br>to <code>" + pathAfter + "</code>";
        message += "<br>";
        return message;
    }

    protected String messageRenameTemplate(final String typeStruture, final String nameBefore,
            final String nameAfter, final String path) {
        String message = "";
        message += UtilTools.downCaseFirstLetter(typeStruture) + " <code>" + nameBefore + "</code>";
        message += "<br>renamed to <code>" + nameAfter + "</code>";
        message += "<br>in <code>" + path + "</code>";
        message += "<br>";
        return message;
    }

    public String messagePullUpTemplate(final String typeStruture, final String nameStruture,
            final String nameClassBefore, final String nameClassAfter) {
        String message = "";
        message += "<br> Pull Up " + UtilTools.upperCaseFirstLetter(typeStruture) + " <code>" + nameStruture + "</code>";
        message += "<br>from <code>" + nameClassBefore + "</code>";
        message += "<br>to <code>" + nameClassAfter + "</code>";
        message += "<br>";
        return message;
    }

    public String messagePushDownTemplate(final String typeStruture, final String nameStruture,
            final String nameClassBefore, final String nameClassAfter) {
        String message = "";
        message += "<br> Push Down " + UtilTools.downCaseFirstLetter(typeStruture) + " <code>" + nameStruture + "</code>";
        message += "<br>from <code>" + nameClassBefore + "</code>";
        message += "<br>to <code>" + nameClassAfter + "</code>";
        message += "<br>";
        return message;
    }

    public String messageDeprecate(final String typeStruture, final String nameMethodBefore, final String nameClassBefore) {
        String message = "";
        message += "<br>" + typeStruture + " <code>" + nameMethodBefore + "</code> ";
        message += "<br>deprecated in <code>" + nameClassBefore + "</code>";
        message += "<br>";
        return message;
    }

    public String messageAddition(final String typeStruture, final String nameStruture, final String nameClass) {
        String message = "";
        message += "<br>" + typeStruture + " <code>" + nameStruture + "</code>";
        message += "<br>added in <code>" + nameClass + "</code>";
        message += "<br>";
        return message;
    }
}
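For illustration, here is the exact HTML fragment one of these templates produces. The deprecation logic is inlined into a standalone class (so it runs without UtilTools), and the argument values are made up:

```java
// Self-contained demo of the deprecation template's output shape.
// This is an inlined copy for illustration, not the apidiff API itself.
public class TemplateDemo {

    public static String messageDeprecate(String typeStruture, String nameMethodBefore, String nameClassBefore) {
        String message = "";
        message += "<br>" + typeStruture + " <code>" + nameMethodBefore + "</code> ";
        message += "<br>deprecated in <code>" + nameClassBefore + "</code>";
        message += "<br>";
        return message;
    }

    public static void main(String[] args) {
        // Prints: <br>Method <code>findAll()</code> <br>deprecated in <code>UserDao</code><br>
        System.out.println(messageDeprecate("Method", "findAll()", "UserDao"));
    }
}
```

Each template follows the same pattern: a structure name wrapped in `<code>`, a change description, and the enclosing path, each on its own `<br>`-separated line of the generated report.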
Should a designer learn to code? This question has been around for some time and has been hotly debated online and within our own minds. Individually, there are not many reasons you shouldn't learn some coding. At the same time, it is an investment in time (and money) you must make, and that time might be needed elsewhere. The short answer might be: I want to learn more, but when do I do it? In my own quest to solve this internal problem, I will lay out some of the benefits and reasons to hold off. I don't want to call it a pro/con list because I doubt any of us considers more knowledge a bad thing. As we all share an interest in designing digital products, we all have an interest in some level of coding.

Understanding the process — You can design anything you want, but will it work? Is it something a developer can make easily, or would it take months? In one app design, I created a tab that, when pushed, would fold up into a paper airplane and fly off the screen. I thought that was very cool, but could a developer make that in JS? It shouldn't be too hard to have the container reshape itself and move across a screen. Although I was able to prototype this using the Principle app exactly as I envisioned it performing, I have no idea if it could be done in a real app. You don't need to know how to code this interaction, but it is helpful to be able to think about what would go into creating it. If you knew something was next to impossible to code correctly, would you design it? While nothing is technically impossible, it could take a long time to iron out the kinks, and your client may not have that kind of time or patience. So knowing what is possible is a very good thing to keep in mind when your designs need to turn into functioning apps. This knowledge of how things work also translates to better designer/developer relationships and handoff. Knowing developer language and describing things in a way they understand will go a long way in improving how your ideas are received.
Also, being able to provide artifacts they can understand and recreate is more of a requirement nowadays. Developers need hex codes for colors, font family sources, container sizes in pixels, column layouts, and so on. Handing over a PDF of the site without references means the developer will have to work out on their own all the elements that make up the page you created. It most likely won't look the same. Wouldn't it be cool to watch your creations come to life? Or at least be able to put together your own website after designing it? I want to see my paper airplane in an app someday. It would be so nice to not have to rely on someone else to finalize and bring my creations to life. It is great designing a website for a client, but even better is making a working website and having it do what it was intended to do. In the time I have spent looking for a designer position, about 40% of the time a job listing wants at the very least some HTML/CSS/JS understanding. Many companies recognize the importance of the relationship between designers and developers. There are some gray areas where both fields blend together. The main reason for the split is that designers focus on the user experience and creating beautiful apps while the developer focuses on making all the cogs line up. Once upon a time, developers did all this work. It was a lot to do. When ideologies like agile came around and companies started to focus on beautiful web design and user-centered approaches, things started to change. Now it is hard to imagine the tech world without us designers. It is still important to recognize the roles we play and how we collaborate to create better products. That is what being a better candidate is all about: someone who understands both sides of the coin and works well to facilitate the progress. I think this makes a very strong case for learning to code. I certainly see no reason not to.
In fact, I kind of like coding, and that is just another reason to learn it: because it is fun! The downside is that you need to spend some time learning to code. There is already so much to learn and improve in the design sphere. I have about 7 UX/UI book recommendations already, and topics that I have barely scratched the surface on, like accessibility or HCD. Where am I going to find the time to pick up coding? Front-end coding is probably a year-long commitment to get a good understanding of the basics. That is a lot of side work! Depending on how you want to approach things, maybe it is a subject reserved for later. Another thing to consider is that splitting your attention between two different fields of study means you will not master either anytime soon. There are master's and doctorate levels of study available for either field, and that means years of study and possibly a specialized focus. You can do both, but it will take a long time. Over the course of a 40-50 year career, that seems worth it. In the short term, though, you are going to have to prioritize. Although I said the opposite a couple of paragraphs ago, it may not actually improve your employability. There are still plenty of roles out there looking for strong UX/UI designers. Some organizations already have in place the tools and structure to coordinate design and development. Design systems and cross-functional teams already solve some of these collaboration issues. My own job searches may not be reflective of the needs of many businesses out there. An instructor once told me that, long ago during a job interview with Google, he learned it was a requirement to be both a developer and a designer. Now things are different. Most big companies (and mid-size ones, or startups) have both teams in place, so you may be fine doing one job or the other. Right now, we are focused on producing beautiful platforms and digital products that solve problems for people.
There are many ways to get there and many opportunities. Overall, my preference is to have the knowledge whether it benefits me professionally or not. I think it is a choice that all designers should consider at some point in their career. Another thing to consider is what the future of tech, design, and development might look like. AI in tech may impact this to a degree. You may not need to code in the near future. There are now apps that allow you to drag and drop elements to create a website instead of writing lines of code. This has the potential to change how we look at roles within the tech sector. With shifts in ideology and new tools appearing more frequently, we should all be prepared to adapt. Facing change and being flexible in the market are "must haves." Maybe this is not of paramount importance as a junior designer, but realizing and reacting to it sooner rather than later could be the litmus test of a growing career versus a stagnating job. Consider this foray into design a life-long journey; the more knowledge and growth you can squeeze into it, the better.
OPCFW_CODE
2012 is rapidly running out, but fortunately the number of links (and the quality) continues to rise. 2013 looks like it might be a very good year for desktop Java at this rate! Keep up the great work folks:-) - As mentioned last week, there is a survey up on FX Experience about JavaFX on embedded, mobile and tablet devices. If you are reading this, regardless of whether you are a JavaFX developer or interested in targeting these kinds of devices, please (please please!) give us 5 minutes to record your thoughts in the survey – it is much appreciated. - Java 7u10 was released this week, which includes JavaFX 2.2.4. As per usual, along with the release comes a new release of JavaFX documentation. - To aid people working on the FX Game project (which is currently focused on building a Tower Defense game), Daniel Zwolenski has started up a Google group mailing list. If you’re interested in taking part, head over there, sign up and introduce yourself! - Danno Ferrin is working on an interesting application that renders text formatted with the plain-text Markdown syntax using the TextFlow feature of JavaFX. - Tom Schindl continues to be very busy on his e(fx)clipse project, with three posts written in the past week. Firstly he talks about ‘Automated Eclipse Project creation and deployment (for JavaFX)‘, secondly he posts about his recommended project structure for (JavaFX) e4 projects, and finally he has a post about how EMF-Edit-Support is coming to JavaFX via e(fx)clipse. - Pedro Duque Vieira continues work on his excellent JMetro CSS styling for JavaFX UI controls, this week adding support for JavaFX menus. - Marco Jakob has posted part six of his series of JavaFX 2 tutorials, this time looking into charts. - José Pereda has a comprehensive blog post for part one of his series of posts on ArduinoFX, a JavaFX GUI for home automation with Raspberry Pi and Arduino. 
- Geertjan Wielenga has a post about how Bengt-Erik Fröberg is reworking the UI for JFugue Music NotePad to include elements of JavaFX. - The Open-Dolphin GitHub repo now has a sample on using Dolphin to lazily populate a TableView with 100,000 rows. - Filipe Portes has posted to GitHub code for a new project of his called ModuleFX, which “embeds the JavaFX Runtime inside an OSGI bundle, allowing you to create modular JavaFX apps with all the power of OSGI framework, getting the best of Java Rich Client and Modularization worlds together.” - aoe-tk has a post on his Hatena Diary about developing multi-touch applications in JavaFX. That’s us for another week – catch you all next week on Christmas Eve (hopefully, assuming we don’t all get wiped out on December 21st 😉 )
OPCFW_CODE
Why does img_neck have no encoder layer?

Great work! But I encountered some errors when running your code, testing "PolarFormer, R101_DCN" and "PolarFormer-w/o_bev_aug, R101_DCN" with the checkpoints you posted. The neck is defined like:

img_neck=dict(
    type='FPN_TRANS',
    num_encoder=0,  # encoder is not used here
    num_decoder=3,
    num_levels=3,
    ...
),

The img_neck is defined with no encoder layer, but a transformer with zero encoder layers is not supported by PyTorch, and therefore I get these errors:

File "tools/test.py", line 222, in main
    outputs = multi_gpu_test(model, data_loader, args.tmpdir,
File "xxx/miniconda3/envs/py38/lib/python3.8/site-packages/mmdet/apis/test.py", line 109, in multi_gpu_test
    result = model(return_loss=False, rescale=True, **data)
File "xxx/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
File "xxx/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1008, in forward
    output = self._run_ddp_forward(*inputs, **kwargs)
File "xxx/miniconda3/envs/py38/lib/python3.8/site-packages/mmcv/parallel/distributed.py", line 165, in _run_ddp_forward
    return module_to_run(*inputs[0], **kwargs[0])  # type: ignore
File "xxx/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
File "xxx/miniconda3/envs/py38/lib/python3.8/site-packages/mmcv/runner/fp16_utils.py", line 208, in new_func
    return old_func(*args, **kwargs)
File "xxx/PolarFormer-main/projects/mmdet3d_plugin/models/detectors/polarformer.py", line 118, in forward
    return self.forward_test(**kwargs)
File "xxx/PolarFormer-main/projects/mmdet3d_plugin/models/detectors/polarformer.py", line 171, in forward_test
    return self.simple_test(img_metas[0], img[0], **kwargs)
File "xxx/PolarFormer-main/projects/mmdet3d_plugin/models/detectors/polarformer.py", line 192, in simple_test
    img_feats = self.extract_feat(img=img, img_metas=img_metas, gt_bboxes_3d=gt_bboxes_3d, gt_labels_3d=gt_labels_3d)
File "xxx/miniconda3/envs/py38/lib/python3.8/site-packages/mmcv/runner/fp16_utils.py", line 119, in new_func
    return old_func(*args, **kwargs)
File "xxx/PolarFormer-main/projects/mmdet3d_plugin/models/detectors/polarformer.py", line 77, in extract_feat
    img_feats = self.extract_img_feat(img, img_metas, gt_bboxes_3d, gt_labels_3d)
File "xxx/PolarFormer-main/projects/mmdet3d_plugin/models/detectors/polarformer.py", line 70, in extract_img_feat
    img_feats = self.img_neck(img_feats, img_metas, gt_bboxes_3d, gt_labels_3d)
File "xxx/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
File "xxx/PolarFormer-main/projects/mmdet3d_plugin/models/necks/fpn_trans.py", line 213, in forward
    ret_polar_ray_list = self._forward_single_camera(feature_single_cam, cam2lidar_info_single_cam, cam_intrinsic_single_cam)
File "xxx/PolarFormer-main/projects/mmdet3d_plugin/models/necks/fpn_trans.py", line 146, in _forward_single_camera
    bev_out = self.transformer_layers[i](img_columns, polar_rays)
File "xxx/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
File "xxx/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/transformer.py", line 145, in forward
    memory = self.encoder(src, mask=src_mask, src_key_padding_mask=src_key_padding_mask)
File "xxx/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
File "xxx/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/transformer.py", line 206, in forward
    first_layer = self.layers[0]
File "xxx/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/container.py", line 197, in __getitem__
    return self._modules[self._get_abs_string_index(idx)]
File "xxx/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/container.py", line 187, in _get_abs_string_index
    raise IndexError('index {} is out of range'.format(idx))
IndexError: index 0 is out of range

So, how can I solve it?

Hi! In early experiments, img_neck had the encoder; however, to reduce the computational burden, we removed it in later experiments. Since the current version of the code has many differences from the old one, I'm not very sure whether the encoder in img_neck can boost the performance of the model. Maybe you can just set num_encoder to a non-zero number to give it a try.

I guess the error you met was caused by a mismatch of the PyTorch version. We've tested the model with PyTorch 1.8.x + CUDA 10.2/11.1/11.6, and it worked well.

I find PyTorch made an update to torch.nn.Transformer in version 1.12, i.e., the code of class TransformerEncoder changed from:

# v1.8.0 (docs: https://pytorch.org/docs/1.8.0/generated/torch.nn.TransformerEncoder.html#torch.nn.TransformerEncoder.forward)
class TransformerEncoder(Module):
    def forward(self, src: Tensor, mask: Optional[Tensor] = None,
                src_key_padding_mask: Optional[Tensor] = None) -> Tensor:
        r"""Pass the input through the encoder layers in turn.

        Args:
            src: the sequence to the encoder (required).
            mask: the mask for the src sequence (optional).
            src_key_padding_mask: the mask for the src keys per batch (optional).

        Shape:
            see the docs in Transformer class.
        """
        output = src

        for mod in self.layers:
            output = mod(output, src_mask=mask, src_key_padding_mask=src_key_padding_mask)

        if self.norm is not None:
            output = self.norm(output)

        return output

to:

# v1.12 (docs: https://pytorch.org/docs/1.12/generated/torch.nn.TransformerEncoder.html#torch.nn.TransformerEncoder.forward)
class TransformerEncoder(Module):
    def forward(self, src: Tensor, mask: Optional[Tensor] = None,
                src_key_padding_mask: Optional[Tensor] = None) -> Tensor:
        r"""Pass the input through the encoder layers in turn.

        Args:
            src: the sequence to the encoder (required).
            mask: the mask for the src sequence (optional).
            src_key_padding_mask: the mask for the src keys per batch (optional).

        Shape:
            see the docs in Transformer class.
        """
        output = src
        convert_to_nested = False
        first_layer = self.layers[0]
        if isinstance(first_layer, torch.nn.TransformerEncoderLayer):
            if (not first_layer.norm_first and not first_layer.training and
                    first_layer.self_attn.batch_first and
                    first_layer.self_attn._qkv_same_embed_dim and
                    first_layer.activation_relu_or_gelu and
                    first_layer.norm1.eps == first_layer.norm2.eps and
                    src.dim() == 3 and self.enable_nested_tensor):
                if src_key_padding_mask is not None and not output.is_nested and mask is None:
                    tensor_args = (
                        src,
                        first_layer.self_attn.in_proj_weight,
                        first_layer.self_attn.in_proj_bias,
                        first_layer.self_attn.out_proj.weight,
                        first_layer.self_attn.out_proj.bias,
                        first_layer.norm1.weight,
                        first_layer.norm1.bias,
                        first_layer.norm2.weight,
                        first_layer.norm2.bias,
                        first_layer.linear1.weight,
                        first_layer.linear1.bias,
                        first_layer.linear2.weight,
                        first_layer.linear2.bias,
                    )
                    if not torch.overrides.has_torch_function(tensor_args):
                        if not torch.is_grad_enabled() or all([not x.requires_grad for x in tensor_args]):
                            if output.is_cuda or 'cpu' in str(output.device):
                                convert_to_nested = True
                                output = torch._nested_tensor_from_mask(output, src_key_padding_mask.logical_not())

        for mod in self.layers:
            if convert_to_nested:
                output = mod(output, src_mask=mask)
            else:
                output = mod(output, src_mask=mask, src_key_padding_mask=src_key_padding_mask)

        if convert_to_nested:
            output = output.to_padded_tensor(0.)

        if self.norm is not None:
            output = self.norm(output)

        return output

So num_encoder cannot be zero in PyTorch >= 1.12, because the new forward unconditionally accesses self.layers[0]. You could just downgrade PyTorch, OR, if you must use the latest version of PyTorch, you could modify the code and only use nn.TransformerDecoder, but do remember to add a LayerNorm to the image features manually to avoid a performance drop.
Sorry that I didn't indicate the PyTorch version we use in install.md. I will add it. : ) Thank you for your reply! I will try it.
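As an illustrative sketch (not the actual PolarFormer code; the builder function, names, and dimensions below are hypothetical), one way to avoid the zero-layer crash on newer PyTorch is to construct the encoder only when num_encoder > 0, and otherwise apply the LayerNorm that the maintainer recommends adding to the image features:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def build_bev_transformer(d_model=256, nhead=8, num_encoder=0, num_decoder=3):
    """Hypothetical builder: only instantiate nn.TransformerEncoder when it
    actually has layers, since TransformerEncoder.forward in PyTorch >= 1.12
    indexes self.layers[0] unconditionally."""
    encoder = None
    if num_encoder > 0:
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead)
        encoder = nn.TransformerEncoder(enc_layer, num_layers=num_encoder)
    dec_layer = nn.TransformerDecoderLayer(d_model, nhead)
    decoder = nn.TransformerDecoder(dec_layer, num_layers=num_decoder,
                                    norm=nn.LayerNorm(d_model))
    return encoder, decoder

def bev_forward(encoder, decoder, img_columns, polar_rays):
    """Cross-attend polar rays over image columns; when the encoder is absent,
    normalize the raw image features instead of encoding them."""
    if encoder is not None:
        memory = encoder(img_columns)
    else:
        memory = F.layer_norm(img_columns, img_columns.shape[-1:])
    return decoder(polar_rays, memory)
```

With num_encoder=0 this path never touches nn.TransformerEncoder at all, so it behaves the same on PyTorch 1.8 and 1.12+.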
GITHUB_ARCHIVE
How to use a local Service instance to call methods outside of onServiceConnected() that are declared in ILocalService.java

Regarding the Local Service Sample: I have succeeded in defining methods declared in my ILocalService.java, but I don't know how to call these methods outside of an activity's onServiceConnected(). I can only call them from within onServiceConnected(), which does not seem to be of much use. Am I limited to that? Here is my ILocalService.java:

/**************************************************************************************************
 * Filename: ILocalService.java
 * Project name: Local Service Sample
 * Application name: Local Service
 * Description: This file contains an example interface for LocalService
 **************************************************************************************************/
package com.marie.localservicesample;

public interface ILocalService {
    public LocalService getService();
    public int getStatusCode();
}

My assumption is that I should be able to call methods declared in my ILocalService.java, and that these calls should not be limited to onServiceConnected() in an activity that is bound to my LocalService.java. How would I do this?

If you use the service reference outside of onServiceConnected(), it will usually throw a NullPointerException. Make a button in your activity and try to use the service reference in its onClick, clicking the button after waiting at least 10 seconds. It will work fine. This indicates that the service takes time to bind to the local reference variable; onServiceConnected() is only there to tell you that the service got connected and is now usable. To properly use the service outside of onServiceConnected(), you can maintain a boolean flag set in onServiceConnected(), and use the service anywhere you want based on that flag's value. This is as far as I know. Hope it helps :)

thanks for the answer. I may have to do that. But I think I found a solution that works for me for now.
great then :). If you found a better answer, do post that also, so others can also get the benefit. @Javantor it won't let me post an answer to my own question for like 8 hours. But here's what I found: if I declare mBoundService as "private ILocalService mBoundService = null;" and then in onServiceConnected() assign it as follows: "mBoundService = (ILocalService) service;" then I get a Service instance that allows me to call any method declared in my ILocalService.java from anywhere in the activity bound to the local Service. But I think your answer is good, so I will accept it and vote it up. If you liked my question, may I ask you to vote it up? sure. It was a good question, as I faced the same thing when I used services for the first time.
STACK_EXCHANGE
The workflow did not pull the remote image to create a docker container when it was running.

I successfully imported the piece I made into the platform and created the workflow. An error occurred during runtime, but the logs did not report an error message. After troubleshooting, I found that the piece I made did not create a docker container (the image publish was successful). So I copied ml_domino_pieces and only modified the REGISTRY_NAME in config.toml to my own account name (this is just to ensure that the image publish succeeds); the rest was not modified. After importing to the platform, it is still the same: no docker container is created when running the workflow. No errors were reported throughout the process, but the docker container did not run and the cloud image was not pulled. What could be the reason? Are there any other operations required for the piece I made to work properly? Thanks for your answer!

Hey @lanzhixi, sorry you are facing issues in creating pieces, I'll try to help you debug what is happening. What environment are you running Domino in? Are you running locally using docker compose? When you run the workflow, what happens with the piece and with your workflow? Do they go to the failed state, or do they get stuck in running or some other state? Can you send me a screenshot of your workflow after running it? Your ml_domino_pieces is just a copy of ours, right? If so, can you make it public for a while so I can try to reproduce it?

Hey @vinicvaz, thank you for your reminder, I will standardize the way I ask questions next time. In my latest attempt I successfully ran the piece I made. The reason is that I am used to naming with uppercase letters, and in order to make the image build succeed, all-lowercase names are filled in config.toml. This may cause the platform to not find the corresponding image based on the repository name at all, so it appears that the cloud image is not being pulled.
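The naming constraint behind this is a general Docker rule, not a Domino one: repository names must be lowercase, so an uppercase name in config.toml can never match a published image. A small, hypothetical validator (not part of Domino; the regex is a simplified form of Docker's rule) sketches the check:

```python
import re

# Simplified form of Docker's repository-name rule: each path component is
# lowercase alphanumerics joined by '.', '_', '__' or runs of '-'.
# This helper is illustrative only, not part of the Domino codebase.
_COMPONENT = re.compile(r"[a-z0-9]+(?:(?:\.|__|_|-+)[a-z0-9]+)*$")

def is_valid_repo_component(name: str) -> bool:
    """Check one path component of an image name, e.g. the REGISTRY_NAME."""
    return _COMPONENT.match(name) is not None

def suggest(name: str) -> str:
    """Suggest a valid variant by lowercasing (covers the uppercase pitfall)."""
    return name.lower()
```

Running such a check at piece-publish time would turn the silent "image never pulled" failure into an immediate, explainable error.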
thanks for spotting the issue @lanzhixi! Can you confirm you're now able to run your Pieces?

Thank you for your continued attention. I am sure that if I don't make the mistakes I made before, I can successfully create a piece and successfully import it into Domino to run. This is so cool! But in my recent attempts I have encountered some problems, and I would like to ask you for help. After successfully trying to use the template, my confidence increased and I started to create my own lightGBMPiece. However, the lightGBMPiece I wrote had problems running. After adding tests, I found that the problem was not in the code. I'd like to ask you to help me take a look. thanks!

Hey @lanzhixi, thanks for reporting this. Yes, you are right, the tests will not run for your piece, and there are a few reasons why:

- I think there is a missing argument in your lightgbm Dockerfile: you should include RUN apt-get install libgomp1. I tried to run it locally after building and got the error ImportError: libgomp.so.1: cannot open shared object file: No such file or directory (https://stackoverflow.com/questions/43764624/importerror-libgomp-so-1-cannot-open-shared-object-file-no-such-file-or-direc). This command should fix it.
- Not sure, but I think you have to fix some things in your code, like the missing evaluation data, which I think is required for early stopping.
- The third thing is our fault. The way Domino runs tests in the GitHub environment is still a bit tricky; I'll try to explain what happens and why. First, imagine a scenario with multiple pieces and a lot of dependency conflicts in the same repository. In that scenario, installing all the pieces' dependencies in the GitHub Actions environment would be impossible, so to avoid that we build the docker images in the GitHub Actions environment and run each image independently, where each image listens to the test environment.
Basically what we are doing is separating the test environment and the piece code environment: tests run in the GitHub Actions root environment, and the piece code runs in a docker container inside that root environment. The way we do that is basically done in 3 steps:

1. First we build all the images and save a map of each piece name to its corresponding image. Example: LightGBMTrainPiece: ghcr.io/lanzhixi/piece_test:0.1.5-group0
2. Based on the piece name defined in the test, we run your built docker image, starting a really tiny HTTP server in your piece container.
3. This HTTP server listens for the request from the piece_dry_run function, passes it to your piece_function, and returns the results to the test function.

This is the way we've found to run the pieces in their isolated environments. The problems with that are:

- We can't mock internal functions.
- We can't read files from outside of the piece build. I guess this is one of the problems you're facing. You can add your test file (iris.csv) to your build context (saving it in the piece folder should be enough), and in your test you can use the relative path to it, or even the absolute path in the container, like /home/domino/pieces_repository/pieces/LightGBMTrainPiece/iris.csv.
- It is hard to debug.

Additional information / Alternative Solution

- You can always test your piece in your local environment before sending it to GitHub; it will help you debug things like code errors (not docker image problems like point 1).
- You can skip running a piece's test in a specific environment using the skip_envs decorator:

from domino.testing.utils import skip_envs

@skip_envs('github')
def test_my_piece():
    ...

This might be useful for tests that you can't run yet in GitHub Actions but want to run locally, like here.

- You can always develop your piece function as a separate function, or even in a Jupyter notebook, and after ensuring all the code is right you can bring it into the Domino framework and add tests to it.
I know this is a lot of information, so if you have any questions feel free to send them here.

Hey @vinicvaz, thank you very much for your detailed answer, it's very useful to me! I successfully ran my piece by modifying the Dockerfile. But I had to add the skip_envs decorator to the test file to skip the test, otherwise the test would still report an error. That's all for now. I will continue to improve LightGBM-related pieces in the future. If necessary, I will be happy to submit them to the ml_domino_pieces repository. Thanks!

awesome job @lanzhixi! Let us know how your ML pieces evolve, we would be happy to integrate them into existing repos or to just list them in the open source gallery! I suggest you also take a look into how to display results from your piece in the GUI, it can be very useful for ML pieces such as LightGBM: https://docs.domino-workflows.io/pieces/create_pieces#piecepy Example of a piece producing a Plotly image: https://github.com/Tauffer-Consulting/ml_domino_pieces/blob/main/pieces/PCAInferencePiece/piece.py
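For reference, the libgomp1 fix discussed in this thread could look like the Dockerfile fragment below. This is a hedged sketch, not the actual piece's Dockerfile: the surrounding base image and pinned versions are up to the piece author. libgomp1 provides libgomp.so.1, which the lightgbm wheel loads at import time.

```dockerfile
# Hypothetical fragment for a LightGBM piece image (not the thread's actual file).
# libgomp1 supplies libgomp.so.1, required by the lightgbm Python package.
RUN apt-get update \
    && apt-get install -y --no-install-recommends libgomp1 \
    && rm -rf /var/lib/apt/lists/*
RUN pip install lightgbm
```

Cleaning the apt lists afterwards keeps the image layer small; --no-install-recommends avoids pulling in unrelated packages.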
GITHUB_ARCHIVE
Obesity afflicts over 13 million American children, according to the U.S. Centers for Disease Control and Prevention. The risk of childhood obesity also increases with age, with over 20% of all American 12- to 19-year-olds classified as obese. Furthermore, study after study has found that childhood obesity dramatically increases the risk of dangerous health consequences in adulthood. COVID-19, and the lockdowns imposed to combat it, have made this complex and far-reaching problem even worse! Schools all around the U.S. have closed, which means no recess, no sports, and no gym class. Many communities have implemented strict social distancing guidelines which have closed outdoor spaces like playgrounds and parks. Unfortunately, these measures have accidentally encouraged kids to spend even more time just sitting around indoors. In fact, University of Missouri sociology professor Joseph Workman estimates that just six months of school closure could result in a 4.86% increase in childhood obesity. I built the exARcise web app to help solve this problem, by giving kids a creative way to have fun exercising in their own homes.

What it does

ExARcise uses augmented reality to gamify exercise for kids, in order to encourage them to exercise. The app is a platform that facilitates the creation of interesting and fun real-world activities by providing easily embeddable augmented reality experiences. The core of the app is the AR-enabled QR codes, which I power through echoAR. The QR code has the information the app needs to pull up the AR player, which then displays an exercise tutorial/example video onto the marker. This combination tech stack allows users to use the QR codes to create their own activities, like scavenger hunts, dice, or bean bag toss, and display the information in the real world. I have also used gamification to encourage exercise by awarding badges for doing specific exercises, and having the user set daily exercise goals.
How I built it

Since exARcise is meant to be accessed via a mobile device, responsiveness is a key design feature. Therefore, I chose to build a React app, styled in Material UI, so the website is accessible and easy to use on devices of all sizes. It's also hosted on Google Firebase Hosting, to guarantee high availability and to automatically scale server resources up and down based on demand. For the AR portion of the app, I used echoAR, because it allows me to easily manage and track my AR experiences and change them in the future. To use echoAR's built-in rendering system, the app has a two-part embedded system for reading and viewing the experiences. The first step is scanning the QR code using the reader built into the app. This triggers a popup with the appropriate viewer for the experience the user is looking to have. From there, echoAR allows me to embed and play the video.

Challenges I ran into

One major challenge I came across was encapsulating echoAR's full rendering process into the app. Traditionally, the user would scan an echoAR QR code outside of any app, and the QR code would take them to the web viewer for the media. While that is great for a poster or flyer that is isolated from any specific technology, I specifically wanted people to view the exercises in the app so I could reward them for their exercise. The result was a slightly roundabout process on my end, but the end result for the user is intuitive and simple, and it prevents them from ever needing to leave the app.

Accomplishments that I'm proud of

The main thing I am proud of is getting the AR to work so well! I'm really proud of the simple process I created for the user.

What's next for exARcise

In the future, I want to expand exARcise in the following ways:
- Expand out the library of exercise videos, exercise badges, etc.
- Enable the user to create personalized workout routines and sets
- Add mindfulness and wellbeing activities, like yoga poses
OPCFW_CODE
[TensorRT] INTERNAL ERROR: Assertion failed: status == STATUS_SUCCESS

Code dir: closed/NVIDIA
Test benchmark: ssd-resnet34
Scenarios: Server

When I run the commands make run RUN_ARGS="--benchmarks=ssd-resnet34 --scenarios=Server --test_mode=PerformanceOnly" and make run RUN_ARGS="--benchmarks=ssd-resnet34 --scenarios=Server --config_ver=triton --test_mode=PerformanceOnly", they both produce this kind of error:

[2021-07-01 10:26:57,901 builder.py:149 INFO] Building ./build/engines/Tesla P4x1/ssd-resnet34/Server/ssd-resnet34-Server-gpu-b2-int8.default.plan
[TensorRT] WARNING: Calibration Profile is not defined. Runing calibration with Profile 0
[TensorRT] INFO: Detected 1 inputs and 1 output network tensors.
[TensorRT] INFO: Starting Calibration.
  Calibrating with batch 0
[TensorRT] INTERNAL ERROR: Assertion failed: status == STATUS_SUCCESS
/work/code/plugin/NMSOptPlugin/src/nmsPluginOpt.cpp:183
Aborting...

Traceback (most recent call last):
  File "code/main.py", line 703, in <module>
    main(main_args, system)
  File "code/main.py", line 634, in main
    launch_handle_generate_engine(*_gen_args, **_gen_kwargs)
  File "code/main.py", line 62, in launch_handle_generate_engine
    raise RuntimeError("Building engines failed!")
RuntimeError: Building engines failed!
Makefile:606: recipe for target 'generate_engines' failed
make[1]: *** [generate_engines] Error 1
make[1]: Leaving directory '/work'
Makefile:600: recipe for target 'run' failed
make: *** [run] Error 2

I have built the test docker image successfully by running make prebuild, and already built the required libraries and TensorRT plugins successfully by running make build. My GPU is a Tesla P4, and here are my config files:

code/common/system_list.py:

P4 = SystemClass("Tesla P4", ["Tesla P4"], ["1BB3"], Architecture.Unknown, [1])

configs/ssd-resnet34/Server/config.json:

"Tesla P4": {
    "config_ver": {
        "triton": {
            "instance_group_count": 4,
            "server_target_qps": 110,
            "use_triton": true
        }
    },
    "deque_timeout_usec": 2000,
    "gpu_batch_size": 2,
    "gpu_inference_streams": 4,
    "server_target_qps": 110,
    "use_cuda_thread_per_device": false
},

So where is the problem? Is there something wrong with the config file, the inference data, or the model?

You need to generate your engines first: make generate_engines RUN_ARGS="--benchmarks= --scenarios= --config_ver=default,high_accuracy [OTHER FLAGS]"

Oops, I just saw it's a P4... so good luck, you won't be able to do anything with this. In my experience only Turing and Ampere can reproduce the NVIDIA benchmark, because it uses mixed-precision cores for INT8 and FP16. I don't think the P4 has the FP16/INT8 support required, so it won't support this. I remember going into the Triton code to change the IF statement for the GPU family, and only Ampere/Turing were supported. Here is an example of why it won't work: https://github.com/NVIDIA/TensorRT/blob/eb5de99b523c76c2f3ae997855ad86d3a1e86a31/plugin/bertQKVToContextPlugin/qkvToContextInt8InterleavedPlugin.cpp#L65

Thanks for your reply! Now I know why the P4 doesn't work. I changed my GPU from a P4 to a T4, and then I was able to run the code properly.
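The linked plugin constructor is essentially an SM-version gate: it checks the GPU's compute capability against a short allow-list before building its INT8 kernels, and asserts otherwise. A simplified, hypothetical restatement of that check (the exact allow-list in TensorRT may differ from the one assumed here):

```python
# Hypothetical sketch of the SM gate pattern used by TensorRT plugins such as
# qkvToContextInt8InterleavedPlugin: Xavier (sm_72), Turing (sm_75), and
# Ampere (sm_80/sm_86) class GPUs pass, while Pascal (sm_61, e.g. Tesla P4)
# trips the assertion. The set below is an assumption, not TensorRT's source.
INT8_INTERLEAVED_SMS = {72, 75, 80, 86}

def supports_int8_interleaved(major: int, minor: int) -> bool:
    """Return True if a GPU with this compute capability would pass the gate."""
    return major * 10 + minor in INT8_INTERLEAVED_SMS
```

Tesla P4 is compute capability 6.1 and Tesla T4 is 7.5, which matches the outcome reported above: the P4 trips the assertion while the T4 runs.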
GITHUB_ARCHIVE
Posted by Jessica Dene Earley-Cha, Developer Relations Engineer We’re pleased to announce the General Availability (GA) of App Actions using shortcuts.xml, part of the Android shortcuts framework. By using the Shortcuts API, it’s never been easier to add a layer of voice interaction to your apps, by simply using the Android tooling, platform, and features you already know. This release means your shortcuts.xml implementations are now fully supported through our support channels. App Actions let users launch and control Android apps with their voice, using Google Assistant. At Google I/O 2021, we released a beta of App Actions that enabled developers to implement App Actions using the Android shortcuts framework, lowering the development cost of voice enabling apps by leveraging a common framework many developers were already familiar with. Throughout the beta period, we listened to developer feedback and made several improvements to the API, developer tooling, and Assistant comprehension and accuracy of voice commands. Over the past year we’ve added new features, like the ability to fulfill user voice requests using Android widgets, and in-app voice control. The set of built-in intents supported by App Actions has also expanded to include travel and parking intents suited for use in Android for Cars apps. See how Strava implemented App Actions to provide a voice-forward experience to their users. I’m new! How do I get started? New App Actions developers are encouraged to try the App Actions learning pathway. This learning pathway is a complete training course that prepares new and seasoned Android developers to design and implement voice-enabled app experiences. After completing the pathway, you’ll earn the App Actions with Google Assistant badge on your developer profile. Check out the latest App Actions news from I/O We are excited to have had several sessions focusing on App Actions this year at Google I/O. 
- How Google Assistant’s architecture powers voice features in your apps: A technical deep dive on Assistant's architecture, and how you can add voice to your apps with a few lines of code. - Google Assistant functionality across Android devices: Assistant has integration paths on many Android devices. We'll talk about how you can add voice features to your app. - Car Talk: Assistant and Android for Cars: Take a closer look at how to add voice functionality to Android Auto apps. - Workshop: Integrate Android widgets with Google Assistant: A walkthrough of a codelab on how to integrate your Android widgets as fulfillment for users' voice requests. My app already uses App Actions. How do I stay supported? Developers with existing App Actions implementations that use actions.xml are encouraged to migrate their implementation before the end of the support period on March 31st, 2023. Implementations that leveraged shortcuts.xml during the beta period will continue to work, as they have been without any changes required, until March 31st, 2023. I have more questions! There are several ways to get in touch with the App Actions team and interact with the developer community.
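For readers new to the framework, a minimal shortcuts.xml capability might look like the hypothetical fragment below. The package, class, and parameter key are placeholders invented for illustration; actions.intent.START_EXERCISE is one of the built-in intents App Actions supports.

```xml
<!-- Hypothetical shortcuts.xml fragment: exposes a START_EXERCISE built-in
     intent so Assistant can launch the app with the spoken exercise name. -->
<shortcuts xmlns:android="http://schemas.android.com/apk/res/android">
  <capability android:name="actions.intent.START_EXERCISE">
    <intent
      android:action="android.intent.action.VIEW"
      android:targetPackage="com.example.fitapp"
      android:targetClass="com.example.fitapp.ExerciseActivity">
      <parameter
        android:name="exercise.name"
        android:key="exerciseName" />
    </intent>
  </capability>
</shortcuts>
```

When a user says something like "Hey Google, start a run on Example Fit", Assistant matches the capability and delivers the exercise name to the target activity as the extra keyed by the parameter.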
OPCFW_CODE
using chapter_08.Engine.States;
using chapter_08.States.Gameplay;
using Microsoft.Xna.Framework.Audio;
using System;
using System.Collections.Generic;

namespace chapter_08.Engine.Sound
{
    public class SoundManager
    {
        private int _soundtrackIndex = -1;
        private List<SoundEffectInstance> _soundtracks = new List<SoundEffectInstance>();
        private Dictionary<Type, SoundBankItem> _soundBank = new Dictionary<Type, SoundBankItem>();

        public void SetSoundtrack(List<SoundEffectInstance> tracks)
        {
            _soundtracks = tracks;
            // Start at the last track so the first call to PlaySoundtrack()
            // wraps around and plays track 0.
            _soundtrackIndex = _soundtracks.Count - 1;
        }

        // Play the sound effect registered for this event type, if any.
        public void OnNotify(BaseGameStateEvent gameEvent)
        {
            if (_soundBank.ContainsKey(gameEvent.GetType()))
            {
                var sound = _soundBank[gameEvent.GetType()];
                sound.Sound.Play(sound.Attributes.Volume, sound.Attributes.Pitch, sound.Attributes.Pan);
            }
        }

        // When the current track has stopped, start the next one, looping the playlist.
        public void PlaySoundtrack()
        {
            var nbTracks = _soundtracks.Count;
            if (nbTracks <= 0)
            {
                return;
            }

            var currentTrack = _soundtracks[_soundtrackIndex];
            var nextTrack = _soundtracks[(_soundtrackIndex + 1) % nbTracks];

            if (currentTrack.State == SoundState.Stopped)
            {
                nextTrack.Play();
                _soundtrackIndex++;

                if (_soundtrackIndex >= _soundtracks.Count)
                {
                    _soundtrackIndex = 0;
                }
            }
        }

        public void RegisterSound(BaseGameStateEvent gameEvent, SoundEffect sound)
        {
            RegisterSound(gameEvent, sound, 1.0f, 0.0f, 0.0f);
        }

        internal void RegisterSound(BaseGameStateEvent gameEvent, SoundEffect sound, float volume, float pitch, float pan)
        {
            _soundBank.Add(gameEvent.GetType(), new SoundBankItem(sound, new SoundAttributes(volume, pitch, pan)));
        }
    }
}
Matlab Builder NE Q&A, and why R blows the doors off it for quant and model development or strategy building

A YouTube video user asked:

Matlab Builder NE

I like your videos about Matlab and how it can be integrated with .NET. I’m not that much of a programmer, but here’s what I’d like to do:
1. Use “deploytool” to create a .NET assembly
2. Integrate it in a .NET application to do calculations on an array containing a financial time series.

Unfortunately I get a bit confused about when and how to use MWArray, MWNumericArray (etc.) when declaring the variables and arrays I need for the calculation. Below is the code for the “magic square” example. I’m wondering how I need to change this “prototype code” for my purposes – taking an array of financial prices, passing it to the Matlab function and returning the result. Can you help me? Thank you very much…
andrew8w

Dim obj As MLTestClass = Nothing
Dim input As MWNumericArray = Nothing
Dim output As MWNumericArray = Nothing
Dim result As MWArray() = Nothing
Try
    obj = New MLTestClass()
    input = 4
    result = obj.magic(1, input)
    output = DirectCast(result(0), MWNumericArray)
    Console.WriteLine(output)
Catch generatedExceptionName As Exception
    Throw
End Try

My answer is: Use R. There are many better ways to integrate into an external language, including .NET languages like C# or Visual C++, or even Java. I find R is easier with certain R packages like RCaller. Also, it is free, which is why I switched.

Why you will never see discounts to our quant membership on learning trading technology, algo, strategy, and model development

Successful indie traders!! I get comments or queries like the one below all the time:

Can I get access to your site for the first month for $25 (4 kids!)? I’m pretty new to the quant-level work … think your stuff might be temporarily way over my head. Not sure what value it’s going to bring me or if it will just be complete overload, but I would like to start into it with earnest.
If I can get into it, I think the $75 monthly fee or whatever it was would be very much worth it and I could justify it. Let me know? Thanks.

This is pretty easy to explain. View this video on YouTube to see our dirt-cheap rate for the grand tour of the membership: First of all, we never put a gun to your head to join. Neither do we do hard sells to get people to join. Our membership is growing at a nice pace. Once we get past our model R code walkthroughs, the membership will start learning about our selected and potential open source trading platform in Linux and C++. We will also demo the clustering and parallelization capabilities using open source. Also, we will include a complete evaluation of which model forecasting techniques show the most profitable potential. If all these items are still not enough to join at the highly affordable rate, well, maybe you should either get out of trading or just walk into a casino. You will never make money in trading if you are not willing to make a little investment. Pretty simple, I would think. We are in the midst of getting our public and private membership webinars going as well, in order to build a community. As you can tell, we don’t discount, but we will be increasing the rates exponentially as we start focusing on the higher-end market that is willing to pay $100 per month for our service. Don’t believe me this exists? Please visit this survey to see the results. http://quantlabs.net/surveys/ So consider the membership already discounted quite heavily; as it moves forward, the rate will be going up. Once you join, you will continue paying that rate forever as long as you remain a member. Plus, our payment administrator platform does not allow us to discount anyhow. We can only offer coupons, but there may be a rare promotional offer during our public webinars, so you will need to be an active participant to know about those. Find info like that by joining our Meetup group.
http://www.meetup.com/quant-finance/

Performance attribution (also known as benchmarking) described in our algo, model development, strategy development online course for a quant or HFT

This has been posted in the new online course for our members. Get access to it by going here: http://quantlabs.net/membership.htm

Right on! Java development done, so now I can focus on model development and research again with Matlab

I just joined an R Meetup group, so I will find out what this thing is about in the next month. Also, I am very happy to get back to this process with Matlab. It is very critical for expanding my premium membership service.

Big Data Sets & Hadoop – Backtesting and/or Model Development

Anyone out there using Hadoop as part of their backtesting repository? Any good lessons or experiences in using Hadoop?

That is what we are going to use. We have done some research in that field, including consultations with search engine developers that are using Hadoop, and found out that it will fully suit us for storage and cluster calculations. The project is in the development stage at the moment, so I can’t tell right now how Hadoop performs, but it was designed for hard calculations.

18 hours ago

Did you look at any in-memory databases like VoltDB? Or did Hadoop have more in the way of calculations?

Hadoop should only be used if you need to spread the calculations between multiple nodes. This is where the speed is. You can easily connect as many nodes as you need (they can be virtual machines, dedicated servers, workstations), and it can store a significant amount of data with real-time backup.
We’ve talked a lot about our fast game streaming technology; however, today we wanted to talk about something even more critical: security. We employ a plethora of techniques to help ensure you can always securely connect to your system both inside and outside your home. One of the rules we apply is only serving our content on what are known as “secure origins”, i.e. HTTPS. You’ll find that no page on Rainway makes calls to non-secure origins. What that means is that your traffic is entirely encrypted right from the start of loading, so no third parties can see what you are doing. This, however, does present a challenge. You see, Rainway is a self-hosted application, meaning that the “server” portion lives on a user's computer. This is more or less fine when we attempt a P2P connection over WebRTC because, when successful, it utilizes Datagram Transport Layer Security (DTLS), Secure Real-time Transport Protocol (SRTP), and the Advanced Encryption Standard (AES). However, when it fails, we fall back to WebSockets. Browsers enforce that a page loaded from HTTPS must make any WebSocket connections to a WSS (WebSocket Secure) endpoint, and this requires a valid SSL certificate. The solution seems straightforward enough: purchase a certificate and use that on all users' servers, right? The issue is that this would require shipping the private key to all users, making all encryption pointless and leading to the certificate being revoked; we also cannot ask users to register their own domain and buy a certificate just to play their games.

I love the smell of uncatchable errors in the morning

Enter Lochcert, a PKI (public key infrastructure) we built on top of Let’s Encrypt so that we can provide unique SSL certificates to all of our users in real time. First, we sync a user’s known IPs to our routing service cya.gg, which currently manages 100,000 different addresses.
Once we’ve validated that a connection can be made, we generate a cryptographically secure token to be used as their DNS hostname (jq9k7qs64d29zawm.cya.gg) and bind the relevant A and AAAA records to it. We then submit a request to Lochcert, which validates the newly created DNS entries, and within a few seconds we should have a valid and unique SSL certificate. The Rainway instance then pulls down the certificate and password-protects it before storing it in the machine's local store. Secure game streaming in the browser. Now we are doing wide-scale automated SSL/TLS certificate deployment with zero friction, and the result is an effortless, state-of-the-art secure deployment of SSL/TLS by default for tens of thousands of users. We are working to further improve things by implementing dynamic DNS to avoid propagation delays, and wildcard certificates to support multiple machines under a single user. It is hard to believe that in 2018 there are services providing game streaming that do not encrypt your traffic, so we set the bar higher for ourselves and for you.
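The hostname shown above looks like 16 lowercase alphanumeric characters. Generating such a DNS-safe token is straightforward with a CSPRNG; the sketch below is an illustration only (the alphabet and length are inferred from the single example, and Rainway's actual scheme is not published):

```python
import secrets
import string

# Hypothetical sketch: build a 16-character, DNS-safe hostname label
# like "jq9k7qs64d29zawm" using a cryptographically secure RNG.
# Alphabet and length are assumptions inferred from the example above.
ALPHABET = string.ascii_lowercase + string.digits

def dns_token(length: int = 16) -> str:
    # secrets.choice draws from the OS CSPRNG, unlike random.choice.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

hostname = f"{dns_token()}.cya.gg"
print(hostname)  # random each run, e.g. "<16 chars>.cya.gg"
```

Lowercase letters and digits keep the label valid under DNS hostname rules, and 16 characters from a 36-symbol alphabet gives roughly 82 bits of entropy, enough to make collisions and guessing impractical.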
How to create an Android library and distribute it as a single jar without resources and an AndroidManifest file

I want to create an Android library project and distribute it as a jar file. I read some blog posts about how to create a library project in Android, but in all of them I have to distribute the Android resource files with the jar file. Can anyone suggest how to create an Android library project as a Java project, so that I can distribute it as a single jar file without resources and an AndroidManifest file?

If your Android library project uses resources, you have no choice but to ship those resources. If your Android library project does not use resources, you create a JAR file the same way as you would for any other Java project (e.g., <jar> task in Ant). See https://github.com/commonsguy/cwac-prefs for an example.

Okay, thanks for the reply. I guess in the future Google will provide an Android SDK tool to create one jar file with all the resources. I have one more question: suppose I have created a Java project; in that case I will not have an AndroidManifest file, so will this library still be supported?

@NagendraSrivastava: I am sorry, but I do not understand your last comment.

So how do I 'ship' those resources? In my main project, I have imported the library project jar file into my libs, and copied the library project resources into my main project resources. But I get the exception java.lang.NoClassDefFoundError: com.example.libraryproj.R$layout -- so it seems like the jar library can't resolve the resources that have been copied into the main project.

@wildabeast: "So how do I 'ship' those resources?" -- however you want, so long as you are shipping the full source code and resources of the Android library project. I use a ZIP file. "I have imported the library project jar file" -- there is no "library project jar file" suitable for distribution at this time.
This is an objective of a revamped build system, something they are working on and which will hopefully be a reality in 2013.

Ah, ok -- so if it uses resources, it has to be distributed in source code form. In that case, can the source code be imported directly into the project (rather than being a separate library project on its own)? I tried adding the library project's source to my main project src, but it can't resolve the R class (I'm guessing because it's in a different 'package').

@wildabeast: "In that case, can the source code be imported directly into the project (rather than being a separate library project on its own)?" -- I have never tried that, sorry.
import { validateEmail, validatePassword, validateLogin } from './validation';

describe('validateEmail', () => {
  it('should return an empty list when email is valid', () => {
    const email = 'test@test.com';
    const validationResult = validateEmail(email);
    expect(validationResult.length).toBe(0);
  });

  it('should return a list containing error message when email is invalid', () => {
    const email = 'totallyNotAnEmail';
    const validationResult = validateEmail(email);
    expect(validationResult.length).toBe(1);
    expect(validationResult[0]).toEqual('this doesn\'t look like a valid email');
  });
});

describe('validatePassword', () => {
  it('should return an empty list when password is valid', () => {
    const password = 'P4sSW0Rd#1';
    const validationResult = validatePassword(password);
    expect(validationResult.length).toBe(0);
  });

  it('should return a list containing error message when password is too short', () => {
    const password = 'abc';
    const validationResult = validatePassword(password);
    expect(validationResult.length).toBe(1);
    expect(validationResult[0]).toEqual('at least 5 characters required!');
  });
});

describe('validateLogin', () => {
  it('should return an empty list when login is valid', () => {
    const login = 'JFK';
    const validationResult = validateLogin(login);
    expect(validationResult.length).toBe(0);
  });

  it('should return a list containing error message when login is too short', () => {
    const login = 'MU';
    const validationResult = validateLogin(login);
    expect(validationResult.length).toBe(1);
    expect(validationResult[0]).toEqual('at least 3 characters required!');
  });
});
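The specs imply a contract for the './validation' module: each validator returns an empty list on success or a list of error messages on failure. A Python sketch of validators satisfying that contract (illustrative only; the real module is JavaScript, and the email regex here is an assumption inferred from the two test cases):

```python
import re

# Illustrative re-implementation of the validators the Jest specs exercise.
# Rules are inferred solely from the test cases: email shape, password of
# at least 5 characters, login of at least 3 characters.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")  # assumed pattern

def validate_email(email):
    return [] if EMAIL_RE.match(email) else ["this doesn't look like a valid email"]

def validate_password(password):
    return [] if len(password) >= 5 else ["at least 5 characters required!"]

def validate_login(login):
    return [] if len(login) >= 3 else ["at least 3 characters required!"]

print(validate_email("test@test.com"))      # []
print(validate_email("totallyNotAnEmail"))  # ["this doesn't look like a valid email"]
```

Returning a list (rather than a boolean or an exception) lets a form aggregate every failure message for display, which is exactly what the `length` assertions in the specs check.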
How can I set JVM memory in Solr 5.0

How can I increase heap space memory in Solr 5.0? I just want to set the minimum and maximum heap space memory of Solr.

This info is in the wiki: https://cwiki.apache.org/confluence/display/solr/Taking+Solr+to+Production#TakingSolrtoProduction-MemoryandGCSettings

SOLR_JAVA_MEM="-Xms10g -Xmx10g"

How about using the -m option:

./solr restart -m 1g

From ./solr --help:

-m  Sets the min (-Xms) and max (-Xmx) heap size for the JVM, such as: -m 4g results in: -Xms4g -Xmx4g; by default, this script sets the heap size to 512m

@RonakShah Try the below on your console:

./solr --help

-m <memory>  Sets the min (-Xms) and max (-Xmx) heap size for the JVM, such as: -m 4g results in: -Xms4g -Xmx4g; by default, this script sets the heap size to 512m

This exactly answers the question.

@RonakShah This is indeed an answer to the question. It is describing another way of setting JVM heap space, which is what the question is about! Please do not criticize a good answer.

Follow the steps below to increase the JVM memory in Solr:
- Open solr.in.cmd in any text editor (path: bin/solr.in.cmd)
- Add set SOLR_JAVA_MEM=-Xms1g -Xmx1g (this sets the minimum and maximum Java heap memory to 1 GB; however, you will only be able to see 981.38 MB in the Solr dashboard)
- Save the solr.in.cmd file and restart Solr

For CentOS 7 the modification is in the file bin/solr, e.g. to increase it to 5G:

vim /opt/solr/bin/solr

Edit the block of code:

JAVA_MEM_OPTS=()
if [ -z "$SOLR_HEAP" ] && [ -n "$SOLR_JAVA_MEM" ]; then
  JAVA_MEM_OPTS=($SOLR_JAVA_MEM)
else
  SOLR_HEAP="${SOLR_HEAP:-512m}"
  JAVA_MEM_OPTS=("-Xms5g" "-Xmx5g")
fi

Thanks for the post... Worked for me with Solr 8.3 on CentOS as well. Methods in the other answers above did not.

Editing a program's defaults is not the way. You'll lose this when you update. Edit the appropriate config files, please. In the case of Solr 7, there are commented-out directives you can customise in the default /etc/default/solr.in.sh.
Following on from my previous review of a 100-bank low-cost USB floppy emulator and testing of that emulator, I noticed that another floppy emulator had appeared on the eBay market. This one is the GoTek System SFR1M44-U100K 1000-bank USB floppy emulator, also attractively priced at around US$20.

The Unit and Inclusions

The device itself has the letters GOTEK in the moulding, and features a three-digit seven-segment LED display (still covered with plastic tape), a USB port, two push buttons and an access LED. The green LED lights up whenever the drive is active. The right button increments the “ones” position, the left button increments the “tens” position, and pressing both buttons momentarily increments the “hundreds” position. Holding both may engage the auto-format function, which will erase all content. In use, this drive allows you to increment the bank during accesses, but this may lead to data loss. The increment function appears to toggle the change line to signal a disk removal to the controller. The other emulator locks out increment/decrement during accesses. The casing itself differs from the other emulator, with a distinctive “step” shape, and the use of only three screws to secure the unit together. There is also provision for branding and other button labels, which are not provided. The step shape is clearly visible here – and may be an advantage when trying to shoehorn the device into non-standard floppy drive enclosures, although those often have non-standard interfaces as well. Exposed at the rear are jumpers – some are labelled for drive select, and others are undocumented. There are also unpopulated through-holes, also intended for configuration by jumper. The 34-pin interface has the keying pin (pin 3) still in place (just like the other emulator I had); there’s no harm clipping this off for cables with the hole blocked. It’s probably a good time to note that this emulator does come with an 8cm CD (groan!)
however, the tools provided are arranged arbitrarily, are poor, and (some) are improperly licensed. I’ll show you some of it later on. It also comes with manuals, although for a variety of versions of products which are not related whatsoever. It appears there are separate versions for 720kB, 1.2Mb and 1.44Mb formats; there is a version which images the disk, another which interprets folders as a disk, and customized versions for sewing machines and networked applications. There is some documentation which refers to a 100-bank model, and other documentation which refers to a 1000-bank model, and indeed the tools seem deficient to say the least. Most of this is there to confuse you, so I’d suggest avoiding the CD if you can. The datasheet is provided, though, with some jumper explanation.

The internals consist of a green PCB populated with an STM32F105 ARM Cortex-M3 CPU from STMicroelectronics, an NXP 74HC04D hex inverter buffer, a 3.3v regulator, and a few supporting resistors, capacitors and a crystal. It’s a little simpler looking compared to the other emulator – having no external SRAM or multiple ICs – the magic will obviously be in the programming of the ARM CPU. The rear side of the PCB is relatively bare – picture is for proof:

The Unit In Use

I plugged it into the same old Windows 95 box that I have running (since it’s capable of non-standard formats) just to give it a go. The unit would display 000 without a USB stick connected, and would display the same once a stick is connected. Formatting the disks (with verification) with WinImage was not a problem at 1.44Mb. Surprisingly, a format at 1.2Mb also succeeded. 720kB and all non-standard formats (1.68Mb, 1.72Mb, DMF, 820kB, 360kB, 320kB, 180kB, 160kB) failed. Following the same methodology as before, I performed a pattern fill on the USB key to be used with the device, formatted the first 5 banks and the last bank, and found the “beginning” of the images.
It was found that this device also stores sector images, just like the other emulator. The offsets to each bank are as follows:
- Image 000 – 0h
- Image 001 – 180000h
- Image 002 – 300000h
- Image 003 – 480000h
- Image 004 – 600000h
- Image 999 – 5DA80000h
Do you know what this means? This is the same scheme as the other emulator, which means that, in theory, the first 100 banks will be interchangeable between both emulators without requiring any attention. The pattern is obvious and tells us that for 1000 floppies, only the first 1500MiB of any USB key will be used. Therefore, a 2GB USB key is enough! A commenter on my previous post said that Ipcas has a floppy emulator similar to the other emulator and had software downloads. Curiously, their software download is similar to the software provided with this GoTek unit – but does not appear to work. There are two programs supplied. One of them is called UFDISKManager.exe and is supposed to allow you to access each of the disks on your USB key by extracting the files into a temporary directory, where you can modify them, and then repack them and write them back to the USB key. The language seems permanently fixed to Chinese, and I don’t really find it workable … You can see that I formatted bank 001 as a 1.2Mb floppy and it detects that correctly. It does not change the binary offset of the other banks, however, and this does not work properly, as I will explain later … The other piece of software is UFloppyManager.exe, with a plethora of options related to pre-formatting a USB key for use with the drive, but I haven’t managed to get it to work meaningfully either. So it’s pretty much a fail all round with the software.

Analyzing the Output

Thanks to the Kryoflux, we can get a little more information about how the drive presents itself to the host. On connecting the Kryoflux, the calibration option returns a value of 84 tracks (0-83).
The Kryoflux itself doesn’t test for larger values – it may be possible to get 85! This is different to the other emulator which tops out at 80 tracks (0-79). When reading the “virtual” floppy, it is very close to 300RPM with a clean signal – on all tracks! The first question we should ask is what is being returned on those tracks? A quick peek in the hex editor reveals that those tracks indeed return the data “in sequence” from the USB key – so if they were never written by a host, they will return whatever was remnant on the USB key in the “gap” between images (in my case, 0xBAADF00D). Another question is what happens when we attempt to read a disk which is formatted as a 1.2Mb bank? Well, as it turns out, the unit emits the full 18-sectors per track at 300rpm – not the expected 15 sectors per track at 360rpm. This makes the unit that I have unsuitable for emulating 5.25″ HD drives – only good for 1.44Mb HD 3.5″. For people wanting to deal with unusual format disks, it’s a disappointment – but at this price, I really don’t mind. It works as advertised for 1.44Mb HD MFM 18-sectors per track 500kbit/s format. The software is a bit cryptic and generally rubbish, but the “compatibility” suggests there may be an “unwritten agreement” somewhere about USB Floppy Emulation, or maybe it’s a case of someone wanting to remain compatible with the “status quo”. Whatever it is, it’s good and bad news. While the first 100 banks may be compatible, additional banks following the same pattern are inaccessible from certain emulators – and their handling of track 80+ is different. I suppose the good thing is that the reported RPM of this unit is much closer to what is expected, although only having increment buttons and the potential for changing the bank during a write is a bit of a risk. At least, I hope this information comes in handy for anyone who may be thinking of purchasing such a unit …
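The bank layout found earlier is simple enough to verify arithmetically: each image slot occupies 0x180000 bytes (1.5 MiB), so bank N starts at N × 0x180000. A short script (mine, not from the vendor tools) reproduces the offsets listed above and the 1500 MiB total:

```python
# Reproduce the bank offsets observed on the USB key.
BANK_SIZE = 0x180000  # 1,572,864 bytes = 1.5 MiB per 1.44Mb image slot

def bank_offset(n: int) -> int:
    """Byte offset on the USB key where bank n's sector image begins."""
    return n * BANK_SIZE

for n in (0, 1, 2, 3, 4, 999):
    print(f"Image {n:03d} - {bank_offset(n):X}h")
# Image 000 - 0h ... Image 004 - 600000h, Image 999 - 5DA80000h

total_mib = 1000 * BANK_SIZE / (1024 * 1024)
print(total_mib)  # 1500.0 -- why a 2GB key is plenty
```

Note that 999 × 0x180000 = 0x5DA80000 matches the last observed offset exactly, confirming the linear scheme with no header or gap between banks.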
Windows form with System.Windows.Forms.ListView no longer shows image with PowerShell 7.1

I have a fairly complex Windows form that utilizes a System.Windows.Forms.ListView object. The list view shows the items with "Tile view" images and works without issues with Windows PowerShell 5.1 and PowerShell 7.0.3. When I try to run the script with any version of PowerShell 7.1 (currently on RC2), it shows the listview items (the text portion) and I can interact with them normally - but no image appears to the left of each item in the list. To simplify showing the problem, I have written a simple script (below) to illustrate the issue - it exhibits the same problem as the more complex production script I use. I believe this is a problem with .NET 5, but since PS 7.1 is built on it I figured I would post the bug here to see if I could get any traction from the PowerShell team first.

Steps to reproduce

#region Assemblies
Add-Type -AssemblyName System.Windows.Forms
#endregion

#region Appearance
[System.Windows.Forms.Application]::EnableVisualStyles()
#endregion

#region Components
$FormMain = New-Object System.Windows.Forms.Form
$ListViewMain = New-Object System.Windows.Forms.ListView
$ImageListMain = New-Object System.Windows.Forms.ImageList
$ButtonCancel = New-Object System.Windows.Forms.Button
#endregion

#region Forms
$ShowFormMain = {
    $FormMain.Icon = [Drawing.Icon]::ExtractAssociatedIcon((Get-Process -Id $PID).Path)
    $FormMain.Text = "ListView Test"
    $FormMain.Font = New-Object System.Drawing.Font("MS Sans Serif",8)
    $FormMain.ClientSize = New-Object System.Drawing.Size(330,240)
    $FormMain.StartPosition = "CenterScreen"
    $FormMain.FormBorderStyle = "FixedSingle"
    $FormMain.MaximizeBox = $false
    $FormMain.Add_Load($FormMain_Load)

    $ListViewMain.MultiSelect = $false
    $ListViewMain.LabelWrap = $false
    $ListViewMain.LargeImageList = $ImageListMain
    $ListViewMain.Location = New-Object System.Drawing.Point(10,15)
    $ListViewMain.Size = New-Object System.Drawing.Size(310,150)
    $ListViewMain.View = "Tile"
    $FormMain.Controls.Add($ListViewMain)

    $ImageListMain.ColorDepth = "Depth32Bit"
    $ImageListMain.ImageSize = New-Object System.Drawing.Size(32,32)

    $ButtonCancel.Location = New-Object System.Drawing.Point(245,200)
    $ButtonCancel.Size = New-Object System.Drawing.Size(75,25)
    $ButtonCancel.Text = "Cancel"
    $ButtonCancel.TabIndex = 1
    $ButtonCancel.DialogResult = "Cancel"
    $FormMain.Controls.Add($ButtonCancel)

    [void]$FormMain.ShowDialog()
    $FormMain.Dispose()
}
#endregion

#region Handlers
$FormMain_Load = {
    $ImageListMain.Images.Add(0, [Drawing.Icon]::ExtractAssociatedIcon((Get-Process -Id $PID).Path))
    $ListViewMain.Items.Add("Test",0)
}
#endregion

#region Main
Invoke-Command $ShowFormMain
#endregion

Expected behavior

The two far left forms in the screenshot above are Windows PowerShell 5.1 and PowerShell 7.0.3. They correctly show the listview image.

Actual behavior

The far right form in the above screenshot is from PowerShell 7.1-rc2. As you can see, there is no image. When I run the script in PowerShell 7.1-rc2, no errors are generated - it simply omits showing the image.

Environment data

Name                       Value
----                       -----
PSVersion                  7.1.0-rc.2
PSEdition                  Core
GitCommitId                7.1.0-rc.2
OS                         Microsoft Windows 10.0.17763
Platform                   Win32NT
PSCompatibleVersions       {1.0, 2.0, 3.0, 4.0…}
PSRemotingProtocolVersion  2.3
SerializationVersion       <IP_ADDRESS>
WSManStackVersion          3.0

Second try for screenshot:

It looks to be some sort of issue with the order in which I add images to the ImageList. If I add the image prior to calling ShowDialog(), then it works fine. I am guessing it is just some poor coding habit on my part that was "allowed to work" previously, but now .NET 5 is catching it and keeping me honest by not allowing it.
#region Assemblies
Add-Type -AssemblyName System.Windows.Forms
Add-Type -AssemblyName System.Drawing
#endregion

#region Appearance
[System.Windows.Forms.Application]::EnableVisualStyles()
#endregion

#region Components
$FormMain = New-Object System.Windows.Forms.Form
$ListViewMain = New-Object System.Windows.Forms.ListView
$ImageListMain = New-Object System.Windows.Forms.ImageList
$ButtonCancel = New-Object System.Windows.Forms.Button
#endregion

#region Forms
$ShowFormMain = {
    $FormMain.Icon = [Drawing.Icon]::ExtractAssociatedIcon((Get-Process -Id $PID).Path)
    $FormMain.Text = "ListView Test"
    $FormMain.Font = New-Object System.Drawing.Font("MS Sans Serif",8)
    $FormMain.ClientSize = New-Object System.Drawing.Size(330,240)
    $FormMain.StartPosition = "CenterScreen"
    $FormMain.FormBorderStyle = "FixedSingle"
    $FormMain.MaximizeBox = $false
    $FormMain.Add_Load($FormMain_Load)

    $ListViewMain.MultiSelect = $false
    $ListViewMain.LabelWrap = $false
    $ListViewMain.LargeImageList = $ImageListMain
    $ListViewMain.Location = New-Object System.Drawing.Point(10,15)
    $ListViewMain.Size = New-Object System.Drawing.Size(310,150)
    $ListViewMain.View = "Tile"
    $FormMain.Controls.Add($ListViewMain)

    $ImageListMain.TransparentColor = [Drawing.Color]::Transparent
    $ImageListMain.ColorDepth = "Depth32Bit"
    $ImageListMain.ImageSize = New-Object System.Drawing.Size(32,32)
    $ImageListMain.Images.Add(0, [Drawing.Icon]::ExtractAssociatedIcon((Get-Process -Id $PID).Path))

    $ButtonCancel.Location = New-Object System.Drawing.Point(245,200)
    $ButtonCancel.Size = New-Object System.Drawing.Size(75,25)
    $ButtonCancel.Text = "Cancel"
    $ButtonCancel.TabIndex = 1
    $ButtonCancel.DialogResult = "Cancel"
    $FormMain.Controls.Add($ButtonCancel)

    [void]$FormMain.ShowDialog()
    $FormMain.Dispose()
}
#endregion

#region Handlers
$FormMain_Load = {
    $ListViewMain.Items.Add("Test", 0)
}
#endregion

#region Main
Invoke-Command $ShowFormMain
#endregion

I will mess around with it and try to improve the way I am coding it to fall in line with what .NET 5 is expecting.

Thanks
NK

This appears to be now fixed in .NET 5.0.2 per: Revert "Explicit ImageList ownership management" (servicing)
When the cheap laptops known as netbooks first came out over a year ago, computer makers were able to offer them at low prices in part by shipping them with the free Linux open-source operating system, rather than Microsoft’s Windows. Since then, Windows netbooks have taken over most of the market after Microsoft began pushing Windows XP aggressively to netbook makers and consumers realized Linux netbooks didn’t work well with some popular applications and devices. Linux on netbooks isn’t going away though. In fact, software and hardware companies have been making big investments to improve Linux netbooks. For the past week, I’ve been using several flavors of Linux running on netbooks — Ubuntu, Hewlett-Packard’s Mi (which is based on Ubuntu) and Moblin, created largely by Intel and not yet available commercially. In all cases, the Linux netbooks failed at some basic functions that any laptop, no matter how tiny and inexpensive, should be able to handle, like working with printers. At the same time, Mi and Moblin have impressive graphical user interfaces well-suited to the habits of typical netbook users, like checking email and accessing social-networking sites, as well as the small screens and low horsepower of tiny laptops. In addition to Linux, all of the computers shared the standard features, or lack thereof, common among netbooks, including compact keyboards and no DVD drives. The most polished of the products was H-P’s Mini 110 Mi Edition, a new model with a 10.1-inch screen that H-P will begin selling on its Web site for $279.99 on June 10. That’s nearly $50 less than what H-P will charge for the Mini 110 running Windows XP, which will come with a 160-gigabyte hard drive instead of the 8-gigabyte solid-state drive that will come with the Mi edition. The striking thing about this netbook is the slick graphical user interface created by H-P that runs on top of Ubuntu and first began appearing on H-P netbooks early this year. 
Instead of a traditional desktop like that found in Windows and the Mac, Mi (pronounced “me”) arranges commonly used applications and content on a screen called the “dashboard,” which looks like a personalized Web page and lists recently received emails, fresh thumbnail images of favorite Web sites, and a Web-search toolbar. The Mi home screen is a clever way to make the computer seem alive with on- and off-line content, which is fitting since netbooks are designed for on-the-go Internet activities. It’s also tailor-made for the small screen size of netbooks. A more eye-catching iteration of Linux is Moblin, which I tried out in test form on an Acer netbook; it is expected to ship on netbooks by the end of the year. Moblin has a menu of icons at the top of the screen, the most interesting of which leads to the M-Zone, a home screen that displays calendar appointments and favorite applications alongside snapshots of recently visited sites and a continuous feed from the user’s Twitter network. An icon called “People” leads to a list of instant-messaging buddies, while another, called “Zones,” let me organize all the applications I had launched into different virtual workspaces, which is useful for hopping between various tasks on a small-screen device like a netbook. The look and feel of the standard Ubuntu system, without the Mi interface, is more commonplace. I tried out a Dell Mini 10 with a 10.1-inch display and 160-gigabyte hard drive that sells for $349 on Dell’s Web site. The Mini 10 ships with version 8.04 of Ubuntu, which resembles Windows XP, with its desktop, taskbar and pop-up menu system. Ubuntu, in some cases, seemed to overestimate the size of the Dell Mini’s display: A window for configuring wireless-networking capabilities was so large it bled off the screen, and I couldn’t access all the buttons on it. I also installed on the Dell a new version of Ubuntu Netbook Remix, which works better on small screens. 
Since a Windows XP version of the Dell Mini 10 sells for the same price as the Ubuntu, I can’t see a compelling reason to choose the Ubuntu option. All the netbooks I tried had compatibility problems with other external devices. The netbooks couldn’t load the software drivers to let me print to my Canon and Dell printers. I couldn’t load pictures over a USB cable from my Canon PowerShot SD750 digital camera. I was able to get my pictures on the machines by plugging a storage card from my camera directly into the netbooks. Canonical, the London company that oversees development work on Ubuntu, says it is improving the system’s compatibility with various devices. Intel says it is unfair to judge Moblin until it is commercially available. Some key applications currently don’t run on Linux, like Apple’s iTunes, which makes it difficult to load music files onto iPods from the netbooks. While the Linux laptops didn’t run Microsoft Office, they came with OpenOffice, a free package of word-processing, spreadsheet and presentation applications that allowed me to open and modify basic Word and Excel files. The companies behind Linux netbooks have made great strides in improving user interfaces, but until they can achieve similar breakthroughs in how the machines work with other devices, Windows netbooks are still a better deal. - Email email@example.com. Walter S. Mossberg is away and will return next Thursday.
OPCFW_CODE
The other day I (respectfully) challenged B/X Blackrazor's JB's definition of niche protection vs. redundancy. JB does a great job of explaining his idea of niche protection and redundancy here, so go read that first. I think he's right on the money with niche protection in play. However, I think D&D (especially the very oldest versions of it) has very strong inherent niche protection. Who turns undead the best? The cleric. Who can detect traps without burning up spells per day to do it? The thief. Who fights the best bar none? The fighter. And so on - and magic items restricted by class just further drive in that protection. Can anyone use this Staff of the Magi? Er, no, just the wizard. So you don't need to worry about it too much in play or even in chargen. Even a 7th level magic-user will have stuff to do in a game with an 11th level MU with similar spells to select, because you never seem to have enough spells to go around. And they'll likely end up with different magic items, anyway. Where character classes duplicate each other, they usually have some niche aspect to it that either limits their options or isn't duplicated elsewhere. For example, Paladins have healing like clerics and fight like fighters, but at a cost of alignment, magic item, and equipment restrictions (and IIRC a steeper per-level XP cost). Assassins do the exact same things as thieves, but at a lower level equivalence, and get a special assassination ability that no one else gets. Illusionists in AD&D give up the "artillery" aspect of magic but gain in another focus of magic that magic-users don't do quite as well. All of this depends on well-made classes, of course - but the D&D classes generally balance well enough against each other in terms of choices and not causing total redundancy (see below). Niche protection is a bit more of an issue in point-buy systems like GURPS because there aren't rigid class boundaries (or effective limiters like the level/xp costs of multi-classing). 
What's stopping your barbarian from picking pockets better than my thief is how we spend our points, more or less. There are a lot of ways around that issue (such as limiting point totals), some of which have to do with GMing and some with game design. It's a potential downside to the extreme flexibility of character generation. But it's not a non-issue for class-and-level systems. The more classes you have, the more concern you end up with about having too much redundancy. This is where the plethora of sub-classes of AD&D and overlapping kits and feats in 2e and 3e come in. This is why, for example, GURPS Dungeon Fantasy has such strong niche protection for its templates - tightly constructed templates, lots of suggestions for reducing the options to duplicate another's specialty, restricted options for what you can start with on your character, and a smoothed path to upgrading what fits your niche. It's emulating a genre where strong niches are common and where allowing complete customization can weaken the emulation. In my opinion, anyway. But in those point-buy systems, do JB's points hold up? I think they do. Like JB says, difference of play is key here. If your cleric can cast Find Traps and outdo the thief's trap finding, that's fine - who cares? It comes with a cost of a spell slot (or in GURPS, with learning See Secrets instead of something else) or of magical energy. It's not always on, it's not permanent, and it's not free. It's not always going to work. It's another way around the problem - JB's many ways to skin the cat argument. In a good game, sheer variety of circumstances will result in a need to have those multiple ways to skin the cat. Plus you'll often need multiple people doing something to defeat the problem/advance the plot/accomplish the goal. Not the least of which is if, as JB notes, someone flakes out/misses a session/gets his PC killed/etc. But what does this show you? It's that redundancy isn't bad, either - it gives you more options. 
What sucks is total redundancy. That's when your character (or worse, your character class/template/type) doesn't do anything better than the others. If your system's clerics get automatic, 100% reliable trap detection and removal ability, always on after a certain point, yeah, your thief is useless now except as a backup in his own area of supposed expertise. If the mage is a better swordsman than the swordsmen are, that's probably an issue. If you do A, B, and C and another character does A, B, and C+1, or A, B, C, and D, it better be because you're a less powerful version of the same character - less points or lower level, say. If not . . . you're stuck. If it's because of lower level or less power or less attention from the GM, that's a play issue. If it's because your character class / template / whatever sucks, that's a game design issue. You can't catch up and you can't ever do as well as the other players who made a better choice than you. You may as well go stand in the back and hold a torch like the hirelings - make yourself useful. I'll follow up tomorrow with a discussion of niche protection from my perspective as a game designer. Or at least as a writer of GURPS supplements.
OPCFW_CODE
Surface mount technology (SMT) is an essential part of PCB assembly, and its efficiency greatly affects the efficiency of quick-turn PCB assembly. Pick-and-place programming is one of the critical factors in SMT efficiency, so improving the programming procedure speeds up PCB assembly. Machines now automate the surface-mounting process, but programming is not nearly as automated. With machine software alone, process engineers must be quite experienced to program each product skillfully. The challenges of SMT programming mainly come from two areas:
* Part data preparation for each component. Part data contains many parameters, so manually assigning or creating it is a heavy, time-consuming workload for engineers.
* Verification of each component's angle and polarity. With the traditional approach, engineers have to visually check the correctness of each component's angle and polarity, and even then no one can guarantee 100% correctness. A single polarity mistake can mean rework, customer complaints, lost orders, and more.
These problems challenge not only SMT programming but quick-turn PCB assembly as a whole. Machine software and traditional offline programming (CAD converter) software no longer suffice in this situation, which is why intelligent SMT programming software is getting more and more attention. VayoPro-SMT Expert is an intelligent tool that makes up for the shortcomings of machine software and traditional offline programming software. SMT Expert comes with Vayo's patented intelligent technology: bidirectional data exchange, intelligent part data assignment, intelligent part data creation, intelligent angle/polarity correction, and intelligent panelization. SMT Expert can load part data from a machine library and generate part data back to the machine library, achieving bidirectional data exchange.
In the software's accelerator mode, it can automatically assign part packages/shapes from the machine library, cutting the cycle time by 90% or more. The software can also intelligently capture CAD data to prepare for part data creation. SMT Expert can directly access the placement machine library to obtain part data for each component, virtually place the parts onto boards to detect non-compliances, then intelligently correct them. This approach avoids manual adjustment and minimizes the risk of wrong polarity or angle. Compared with other intelligent solutions that ship with their own libraries, Vayo's solution can copy the machine library directly, avoiding visual inspection steps. Vayo's intelligent SMT programming solution addresses both major challenges, greatly reducing offline programming and inline fine-tuning time. For a quick-turn PCB assembly business, implementing such software is a breakthrough in reducing turnaround time. For PCB assembly enterprises, the solution can also ease people-management problems such as high qualification requirements and heavy workloads. Apply for a trial license now to see how SMT Expert can help you accelerate PCB assembly, or visit www.vayoinfo.com/products/smt-expert/ to learn more about this solution. Reach us via email: email@example.com.
OPCFW_CODE
[SOLVED] Exchange/Outlook 2010 can't see sent items
- I have built an Outlook 2010 add-in using C#. I created an Inspector wrapper and I am using it to capture when an item is added to the Sent Items folder.
- 6/09/2016 · Outlook 2010 sent mail goes to Inbox: I just set up Microsoft Office 2010 this week and was configuring Outlook 2010. After everything was done, I found that every time I send an email, a copy of it is automatically delivered to my inbox.
- To recall a message, open Outlook 2016 and, from the folder pane, choose the Sent Items folder. Next, double-click the message that you want to recall to open it. Then, under the Message tab, choose Actions.
- Yesterday I had a user who changed some options in Outlook 2010: they deselected the option 'Save copies of messages in Sent Items folder'.
- When a user sends mail with "Send As" permission, the sent item is saved to the user's personal mailbox instead of the Send As mailbox's Sent Items folder. I want to move it from the user's personal mailbox to the Send As mailbox's Sent Items folder, and I was trying to do this with code.
- By default, Microsoft Office 2010 saves all sent emails into the primary mailbox's Sent Items folder, even when another email account is used in the From field. To change this behaviour and make emails sent from a shared mailbox go into the shared mailbox's Sent Items folder, you can install a Microsoft patch and enable the feature by changing a registry setting.
There are 5168 items in my Inbox and 2260 items in Sent Items in Outlook 2007.
- 20/07/2013 · A good option if you want a copy of your sent emails: run both POP and IMAP in Outlook and, from time to time, move your sent emails from your Optus Sent Items to your local Sent Items. That way you keep your POP settings in Outlook and don't have to worry about hitting the 50 MB limit Optus gives you.
- If you have multiple email accounts in Outlook, each email account has its own Sent Items folder. Click Sent Items in the folder list. Tip: If you don't see the Sent Items folder, click the arrow (>) to the left of your account folder to expand the list of folders.
- 25/11/2011 · I'm using Outlook 2010 with a POP3 account (Exchange Server 2003). I sent an email from OWA (Outlook Web Access), and that email was saved to Sent Items in OWA.
OPCFW_CODE
Complete Martial Arts Attributes – Chapter 130 – Who On Earth Are You?

"3-star soldier-level martial warrior!"

"A 3-star soldier-level martial warrior isn't much. There are many powerful and respected people in this place. I'm just on the lowest rung." Xie Kun pretended to be modest, but in his heart he was proud and enjoyed the attention he was receiving from everyone. He looked at Wang Teng calmly and asked, "I've heard about your grudge with Li Rongcheng. You used some method to force him to leave the martial arts examination because you wanted revenge on him. I wonder if that's true?"

"It's him." Before Li Liangda could open his mouth, Li Rongcheng had already answered hurriedly. At first, he thought that with Xie Kun's ability as a 3-star soldier-level martial warrior, Wang Teng wouldn't dare to talk back to him and would listen to his lecture obediently.

He hurriedly shook his head and felt a little angry from embarrassment. Just as he was about to flare up, he heard a voice behind him.

Pan Danwen, Xu Hui, and the other seniors silently agreed with his words. They had been through this stage before. They knew better than anyone how it felt to have just graduated from high school. Honestly, they had been a little proud back then too; they only learned the immensity of the universe once they entered university.

Before this, the two groups had no connection at all. The only link between them was Xu Hui. This martial warrior knew his father, so he would definitely stand on their side. Li Rongcheng could also tell there was contempt in his tone when he mentioned Wang Teng. He might be able to make Wang Teng suffer a little, and Li Rongcheng was thrilled to see that.

Unexpectedly, Wang Teng wasn't afraid of offending him. The moment he opened his mouth, he gave the man no face and humiliated him right in front of everyone.

Xie Zhilong turned his head and smiled brightly. He called out to the man, "Uncle!"

Xie Zhiling felt chills running down his spine. Somehow, he felt as though he had been targeted by a wild beast. Fear crept up his heart.

As their senior brother, Xie Zhilong felt he needed to teach them properly from the standpoint of an elder.

"That's impossible. Wang Teng is not that kind of person," Xu Hui shouted without thinking.

Wang Teng shook his head speechlessly and said helplessly, "Senior Sister Pan, I'm not shy. I'm just waiting for my friends. I can't make decisions for them."

"Why are you rejecting us? It seems like you look down on us. Forget it. Let's not ask for a snub," Xie Zhilong said as though he were mocking himself.

Did he think he could be haughty just because he was the top scholar of the martial arts examination?

"Haha, it's really you!" Xie Kun, Li Liangda, and Li Rongcheng walked over.

"Yes, they're all my schoolmates from Jiangnan University. However, this person isn't. He's the top scholar of this year's Donghai University martial arts examination. He looks down on ordinary people like us." Xie Zhilong introduced Pan Danwen and his friends before looking at Wang Teng mockingly.

As expected, the other senior siblings frowned when they heard this. They were dissatisfied.

When Wang Teng saw Li Liangda and Li Rongcheng, his head started aching. All of them were gathered together again.

At this moment, he really wanted to slap Wang Teng to death.

"Who are you? I don't think I know you. Even if I'm in the wrong, what does it have to do with you? Why would you run over and spout this nonsense at me? Who gave you the authority to do that?" Wang Teng looked at Xie Kun indifferently.

"Wang Teng, this Mr. Xie is a 3-star soldier-level martial warrior. Don't think you can be so arrogant just because you've become a martial warrior. Compared with Mr. Xie, a 1-star martial warrior is nothing. Don't be proud." Li Rongcheng was like the fox that assumed the majesty of the tiger.
OPCFW_CODE
using System;
using System.Threading;
using NLog;

namespace NLogTest
{
    public class BusyWork
    {
        private readonly string m_Name;
        private readonly string m_Category;
        private readonly Logger m_Log; // Cache the logger for convenience.

        public BusyWork(string name)
        {
            m_Name = name;
            // Create a category by combining info from the type with additional info.
            m_Category = typeof(TestForm).FullName + "." + name;
            m_Log = LogManager.GetLogger(m_Category);
        }

        public void Run(int messages)
        {
            // Queue the task and data.
            if (!ThreadPool.QueueUserWorkItem(new WaitCallback(ThreadProc), messages))
                m_Log.Warn("QueueUserWorkItem failed");

            if (m_Name == "Shemp")
                throw new ApplicationException("This is a test exception");
        }

        private void ThreadProc(object info)
        {
            int count = (int)info;
            for (int i = 1; i <= count; i++)
            {
                m_Log.Trace("{0} message {1} of {2}", m_Name, i, count);
                Thread.Sleep(500);
            }
        }
    }
}
STACK_EDU
Authors: Kevin Ge, Jiansheng Han, and Sheyang Wang (TiDB virtual team at Chehaoduo)
Transcreator: Ran Huang; Editor: Tom Dewan

Chehaoduo is an online trading platform for both new cars and personal used cars. Founded in 2015, it is now one of the largest auto trading platforms in China, valued at $9 billion in its Series D round of funding last year. In Chehaoduo's early stages, we chose MySQL as our major database so we could quickly adapt to our application development. However, as our business evolved, we were greatly troubled by the complications of MySQL sharding and schema changes. Faced with this dilemma, we found an alternative to MySQL: TiDB, an open-source, MySQL-compatible database that scales out to hold massive data. In this post, I'll share why we chose TiDB and how it empowers our application to provide better service for our customers.

How MySQL fell short

MySQL is a stand-alone database that doesn't provide horizontal scalability. After our data volume exceeded a certain threshold, MySQL could no longer deliver satisfactory performance.

A single MySQL instance has limits

As our data accumulated, single instances often hit queries-per-second (QPS) limits and ran out of storage capacity. To squeeze data into single instances, we had to split large tables into smaller ones and split the change data capture (CDC) layer and data warehouse (DW) layer. Whenever we had to split tables, the application team had to work with the CDC team and the DW team. If the application at issue shared a database or table with other applications, the people in charge of those applications also had to be involved. In addition, small scripting programs might be neglected during the migration, which could impact application data. Depending on the data volume of the application, each split could take two to four weeks. This was a huge waste of time and energy.
MySQL sharding is intrusive to applications

As the user base grows, some tables might have tens of millions of data records, which slows down reads and writes. The usual approach is to shard the data, but this has some problems:
- Distributed transactions are hard to handle.
- Sharded MySQL can't create global secondary indexes across shards.
- The shards might not be able to scale out further.
- There's no way to perform cross-shard joins.
- It's hard to perform a sort-merge join on the result set.

It's hard to change schemas for big tables

At Chehaoduo, our business model changes rapidly. To adapt to business requirements, we have to change table schemas frequently. When a table has millions of records, we need a third-party tool, such as pt-online-schema-change (pt-osc), to execute data definition language (DDL) commands. During the schema change, we must make a temporary copy of the table and then apply the changes. This is time-consuming and may impact storage, I/O, and the application.

Why we chose TiDB to replace MySQL

To address these pain points, we considered reforming and upgrading our existing database architecture. We analyzed the requirements from the application side and compared them with some common database solutions, including HBase + Phoenix and HBase + ES. To sum up, TiDB is horizontally scalable and MySQL compatible. It supports online DDL schema changes and distributed transactions. These features combined suit Chehaoduo's use case: large data volumes, frequent schema changes, and long-term data storage.

Analyzing our use scenarios

After we studied TiDB's advantages, we also analyzed Chehaoduo's specific use scenarios and summarized the application side's concerns:
- One application has nearly 300 million rows of data, with 1.7 million rows added each day and 50 million each month. Even if only the hot data from the last two months is stored in MySQL, a single table could hold more than 100 million rows of data.
- Cars have a longer sales cycle than other products. During that long cycle, once-cold data may become hot again, so the application might need to update cold data. And since the application serves online users, the database must read and write in real time.
- Multiple applications may read from the same dataset, each with different query conditions.
- When data changes, the application needs corresponding logic to process the changes, so the database must provide a CDC data flow that the application can monitor.

Based on these requirements, we decided to migrate several applications to TiDB, including ticket distribution and transferring, the telephone sales system, the business central hub, and the accounting system. These applications have large amounts of accumulated data; some must frequently add new fields, while others need transactions. TiDB can certainly help them out.

Our migration process

Facing a new database, the core application team couldn't help but feel concerned about its stability and reliability. To boost their confidence, we decided to pilot the system on some less critical services. The testing process had three stages.

The first stage was to use TiDB as a MySQL replica cluster and sync data using TiDB Data Migration (DM). The application side examined whether TiDB's data consistency, stability, and performance met their requirements. After that, we routed a small proportion of online requests to read from TiDB and observed the results. As everything went smoothly, we gradually increased the proportion until TiDB had taken over all read requests. The data validation never failed. By then, the application side had faith in TiDB and was ready to move on to the next stage.

In the second stage, the application wrote into MySQL and TiDB simultaneously, without DM. It read from TiDB and wrote into TiDB directly. We still wrote data separately into MySQL to provide for contingencies.
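The dual-write stage just described can be sketched in a few lines. This is only an illustration of the pattern, not Chehaoduo's actual code: the two stores are plain dictionaries standing in for MySQL and TiDB connections, and all names are made up.

```python
class DualWriter:
    """Sketch of the dual-write + validation migration stage."""

    def __init__(self, primary, shadow):
        self.primary = primary  # system of record (MySQL in the article)
        self.shadow = shadow    # candidate system under test (TiDB in the article)

    def write(self, key, value):
        # Write to both stores. The primary write must succeed; the shadow
        # write is best-effort so a hiccup in the new system cannot break
        # production while it is still being evaluated.
        self.primary[key] = value
        try:
            self.shadow[key] = value
        except Exception:
            pass  # a real system would log this and reconcile later

    def read(self, key):
        # Reads are served from the shadow store, falling back to the primary.
        return self.shadow.get(key, self.primary.get(key))

    def validate(self):
        # "Everyday data validation": every primary row must match the shadow.
        return all(self.shadow.get(k) == v for k, v in self.primary.items())
```

For example, `DualWriter({}, {})` followed by a few `write` calls should leave `validate()` returning `True`; a divergence between the stores flips it to `False` and flags the migration for investigation.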
This stage lasted for two quarters, during which TiDB ran normally and passed our everyday data validation. By the last stage, TiDB had earned our total trust. We retired MySQL and launched TiDB into our production environment as an independent database. With TiDB, our service quality has greatly improved:
- The time range an application can query has expanded from the most recent three months to all historical data.
- Even with nearly 1 billion data records and 1,000 QPS, our 99.9th percentile latency is still lower than 128 ms.

While running TiDB in our production environment, we've encountered a few problems. Here, I'd like to share some lessons we learned.

Choose the right version

For the production environment, we suggest choosing a release that has run normally for some time. TiDB is an evolving technology: bug fixes and new features are continuously added in new releases. Since we first evaluated it, TiDB has matured from version 2.0 to 4.0. When we upgraded from v2.1.x to v3.0.x, we didn't notice the SQL mode change, which resulted in an unexpected impact caused by the ONLY_FULL_GROUP_BY rule. Now, we only choose stable releases and usually don't upgrade our clusters unless we encounter critical bugs or need new features.

Bind your SQL

Because TiDB uses cost-based optimization, when the statistics change, the optimizer might select an incorrect index. One time, we saw full CPU utilization, rising I/O, and lots of queries timing out in our system. After we consulted PingCAP, the team behind TiDB, we resolved the issue via SQL binding.

As we gained more confidence in TiDB, we hoped to migrate more applications to it. However, due to limited hardware resources, we couldn't immediately bring several independent TiDB clusters online. Therefore, our DBA team explored deploying multiple clusters on the same set of physical servers:
- PD has low resource requirements.
- TiKV supports configuring maximum CPU and memory at the software layer.
We also mount multiple SSDs in the same machine to isolate I/O.
- TiDB also supports limiting CPU and memory usage at the software layer, but it can't stop a sudden memory spike in a short period of time. Fortunately, we can configure memory_limit in systemd and let cgroups cap the memory usage. This also inspires us to try fully controlling resources using Kubernetes and Docker.

After we verified that abnormal SQL statements on one cluster didn't affect other clusters on the same physical machine, we connected our other applications, one by one, to TiDB.

Our future plans with TiDB

Our current infrastructure is deeply interconnected with TiDB, and we still want to achieve more with it.

Deploy on the Cloud

TiDB is a cloud-native distributed database, yet our existing clusters are mostly deployed on premises. Going forward, we plan to deploy TiDB on the cloud. With TiDB Operator, the Kubernetes operator for TiDB, TiDB will be able to automatically allocate resources and improve the resource utilization rate. This will greatly reduce maintenance costs.

Our advertising application is sensitive to service latency. It offers targeted advertising based on user data. With the data we have accumulated over the past five years, we needed a persistent key-value store to serve the advertising application. TiKV, TiDB's storage component, drew our attention. TiKV was originally created to complement TiDB, but we can also deploy it independently of TiDB as low-latency, persistent storage. We have already implemented this project and plan to extend the use of TiKV to more applications.

Before we migrated to TiDB, we built our data flow service on the MySQL binary log. Now that TiDB runs as our primary database, we use Pump and Drainer to replicate its binlog. TiDB 4.0 introduces a new feature, TiCDC, which is easier to deploy and supports output in multiple formats. We will gradually integrate TiCDC into our current system.
Integrate with an internal platform

At Chehaoduo, our DBA team develops and maintains an internal platform for MySQL management. Both DBAs and developers can perform daily routines on this platform. To unify the system, we will integrate TiDB into the platform and achieve automatic operations and maintenance.

Simplify the migration process

Currently, our migration process has two major parts: migrate the data using DM, and then migrate the application. We hope to make migration easier for the application side, for example by introducing a SQL proxy to the architecture. The applications would only connect to the proxy layer, without being concerned about whether the backend is MySQL or TiDB.

None of the above plans could be brought to reality if it weren't for TiDB. With TiDB as one of the core components in our architecture, we will build a powerful system to support our business in the long run.
How do I modularize functions that work with Dask?

I'm trying to modularize my functions that use Dask, but I keep encountering the error "No module named 'setup'". I can't import any local module related to Dask, and currently everything I do with Dask has to be in a single .py file. How do I properly modularize functions that use Dask? Here's an example of my setup:

main.py: Contains functions that utilize Dask.
setup.py: A local module I created, which also contains functions using Dask.

I'm attempting to import the setup module in main.py like this:

from setup import *

class CustomModulePlugin(WorkerPlugin):
    def setup(self, worker):
        import setup

    def start(self, worker):
        logging.warning("Plugin started and added to the worker.")

client = Client('tcp://<IP_ADDRESS>:8786')
client.register_worker_plugin(CustomModulePlugin())

But I keep getting the following error: No module named 'setup'

try renaming it to sth like dask_setup.py and see if it works. it should be then from dask_setup import *

The workers see a different filesystem than your client. When tasks are sent, only as much is serialised as required, to keep the communication as small as possible - so functions and classes in modules are NOT packed, only a reference for which module to find the definition in. Therefore, the workers need to have access to the same modules as your client. There are many ways to ensure this kind of environment syncing. Perhaps the simplest one for you is upload_file, which will send the code to all workers and ensure it is in an importable location. Other options include:

- shared filesystems, e.g., NFS or a Kubernetes shared volume
- nanny-mediated file upload (see here)
- publishing the code in any Python package format (.egg, .zip etc.) and using the environ plugin to install it in new workers
- a "coiled" cluster, which includes source file syncing (doc)
- since you are using a plugin already, you can have its setup method write the file first, and defer the import until after setup (inside tasks, rather than in main)

I have tested the solution with upload_file and it did not work in my case. I work with a system of backup workers that are automatically created when the current VMs go down. Because of this, I cannot use upload_file, as I need something that works both for the workers that have already been created and for those that will be created in the future. Do you have any suggestions on how to dynamically synchronize the environment to cover future workers as well?

I included some idea sketches. Let us know what you end up using!

We tested all the suggestions, except the coiled cluster, as it would change the project architecture a lot. None of the solutions worked, however. We use the compute API from GCP to manage the machines.

Publishing your package certainly works!
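For the backup-worker scenario above, the "have the plugin's setup method write the file first" idea can be sketched as follows. This is a minimal sketch under assumptions: the module name dask_setup and its one-function source are placeholders, and the plugin is written as a plain class (Dask worker plugins are duck-typed, so subclassing distributed.WorkerPlugin is optional). Because a registered worker plugin's setup method runs on every worker that joins the cluster, this also covers VMs created later.

```python
import os
import sys

# Hypothetical module source; in practice, read your local dask_setup.py
# on the client side before registering the plugin.
MODULE_SOURCE = "def double(x):\n    return 2 * x\n"


class ShipModulePlugin:
    """Worker plugin that writes a module's source onto each worker
    (including workers that join later) and puts it on sys.path so
    that tasks can simply `import dask_setup`."""

    def __init__(self, module_name, source):
        self.module_name = module_name
        self.source = source

    def setup(self, worker):
        # worker.local_directory is the worker's scratch space.
        target_dir = os.path.join(worker.local_directory, "shipped_modules")
        os.makedirs(target_dir, exist_ok=True)
        path = os.path.join(target_dir, self.module_name + ".py")
        with open(path, "w") as f:
            f.write(self.source)
        # Make the shipped module importable on this worker.
        if target_dir not in sys.path:
            sys.path.insert(0, target_dir)

    def teardown(self, worker):
        pass
```

Registering it once on the client, e.g. client.register_worker_plugin(ShipModulePlugin("dask_setup", MODULE_SOURCE)), asks the scheduler to replay setup on each newly joining worker, which is the property upload_file alone did not guarantee in this setup.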
Will a USA no-fly blacklist prevent Canada passengers from flying domestically?

Related to this question, apparently someone who is on a USA no-fly list will not be allowed on international flights between other countries if they have a chance of overflying or diverting to the US. And it's understandable that airlines would check this as part of the immigration prescreening, since they have to do that anyway. So several people suggested he fly from Europe to Canadian airports nowhere near the US, such as Gander, Halifax or St. John's. That's great, but if you flew internationally into St. John's, the only sane way to travel onward is by air. They don't do immigration prescreening on domestic flights. So would he still get flagged/blocked if he attempted a St. John's-Toronto or Halifax-Vancouver domestic flight?

There's a perfectly good train service from Halifax, calling at Montreal and Toronto.

In theory, domestic Mexican flights could also be affected.

@DJClayworth The train from Halifax to Montreal takes about 24hrs. This is comparable to the driving time but of course without the need to operate a motor vehicle the whole time :)

Note that a lot of Canadian domestic flights routinely overfly the U.S., most notably flights to/from Victoria, BC and flights into or out of the southern Ontario airports (Windsor, London, Hamilton, Toronto Pearson, Toronto Billy Bishop). My flights from Regina to Toronto have flown as far south as over Green Bay, Wisconsin.

@chx And it is something like C$3,000

@AzorAhai: Maybe it depends on dates? I just randomly looked up fares for the Sep 15 train; coach starts at $444 and sleeper fares are available from $1320.

@nate I could be recalling a two-way fare

The answer is no, they would not be prevented from boarding a domestic flight.
Canada's compliance with the TSA no-fly list started in 2011 with the passage of Bill C-42: Despite section 5 of the Personal Information Protection and Electronic Documents Act, to the extent that that section relates to obligations set out in Schedule 1 to that Act relating to the disclosure of information, and despite subsection 7(3) of that Act, an operator of an aircraft departing from Canada that is due to land in a foreign state or fly over the United States and land outside Canada or of a Canadian aircraft departing from any place outside Canada that is due to land in a foreign state or fly over the United States may, in accordance with the regulations, provide to a competent authority in that foreign state any information that is in the operator’s control relating to persons on board or expected to be on board the aircraft and that is required by the laws of the foreign state. As we can see, only flights that depart or land outside of Canada are permitted to transmit APIS data to foreign officials and thus domestic flights are exempted. This interpretation is confirmed in the legislative summary published by the Library of Parliament. Thank you for the answer and for running the bounty! Also I think people in both Oxford and Chicago would agree, Canadian lawmakers could stand to use more commas :)
Chaotic Sword God

Chapter 2646: The Young Star Lord's Death

In the blink of an eye, the Heartless Child easily nullified the threat of the Heavenly King of Azure Brilliance's Lifebound Plume.

The young star lord became extremely shocked when he saw the Heartless Child nullify the Heavenly King of Azure Brilliance's attack so easily. He even felt disbelief. He could vaguely sense that the child had probably reached the same level as his foster father. Even if he had not, he was definitely close. The Heartless Child did not give off any pulses of energy. He seemed like an ordinary child from the neighbourhood, yet the young star lord felt fear from the bottom of his heart. The identity the young star lord took such pride in was worth nothing before the Heartless Child.

"Jian Chen, you got lucky this time." Having taken in everything, the Heavenly King of Azure Brilliance felt very unconfident as he faced Jian Chen. He threw those words out with great reluctance before turning around to leave.

"Jian Chen, you had better deal with the matters here first. I'll return in a few days. I have something important to tell you then." With that, the Heartless Child suddenly vanished.

"Do you still believe you can leave before me?" Jian Chen appeared before the young star lord silently, blocking his path.

"My foster father is the Nine Brilliance Star Lord. If you even touch a hair on me, my foster father will never spare you. Even the Divine Palace of Bisheng can't bear the wrath of my foster father," the young star lord said fearfully. In the thousand years he had lived, wherever he went, no matter how powerful the expert he encountered was, he could stun them as long as he mentioned his father's name.

"Y- you're not even afraid of my father." The young star lord's face changed.

The Nine Star Sword of Heavenly Ways slowly appeared in Jian Chen's hand. As he poured energy into it, illusionary stars appeared one after another. The terrifying pressure was enough to beat down the dust in the surroundings. The young star lord's pupils gradually shrank. In particular, when he saw Jian Chen slowly raise his sword, his heart trembled even more. Even his feet shook.

"Father, save me, father!" The young star lord finally began to fear for his life before Jian Chen's killing intent. All of his confidence and composure collapsed in that moment. He begged for his life with a pale face.

The young star lord's eyes were wide. His face was filled with disbelief. All the way until his death, he struggled to believe that someone would actually dare to kill him.

Bang! Jian Chen tossed the young star lord's corpse onto the ground of the Tian Yuan clan. Now that the young star lord was dead, his killing intent finally began to dissolve. The foster son of the Nine Brilliance Star Lord, Tian Yao, was dead.

"The young star lord has actually died." The expressions of the peak experts of the Four Symbols Alliance became extremely ugly. The young star lord was their source of confidence, as he was what tied them to the Nine Brilliance Star Lord. Now that this tie was severed, all of their former efforts had been rendered useless. In that moment, their hearts utterly sank. They had all heard a few things about the young star lord's identity during this period. Now that someone so important had died here, they all knew what would happen next. None of them uttered a word. They all remained silent. Even though the person behind the upheaval on the Cloud Plane was gone, none of them could relax. Instead, the atmosphere had become rather heavy.

"I intend on moving the Tian Yuan clan to the Prosper Plane," said Jian Chen. He dared to kill the young star lord because he still had great merit with the Divine Palace of Bisheng. He could use this merit to exchange for a special piece of territory on the Prosper Plane and have the Divine Palace of Bisheng protect the Tian Yuan clan. The Nine Brilliance Star Lord's wrath was far more terrifying than any destructive storm. The Tian Yuan clan was as powerless as a newborn before it.
Rename 'authorization' header to 'x-access-token'

The readme refers to x-access-token as the header that'll be used to pass the token around in, but the code didn't. Now it does.

Ah...! Ciao @olizilla! Sadly, it's the readme that needs to be updated ... My bad ... :disappointed: Our Hapi.js plugin uses request.headers.authorization, see: https://github.com/dwyl/hapi-auth-jwt2/blob/f62bf57795e22b97f5c31a2375b68d6e7e6b80e1/lib/extract.js#L21-L22 It's worth keeping it consistent because the plugin advises people to visit this repo to learn more about JWTs ... thanks so much for drawing our attention to this!

For my knowledge, is authorization the generally preferred header name for this kind of thing? What was the backstory on the change?

Ah, we were using x-access-token but switched to using the (standard) authorization header because it was mentioned in the spec: http://self-issued.info/docs/draft-ietf-oauth-json-web-token.html#rfc.section.1 (or the RFC: http://www.rfc-editor.org/rfc/rfc7519.txt if you prefer ... you'll need to grep for it) and it keeps it consistent with Hapi Auth Basic: https://github.com/hapijs/hapi-auth-basic/blob/ea5b694adeaf15b3df46243a5501394d6f3c9268/lib/index.js#L35 Also, there's an industry trend (best practice?) for it: https://developer.atlassian.com/static/connect/docs/latest/concepts/authentication.html http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-auth-using-authorization-header.html https://dev.twitter.com/oauth/overview/authorizing-requests

@nelsonic ok so. Do you think the auth scheme should be prefixed into the header value, as it is with other schemes? Basic is prefixed with Basic, Digest has Digest... https://en.wikipedia.org/wiki/Digest_access_authentication Atlassian seems to do it with JWT: Authorization header value: "JWT <insert jwt-token here>" https://developer.atlassian.com/static/connect/docs/latest/concepts/authentication.html Perhaps a win for consistency at the expense of simplicity. I've no strong opinion.
I guess as it's joining the masses in a commonly used header, it'd be polite to follow suit.

Yeah, it's easier to use the authorization key because hapi automatically toLowerCase()es the header keys (so you don't have to remember whether it's upper/lower case) ... oh, and did I mention it's one fewer character to type. :wink:

Does #39 supersede this PR?

This PR is irrelevant now, but the question above is about the value for the Authorization header. Should it be prefixed with JWT to denote the authorisation method being used, as per other schemes like Basic et al?

We could probably have some sort of word to denote that the token is a JWT ... but we are assuming that there will only be one token passed by the client. A multiple-token use case has not yet surfaced ... If we want to use basic auth, we simply register the https://github.com/hapijs/hapi-auth-basic plugin first and it will attempt to authenticate the user first in the request lifecycle. But again, this use case has not presented itself. If you are happy that the readme/demo app is consistent, please close the PR. (Sorry to have created work for you ... will buy you a :beer: next time we :bowling: )
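To make the scheme question above concrete, here is a sketch of a tolerant extractor that accepts either header name and strips an optional scheme prefix such as "JWT" or "Bearer". This is illustrative, not the plugin's actual code; the only assumption carried over from the thread is that hapi lowercases header keys.

```javascript
// Return the raw token from a request's headers, or null.
// Accepts "authorization" (preferred) or the legacy "x-access-token",
// and tolerates an optional "JWT <token>" / "Bearer <token>" prefix.
function extractToken(headers) {
  const raw = headers['authorization'] || headers['x-access-token'];
  if (!raw) {
    return null;
  }
  const parts = raw.trim().split(/\s+/);
  if (parts.length === 2 && /^(jwt|bearer)$/i.test(parts[0])) {
    return parts[1]; // scheme-prefixed, e.g. "JWT abc.def.ghi"
  }
  return parts.length === 1 ? parts[0] : null; // bare token
}
```

With this shape, clients that send a bare token and clients that follow the Atlassian-style "JWT <token>" convention both keep authenticating, so adding the prefix later is a non-breaking change.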
3.7 Effects, Effect Types, Multiple Effects

Gravit has an excellent system of non-destructive effects that can be applied directly to shapes or, thanks to a special type of shape fill, overlaid on an entire document. In this video we'll go through the effects you have to choose from and how you can use them.

Hey, and welcome back to Gravit Designer Quick Start. In this lesson, we're gonna be checking out the cool effects that you can add using the Effects section in Gravit Designer. We're gonna start with a design from a template, so that we've got something to apply our effects to, so you can see things a bit more clearly. So we're gonna go to New From Template, jump into the blog graphic category, and we will grab this template here at the top right, the one about storybooks for children. Now the first thing we're gonna do is something that we touched on in the section on fills: we grab a rectangle and draw it down over the entire area, then in the Fills section, we're going to go up here and set this to a background fill. So that makes it as though we can't see the rectangle itself, but any effects that we apply to this rectangle are gonna be visible on all of these other objects that are beneath the rectangle that we just added in. So just select that rectangle again, and then down here, we have all these different effects that we can choose from. And just like with fills and borders, you can add as many of these as you like. You can stack them on top of each other. You can reorder them into any sorting order that you want. You can blend them together, and use them to make combined effects.
So we're just gonna quickly step through what each of the available effects are, and the settings that they have related to them. So first up, we'll check out the blur effect. And you can see, right away we have a soft blur that's been applied to the entire design, and we can increase the radius of that blur, so that it becomes more intense. And if we want, we can also activate this clip setting here, which will ensure that the blur effect is clipped to only be applied inside the actual shapes that are beneath this effect. So you can see when I turn that back off again, the blurring is bleeding out around the outside edges here. And if we turn it on, then it's nicely and neatly clipped. To get rid of an effect once you've added it, and you don't want it anymore, just hover over it. You'll see the ability to just hide it, if you wish, or you can delete the effect here. Next up we have the Color Adjust effect. So we can use this to make a number of different adjustments to the hue of our document, or to an individual element that we apply this effect to. So we can shift our hue, so you can see as I drag this across, the hue of everything in the image below is being altered. We can also increase the saturation, decrease the saturation, we can change up the contrast, and we can change the brightness. I'll just delete that one as well. Drop shadow and inner shadow I'll show on a different document in just a moment. Then under the More section here, if you hit the little plus button, you get some extra effects that aren't showing by default. So we have the recolor effect, which is very similar to the Color Adjust effect that we just looked at. But in this case, rather than taking the hues of individual elements and shifting them, it recolors everything in the scene with a similar hue. You can change what hue you want to recolor things to, and you can also change how heavily saturated that recoloring is.
You also have the overlay, which is just an easy way to get a gradient overlapping your shape. However, you can of course just use a gradient fill to achieve the same goal instead. And then we also have a vignette. So that gives us a nice soft darkening around the outside of the document. We can change the size of the vignette, and we can change the amount of the vignette. There's a couple more effects that I still want to show you, but they're going to be better demonstrated in a new document. So we're just going to draw another rectangle and give it a nice gradient of some description. All right, so now we have the ability to add a drop shadow to any one of our elements. We can change the X and the Y position of the drop shadow, and we can increase or decrease how blurred that shadow is, so if we up it to, say, 15, you can see it becomes more blurred. And you can change the opacity, so that we have a nice soft shadow, or a very dark shadow. You also have an inner shadow, which works exactly the same way. The only difference is that, this time, the shadow is on the inside. So all the settings are the same. You can change the color to anything you want. So it's the same as the drop shadow, it's just on the inside of the shape instead of the outside of the shape. You also have this cool mirror effect, which is gonna make it look like there's a reflection of the shape down below it. So it just gives you this nice soft faded mirror effect down below. And then finally, we also have a curved shadow effect, which makes the shape look like it's lifted off the background, and that the left and right edges are curved towards you. So you can modify the settings here: you can increase the amount of bend, or decrease it. You can control how soft the shadows are with the radius. The coverage changes the width of the shadow behind the shape. And you can also change up the angle. So, that's just a quick overview of all of the effects that are included here.
It doesn't take too long to go over these, because they're all created to be very user friendly, without needing you to dig through a lot of complexity in order to use these effects. And just like with everything we've looked at so far, you can stack these effects on top of each other to get a kind of combined effect on any element in your design. So far, we've pretty much just been playing with squares and ellipses, and just some random paths. In the next lesson, we're gonna look at how you can create text in Gravit Designer: how you can position it, control its alignment, adjust it, scale it, and make it follow paths. So we're gonna check all of that out in the next lesson. I'll see you there.
A BuildR building exists as an in-scene asset and contains all the data to create the building, along with the generated mesh to be used by your game at runtime.

A volume is a 3D shape that makes up a BuildR building. A single building can be made up of one or more volumes. The volume consists of a 2D footprint extruded over a number of floors. All floors in a volume are the same size. Volumes are by default attached to the ground, but they can be attached above another volume. The volume also defines the minimum size of a wall section. Volumes are contained within each building.

Floorplans detail the inside of a building. Each volume will contain a floorplan for each floor. The floorplan defines rooms and vertical spaces like stairwells, lift shafts and atriums.

The facade of a BuildR building defines what the external face of a volume looks like. It holds a pattern of wall sections and can be expanded or contracted to fit within the dimensions of the facade it is attached to. The pattern is arranged into a grid structure. There is a specific slot for the ground floor so that a single facade can be used both for ground-based facades and for ones that begin a number of floors up. BuildR Facade designs are saved into the Assets/BuildR2/Facades folder.

A wall section defines a single piece of a facade design pattern. Wall sections could be a plain wall, a window, a door or even something else. BuildR Wall Sections are saved into the Assets/BuildR2/WallSections folder.

A window is a basic opening in a wall section. It tends to be centred closer to the top of the wall section. BuildR Windows are saved into the Assets/BuildR2/Windows and Doors folder.

A door is a basic opening in a wall section. It tends to be aligned to the bottom of the wall section so that it can be used easily as a door. BuildR Doors are saved into the Assets/BuildR2/Windows and Doors folder.

Gable designs can be applied to a specific facade of a volume.
They will render the facade above the parapet wall with the user's design. A gable design is made up of five types of shape: Horizontal, Vertical, Diagonal, Concave and Convex. With these shapes it's possible to create most designs we see, from a simple "Crow Stepped" gable through to more elaborate Dutch variants. BuildR Gable designs are saved into the Assets/BuildR2/Gables folder.

A BuildR Surface handles how textures and materials are applied to buildings. It contains further information that allows the generator to account for the size of the texture and how it should tile. BuildR Surfaces are saved into the Assets/BuildR2/Surfaces folder.

Vertical Space

A vertical space is an internal shape that cuts through multiple floors. It will be used to define the position and shape of stairwells, elevator shafts and atriums.
OneNote is a tool for keeping a notebook on your computer. It's a desktop application. But OneNote is also a web-based service that you can access on the Office 365 web site. We'll see how to work with both the application and the web-based tool, including how to open and edit the same notebooks in either location.

- [Narrator] OneNote is a tool for keeping a notebook on your computer. It's one of those desktop applications that you can download and install like Word, PowerPoint and Excel. But OneNote is also a web-based service that you can access on the Office 365 website. And those two things can link together in really useful ways, so I want to start by opening up the OneNote application on my computer. So I'm going to go to the start menu and launch OneNote. And here I have one notebook. I don't have a lot of content here, but I can make notebooks consisting of text, pictures, handwritten or hand-drawn content, lots of stuff. I'll leave it to the Essential Training course to help you explore that. But what I'd like to focus on here is where this notebook is stored. Up near the top left corner I can see the name of this notebook file, and this is actually a menu. If I click on that to open it up, I can see a list of all of the notebooks that I have open. I only have one notebook right now, so there's only one on this list. But if I point at the name of that notebook, I can see the path to where this notebook is stored. This notebook is on my OneDrive. So here's a quick important note: if you sign into Office 365 and then install the desktop applications, so that the first time you launch the OneNote application you're already signed into Office, then it will launch with one notebook stored on your OneDrive, the way I have it now. Now let's jump over to the Office 365 website to see something, so I'm going to close the OneNote application, go over to the website, and I want to access OneNote from here, which I can do by clicking on the OneNote tile.
So when I open OneNote on the web, it automatically opens that same notebook. I can see the same file name up here at the top, and if I go to this little menu here and switch to the same page, then I'll see the same notes that we saw over in the desktop application. So if I have a notebook on OneDrive or on SharePoint, I can access it in both the desktop application and the web-based version. But how did it know to open the same notebook? Well, that happened because I only have one notebook file right now. As a matter of fact, if we jump back to the Office webpage and take a quick look at my OneDrive, of course I can see that the OneNote file is stored here on my OneDrive. And I only have this one notebook file. I don't have any other OneNote documents on my computer or on OneDrive or on SharePoint. That's why it opens by default in both versions of OneNote. Now I can certainly make new notebooks, and if I do that I can choose to store it on OneDrive, which means I can access it on multiple computers and I can share it with others; I can save it on one of my SharePoint groups, and then other people in those groups could access it; or I could store it locally on my computer, and then only I could access it. So let's make another notebook. Now if we go back to OneNote on the web and I go into the file menu, you'll see there is no option to create a new notebook from here. But here's what I can do: for now I'm going to close OneNote on the web, and I want to open up the desktop application again, so I'm going to go back to my start menu, open up the desktop application, and from here I can make a new notebook. So I can go to the file menu, I can go to new, and of course I could choose to store this new notebook on OneDrive, SharePoint or locally on my computer. I want to save this one on OneDrive, so I'm going to select OneDrive, then I'll hit browse, and I'll just leave this in the root directory of my OneDrive.
And I'm going to give this a name. I'm going to call it work notes, and I usually do not do this, but I'm going to add a little bit extra here that's going to help us identify these later, so I'm going to add "created in desktop application". So I'll hit create, and as soon as I create a new notebook on my OneDrive, it asks me if I want to share it. I could share it now, or I could do that later. And the sharing tools work very similarly to sharing other files, which we saw earlier in this chapter. If I wanted to share it now, I could just hit this option to invite people, but I don't want to do that, I just want to see this notebook, so I'll hit not now. And now I can see my new notebook here. And I can open up this menu, and I can see that it's listed along with that other notebook that we had earlier. So let's just add a little bit to this notebook. I'll type in some text here as a title for this page, then I'll add a little text box here, and I'm just going to stop there because we're not really worried about filling in content; we're talking about where these files are saved. So that's just one way to make a new notebook. Let's close this: I'm just going to quit the desktop application, go back to the Office 365 website, and I want to show you another way to make a new notebook, which is to start on my OneDrive or a SharePoint group. So let's go to my OneDrive, and I'm going to go up here to the top, to where it says new. And I'm going to choose OneNote notebook. So I'm creating a new OneNote notebook from here, and I'll call this one project notes, add a little bit extra here just so we can identify it later, and I'll hit create. So now that file is on my OneDrive and it immediately opens it in OneNote on the web. But for now, I just want to close that. And what I'd like to do is go back to my OneDrive again, and I want you to see that we can open one of these notebooks directly from OneDrive, or SharePoint if I was over there.
I could just click on one of these, and it opens up inside of OneNote on the web. And as you might expect, I could share these files from here, and multiple people can open the same OneNote notebook and edit it at the same time. So co-authoring works in OneNote as well. Okay, so I've made two new notebooks, so I have a total of three. I made one in the desktop application, and I made one directly on my OneDrive. I could also do this directly from a SharePoint document library, but I'm not going to do that; it works the same as what we just saw. So now I want to go back to OneNote on the web, so back on the Office 365 web page, opening up OneNote from this tile. Now that I do have multiple notebooks on OneDrive, when I launch the OneNote app on the web, it prompts me to choose which notebook I want to open. So there's this recent list, and there's a My Notebooks list. My Notebooks shows me a list of all of the notebooks I have on my OneDrive. It does not, however, include notebooks that are saved in SharePoint groups, and clearly it doesn't show me files that are stored locally on my computer. But I can always open notebooks from the document library in the SharePoint groups if I want. Or I can go to the recents list and open one of my recent notebooks, even if it is stored on SharePoint. So from here I could choose which one of these I want to open, so I'm going to choose this one. It opens up, and I can continue working on this notebook. I could even go to the desktop application and open any of these notebooks from SharePoint or OneDrive as well. So if you think about it, this system is not too far off from what you could do with Word, PowerPoint and Excel documents: you can save them or create them on SharePoint or OneDrive, you can open them on the web or in the desktop application, and you can share or co-author them with your coworkers. Be sure to check out OneNote 2016 Essential Training to learn more about working with your notebooks.
Under normal operation, AG Grid renders each row as a horizontal list of cells, where each cell corresponds to one column definition. It is possible to switch this off and instead provide one component that spans the entire width of the grid, without using columns. This is useful if you want to embed a complex component inside the grid instead of rendering a list of cells. This technique can be used for displaying panels of information.

You may be wondering what full width rows are useful for. Their usage is very rare and most applications will not use them. If you cannot think of a use case for it, then don't worry: do not use it. Full width rows were initially introduced into AG Grid to support Master / Detail before the grid provided direct support for it. Now that master / detail is directly supported, the usefulness of full width is reduced.

Below shows an example using full width. The following can be noted: the rows for the countries France, Italy and Peru have full width components instead of cells; sorting and filtering all work as if the data was displayed as normal; and the fullWidth (full width) component takes up the entire width of the grid.

A full width component:
- is not impacted by horizontal scrolling.
- is the width of the grid, regardless of what columns are present.
- is not impacted by pinned sections of the grid; it will span left and right pinned areas regardless.
- does not participate in the navigation, rangeSelection (AG Grid Enterprise) or contextMenu (AG Grid Enterprise) of the main grid.

To use fullWidth, you must:
- Implement the isFullWidthRow(params) callback, to tell the grid which rows should be treated as full width.
- Provide a fullWidthCellRenderer, to tell the grid what cell renderer to use when doing full width rendering.

The cell renderer can be any AG Grid cell renderer. Refer to Cell Rendering on how to build cell renderers.
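As a concrete sketch, the two requirements above might be wired into the grid options roughly as follows. This is not the library's own example: the row data shape (a `country` field) and the choice of France, Italy and Peru as full width rows are assumptions taken from the example described above, and a real renderer would build a DOM element in `init` rather than a plain string.

```javascript
// Minimal sketch of full width row configuration.
const FULL_WIDTH_COUNTRIES = new Set(['France', 'Italy', 'Peru']);

const gridOptions = {
  // 1. Tell the grid which rows are full width; only the rowNode is available.
  isFullWidthRow: (params) =>
    FULL_WIDTH_COUNTRIES.has(params.rowNode.data.country),

  // 2. Tell the grid what to render for those rows. Note the renderer works
  //    off params.data (the whole row); params.value and params.column are
  //    absent for full width cells.
  fullWidthCellRenderer: class FullWidthRenderer {
    init(params) {
      // In a browser this would be document.createElement(...); a plain
      // string stands in for the GUI element in this sketch.
      this.gui = `-- ${params.data.country}: ${params.data.notes} --`;
    }
    getGui() {
      return this.gui;
    }
  },
};
```

Because `isFullWidthRow` only receives the row node, the decision must be derivable from the row data itself, which is why the sketch keys off a field of `params.rowNode.data`.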
The cell renderer for fullWidth has one difference to normal cell renderers: the parameters passed are missing the value and column information, as the cellRenderer, by definition, is not tied to a particular column. Instead you should work off the data parameter, which represents the value for the entire row.

The isFullWidthRow(params) callback receives a params object containing the rowNode as its input and should return a boolean: true (use fullWidth for this row) or false (do not use fullWidth and render as normal).

Sorting and Filtering are NOT impacted by full width. Full width is a rendering-time feature. The sorting and filtering applied to the data is done before rendering and is not impacted.

Below shows a detailed full width example including pinned rows and columns. The example's data is minimalistic to focus on how full width impacts rows. For demonstration, the pinned rows are shaded blue (with full width a darker shade of blue) and body full width rows are green. The following points should be noted:
- Full width can be applied to any row, including pinned rows. The example demonstrates full width in pinned top, pinned bottom and body rows.
- Full width rows can be of any height, which is specified in the usual way using the getRowHeight(params) callback. The example sets body fullWidth rows to 55px.
- The pinned full width rows are not impacted by either the vertical or horizontal scrolling.
- The body full width rows are impacted by vertical scrolling only, and not the horizontal scrolling.
- The full width rows span the entire grid, including the pinned left and pinned right sections.
- The full width rows are the width of the grid, despite the grid requiring horizontal scrolling to show the cells.
- The example is showing a flat list of data. There is no grouping or parent / child relationship between the full width and normal rows.

By default, Full Width Rows remain in place while the grid is scrolled horizontally.
However, this may be undesirable for some applications which need to horizontally scroll the full-width rows together with the rest of the rows. In order to have Full Width Rows scroll like normal rows, set embedFullWidthRows=true in the gridOptions. The example below demonstrates the behaviour when Full Width Rows are embedded in the same container as regular rows. Note the following:
- A different instance of the Full Width Cell Renderer is created for each one of the following sections: Pinned Left, Pinned Right, Non Pinned.
- Full Width Rows in the non-pinned section take the whole width of the section and scroll horizontally.
- Full Width Rows in the pinned sections take the whole width of the section.

When using full width rows, the full width cell renderer is responsible for implementing support for keyboard navigation among its focusable elements. This is why, by default, focusing a grid cell with a full width cell renderer will focus the entire cell instead of any of the elements inside the full width cell renderer. Adding support for keyboard navigation and focus requires a custom suppressKeyboardEvent function in the grid options. See Suppress Keyboard Events.

An example of this is shown below, enabling keyboard navigation through the full width cell elements when pressing ⇥ Tab and ⇧ Shift+⇥ Tab:
- Click on the United Kingdom row, press ⇥ Tab a few times and notice that the full width France row can be tabbed into, along with the button, link and textbox. At the end of the cell elements, the tab focus moves to the next cell in the next row.
- Use ⇧ Shift+⇥ Tab to navigate in the reverse direction.

The suppressKeyboardEvent callback is used to capture tab events and determine if the user is tabbing forward or backwards. It also suppresses the default behaviour of moving to the next cell if tabbing within the child elements.
If the focus is at the beginning or the end of the cell children and the user is tabbing out of the cell, the keyboard event is not suppressed, so the grid's default navigation can move focus out to the adjacent cell. Also, when moving backwards, the focus needs to be manually set while preventing the default behaviour of the keyboard press event.
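The tab-capture logic just described could be sketched as follows. This is not the grid's own example code: the helper that collects the renderer's focusable children in tab order is a hypothetical stand-in, and the direction and edge handling simply mirror the description above.

```javascript
// Sketch of a suppressKeyboardEvent callback for full width rows.
// getFocusableChildren is a hypothetical helper returning the renderer's
// focusable elements in tab order for the row the event occurred in.
function makeSuppressKeyboardEvent(getFocusableChildren) {
  return (params) => {
    const event = params.event;
    if (event.key !== 'Tab') return false; // only intercept Tab

    const children = getFocusableChildren(params);
    const idx = children.indexOf(event.target);
    if (idx === -1) return false; // focus is not inside the renderer

    const backwards = event.shiftKey;
    const atEdge = backwards ? idx === 0 : idx === children.length - 1;

    // Within the children: suppress the grid's default "move to next cell"
    // so the browser can tab between the child elements instead.
    // At the first/last child: do not suppress, so focus leaves the cell
    // (when moving backwards, real code would also set focus manually,
    // as noted above).
    return !atEdge;
  };
}
```

Returning `true` suppresses the grid's own handling of the key press, which is what lets the browser's native tab behaviour operate inside the renderer.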
Calc sometimes makes assumptions during algebraic manipulation that are awkward or incorrect when vectors and matrices are involved. Calc has two modes, matrix mode and scalar mode, which modify its behavior around vectors in useful ways.

Press m v (calc-matrix-mode) once to enter matrix mode. In this mode, all objects are assumed to be matrices unless provably otherwise. One major effect is that Calc will no longer consider multiplication to be commutative. (Recall that in matrix arithmetic, `A*B' is not the same as `B*A'.) This assumption affects rewrite rules and algebraic simplification. Another effect of this mode is that calculations that would normally produce constants like 0 and 1 (e.g., a - a and a / a, respectively) will now produce function calls that represent "generic" zero or identity matrices: `idn(0)', `idn(1)'. The function call `idn(a,n)' returns `a' times an nxn identity matrix; if n is omitted, Calc doesn't know what dimension to use and so the idn call remains in symbolic form. However, if this generic identity matrix is later combined with a matrix whose size is known, it will be converted into a true identity matrix of the appropriate size. On the other hand, if it is combined with a scalar (as in `idn(1) + 2'), Calc will assume it really was a scalar after all and produce, e.g., 3.

Press m v a second time to get scalar mode. Here, objects are assumed not to be vectors or matrices unless provably so. For example, normally adding a variable to a vector, as in `[x, y, z] + a', will leave the sum in symbolic form because as far as Calc knows, `a' could represent either a number or another 3-vector. In scalar mode, `a' is assumed to be a non-vector, and the addition is evaluated to `[x+a, y+a, z+a]'.

Press m v a third time to return to the normal mode of operation. If you press m v with a numeric prefix argument n, you get a special "dimensioned matrix mode" in which matrices of unknown size are assumed to be nxn square matrices.
Then, the function call `idn(1)' will expand into an actual matrix rather than representing a "generic" matrix. Of course these modes are approximations to the true state of affairs, which is probably that some quantities will be matrices and others will be scalars. One solution is to "declare" certain variables or functions to be scalar-valued. See section Declarations to learn how to make declarations in Calc.

There is nothing stopping you from declaring a variable to be scalar and then storing a matrix in it; however, if you do, the results you get from Calc may not be valid. Suppose you let Calc get the result `[x+a, y+a, z+a]' shown above, and then stored `[1, 2, 3]' in `a'. The result would not be the same as for `[x, y, z] + [1, 2, 3]', but that's because you have broken your earlier promise to Calc that `a' would be scalar.

Another way to mix scalars and matrices is to use selections (see section Selecting Sub-Formulas). Use matrix mode when operating on your formula normally; then, to apply scalar mode to a certain part of the formula without affecting the rest, just select that part, change into scalar mode, press = to resimplify the part under this mode, and then change back to matrix mode before deselecting.
Rust vs Go is one of the most interesting oppositions in modern software development. Also, it’s one of the most important decisions for programmers to make. But can either of these highly appreciated programming languages actually be regarded as better? Well, the above-mentioned Stack Overflow survey shows that utilizing them lets software developers earn similar yearly remuneration, accounting for $77,530 in the case of Rust, and $75,669 as far as Go is concerned. Of course, this is just the beginning of the long list of what Go and Rust have in common. But this doesn’t mean that there are no differences in the Rust Go combo. In this article, we’ll try to help you decide whether you should go Rust or maybe try Go instead in the near future.

Table of contents:

Isn’t Rust a bit rusty? All you need to know about it

As already mentioned, Rust has been chosen again and again as the most loved programming language by developers surveyed by Stack Overflow. Undoubtedly, those repeated results over the years mark a huge success for this programming language, considering that this environment changes rapidly, and there are more and more competitors showing up regularly. But where does this affection come from? Well, what makes Rust fans love it are features such as impressive performance, memory safety, safe concurrency, cutting-edge tooling, great documentation, and useful tools such as an auto-formatter, auto-completion, and other supporting functions. Boiled down, Rust is a high-level, general-purpose, cross-platform, multi-paradigm, concurrent, and functional programming language. What’s also crucial in terms of Rust code or Rust programs is that Rust is syntactically similar to C++.

Is Go worth a go? What makes it useful?

And what can be said about Go? In short, Go is a statically typed, object-oriented, concurrent, and functional programming language.
Also, Go belongs to the compiled languages, which means it compiles to machine code directly (with no additional installations necessary) and outperforms interpreted languages. Sometimes it is referred to as Golang, although this is not correct, as Golang.org is just Go’s domain name and not this language’s legitimate name. On its website, Go calls itself "an open source programming language that makes it easy to build simple, reliable, and efficient software". Many users also praise it for being easy to read and easy to comprehend. Other interesting features include scalability, flexibility, automatic memory management, powerful error checking, detection of unused variables, and impressive speed.

Rust vs Go – which one is better?

Rust & Go similarities

The list of Rust and Go similarities seems to be endless. They are nearly age-mates, with Go first appearing in November 2009, and Rust debuting only a few months later, in July 2010. They both have huge corporations standing behind them – Google in the case of Go, and Mozilla as far as Rust is concerned. They even have their own mascots – Ferris is the (unofficial) one of Rust, and the famous Gopher is associated with Go. Both Go and Rust are compiled, concurrent, and multi-paradigm system programming languages, perfect for supporting parallel computing environments.

Rust vs Go key differences

Rust alone is famous for its exceptional run speed, being secure, well-designed, and offering many advanced yet practical features. Also, it has useful inbuilt dependency and build management. Moreover, software developers can take advantage of vast Rust community support and make use of The Rust Programming Language book. However, despite having access to those resources, some users say Rust is not that easy to learn, as it has a steeper learning curve than Go.
In turn, Go’s advantages include simple language syntax, outstanding code readability, a great ecosystem, community support, and a variety of available libraries. On the downside, Go may be regarded as oversimplified and a little superficial, which may limit its use in some scenarios. Moreover, the fact that Go forces a garbage collection run at least every 2 minutes can cause problems such as latency spikes, worsened user experience, and failing to meet performance targets. Rust, on the other hand, has no runtime or garbage collector, and this makes it "blazingly fast and memory-efficient", as it calls itself on its website. For this reason, Rust’s memory management is outstanding, with memory safety guaranteed, memory used by the program being freed when it is no longer needed, and no runtime memory safety bugs. Rust itself highlights the fact that its ownership model and rich type system "guarantee memory-safety and thread-safety".

Rust or Go? Find the perfect match for your next project

Being aware of the pros and cons of utilizing both Go and Rust makes software developers more confident in deciding between them. But can we indicate a winner in the "Golang" vs Rust opposition? Well, in the major Golang vs Rust performance category, Rust takes the lead, as it is regarded as way more efficient and offers better execution speed and development speed. However, it may be a little worse than Go where compilation speed is concerned. You can move from C or C++ to Rust, too, and succeed similarly. Rust, in general, can be a great fit for many use cases, as it is a general-purpose, low-level language. Some of its use cases include browser engines, operating systems, microcontroller applications, distributed online services, and embedded devices.
Undoubtedly, it has great potential, is ahead of its time, is loved by young software developers, and is definitely worth giving a try in the year 2021. And to you, which one is a better choice for 2021’s challenges – Go or Rust? What makes the chosen language better or more appropriate than the other one? What are its main advantages that software developers can utilize in their day-to-day work?
Confessions of an Evernote Convert

Sometimes, you see something whose utility jumps out at you right away. I remember the first time I saw a web browser, back in the old days before "World-Wide Web" was even a household term. I literally knew what I was looking at was revolutionary. Still, even I could not have conceived what the WWW would turn into and how quickly it would take off.

Then, there are notes applications. OK, so you can take notes. What’s the big deal? After all, one of the first applications for any computer was a text editor, which was needed for writing programs if nothing else. I’ll admit it: I did not get it. Lifehacker had Evernote articles, their commenters would push it at times, but it all seemed rather overblown to me. Gina Trapani was (and probably still is) an Evernote enthusiast, but that did not faze me, either.

It still didn’t seem a big deal when I got a Windows Pocket PC. After all, it would sync Notes from within Outlook. So, when people were foaming at the mouth after OneNote came out, I just didn’t get it. And, in fact, things still had to change before I really cared. But OneNote was useful, and it did sync with my HTC Windows Phone. I was able to keep notes on it and transfer them to my PC that had OneNote on it. Pretty neat. Still, what did this do that Outlook Notes didn’t? Other than formatting, not much. Strictly speaking, there were phone apps that could read Office documents, so even the formatting just was not compelling.

With Office 2010, though, Microsoft began putting out a version of OneNote that would work on the iPhone and sync using Windows Live Drive (which is now SkyDrive). Finally, I could type up a shopping list, sync with the cloud and then cross items off the list as I bought them. Maybe this wasn’t the biggest deal in the world, but it was finally useful beyond indexing and creating notebooks. Still, I was used to putting things into Outlook’s Notes.
So, changing over to Linux gave me the realization that I wasn’t going to be able to continue this. Worse, it still relied upon syncing via cable, and if something happened to interrupt syncing, the notes got all out of sync (literally). I picked Evernote as a replacement because it is the most "OneNote-like". It does not have a Linux version, so I looked into NixNote (renamed from NeverNote, though the documentation still uses the old name), which is a Linux frontend. They are very similar, so it wasn’t too bad converting.

This morning, however, all of this ambivalence disappeared. I wanted to take a picture of something and insert it into a note. If there is an easy way to do this in OneNote, I’ve never seen it or had a practical reason to do it. However, I opened up Evernote, went to the note I wanted to insert the image into, snapped a picture, and Evernote gracefully inserted it into the note. It was easy. It was intuitive. It floored me. It’s not very often I get to use something that is that easy to use.

The Evernote iPhone app does have some glitches, though. I could not figure out how to delete a note, for example. The organization is a little odd, as it defaults to showing by date, but if you search for a specific phrase (and usually I can remember the name of the note if nothing else), then that is sort of immaterial.
I could use some advice on building a computer. Most if not all parts will be bought from newegg.com or microcenter.com. My budget is roughly $1000. I would like to keep it as low as possible, but I would also like the best parts that will last as long as possible (keep up with upcoming games). What I'm using the computer for: Gaming (Skyrim - Diablo 3 - Bioshock 3 - Planetside 2 - other MMOs)

SSD over HDD - Is it worth the cost and lowered space to use an SSD for the OS/installed programs and an HDD for storage?

Core i3 vs i5 vs i7 - Is it worth the cost of the i7 to future-proof myself, or should I just stick to an i3/i5? LGA 1155 vs 2011?

I currently have a GeForce 8800 GTX 768MB - Is there anything I can do with it to still get some use out of it? I heard I can use it as a dedicated PhysX card; if I do, what would be the benefit?

Radeon vs GeForce - I usually prefer GeForce, but was wondering what everyone's opinions were on graphics card choices, and also if anyone had any news on those new cards I heard are coming out mid-January?

Onboard sound or card - Does having onboard sound put more of a strain on your system than having a dedicated sound card, or is this just a myth? Not really doing anything professional-grade audio level, just gaming.

Power Supply - My current PSU is 620W. Is that still enough power to handle everything? 2 hard drives, 1 DVD-ROM, about 4 fans. Or do I need a special PSU for the iCore procs?

It is one of those things that is nice to have, but hardly required. You can get along just fine without it and your gaming performance won't be that hugely impacted. The i5 2500k is generally considered the gold standard gaming processor right now. Processors above it generally cost a lot more bucks than they give extra bang, and processors below it, while having quite a lot of bang for the buck, often just don't have enough bang in general. The i7 2600k is nice too, but the significant gains it has over the i5 2500k tend not to be in the area of gaming.
The 3960s and stuff are just going to be out of this budget range regardless of their advantages and disadvantages. TBH, I would not bother trying to keep the GeForce 8800 around. There is some potential to use it as a PhysX card, but I would say it's probably not worth it for the benefits.

Radeon vs GeForce - I have always been on a pretty strict budget and ATI owns this space almost completely. Nvidia barely even tries to offer good low-cost cards; Nvidia just sells high-cost cards that perform really well, and that is about it. I would strongly suggest you at least consider ATI cards since you are on a "fairly" low budget. A single 6950 would use about 25% of the budget and would perform quite well without needing a dedicated card for PhysX. That would let you get away with a micro board (~$60) instead of a regular ATX board (~$120). The extra $60 saved there could pay for part of the video card.

Onboard sound is generally good enough and doesn't put much strain on the system.

PSU - It depends on the brand, but it might be tough for this to power the computer if you do intend to have 2x video cards and a regular ATX board. PSU brand makes a large difference. One that has 620 on the label might be able to give 700 without blowing up; another with 620 on the label might not be able to break 400 without blowing up.
Developers have increasingly begun to use web frameworks to develop reliable and high-quality interactive web applications. This is because it is a reliable way to create the desired product and save time in the process. Therefore, in this article, we will talk in more detail about frameworks for backend development. Backend development requires certain knowledge and skills, so it is better to contact specialists such as fireart.studio when working on backend projects.

What Are the Types of Frameworks?

It’s no secret that today there are two main types of frameworks. The first is client-side and the second is server-side. What is the difference between them? We will find out now. A client-side framework is an interface platform that helps implement the user interface. Thus, using different development environments, you can create truly unique products. Using the backend, you can create simple pages, landing pages, and various forms. You can also significantly increase your security during various hacker attacks. You will be able to work out all the small details and make your application work more accurately.

What Is a Backend Framework?

With the help of the backend, you can build the desired website architecture. That is, it is a special library of tools and modules that provides ease in creating a structure. This greatly affects the success of your project. With the help of frameworks, backend developers simplify their tasks.

Why Do Developers Use Backend Frameworks?

As a rule, such frameworks are used to simplify and speed up work. That is, developers get an already defined layout, with many tools, functions, and data that make it easier to start the process of creating the desired product. Let’s move on to the main frameworks for backend development, which can greatly simplify your workflow.

Django is a web application framework based on the Python language.
The framework has one important rule: DRY (Don’t Repeat Yourself). Systems based on Django are built from one or more applications, which are recommended to be made reusable and pluggable. Examples of successful applications that have been built on this framework are Spotify, Dropbox, and the Washington Post. According to some research, Django was one of the most popular and in-demand frameworks in early 2022. Among its main advantages are good scalability, security, and asynchronous programming.

Express.js is an adaptable framework that was developed specifically for Node.js applications. The platform contains a huge number of tools and functions for creating high-quality products. When using Express.js, you can save money, because you don’t have to look for many people on the team to work separately on the backend and the interface. The software supports both the frontend and the backend, which makes the platform very convenient to use.

Another platform that deserves attention is Flask, a framework specialized in web applications written in Python. Its key feature is that you can quickly build a web application with just one Python file. This platform can be used both for the development of pilot projects and small sites and for large e-commerce developments. That is, the scalability of the core allows you to take on a project of any type and complexity.

These are the frameworks you can use to make your work process easier during the development of your project. Which one to use is up to you. And the best way to decide is to analyze your needs and goals.
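To illustrate the "one file" point made about Flask above, here is a minimal sketch. The route and message are made up for the example; this is not an official Flask snippet.

```python
# app.py - a complete Flask web application in a single file.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # A trivial page; real apps would render templates here.
    return "Hello from a single-file Flask app"

if __name__ == "__main__":
    # Starts the development server on http://127.0.0.1:5000
    app.run()
```

Running `python app.py` is all it takes to serve the page locally, which is the kind of low-ceremony start that makes Flask popular for pilot projects.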
[Bioperl-l] Offering to help
cjfields at illinois.edu
Tue Mar 3 12:14:19 EST 2009

On Mar 3, 2009, at 8:56 AM, Bryan Bishop wrote:

> On Tue, Mar 3, 2009 at 8:34 AM, Chris Fields <cjfields at illinois.edu>
>> For an example of how to get involved with BioPerl search the mail
>> for the recent Bio::DB::HIV work by Mark Jensen, now in BioPerl 1.6.
> Neat. I'll have to look into that.
>> We would happily support lab protocols if we have a standard to go
>> by (i.e.
>> schema to work with) and someone willing to code and maintain modules
> Have you looked into CLP-ML? I wrote up a pcr.xml example the other
> day, I think I linked to it in my second to last post (maybe). (I'm on
> the run, sorry for not linking at the moment.)

The key issues for inclusion into a stable bioperl release are (1) support and (2) stability; we can't have something going to CPAN that may have a fluctuating API w/o a decent deprecation cycle. We can, however, make space for in-development modules and module API changes (see recent talk of bioperl-dev). One could:

1) install a stable bioperl release (1.6)
2) optionally install or set PERL5LIB to bioperl-dev (for the bleeding edge)

For a Moose-based bioperl implementation I suggest a separate repo completely. We're using svn currently on dev.open-bio.org, though I and a few others are also using git. I'm neutral on the matter but it's possible the consensus may be to keep everything in the open-bio svn repo (not everyone has git or uses it).

>> related to their use. As for a place to fit in, we have the various
>> Bio::Biblio modules; I could easily see protocols fitting into that
>> namespace, though I have to admit I'm unfamiliar with the overall
>> structure/purpose of Bio::Biblio.
> What? Is that a BibTeX parser module?
> - Bryan
> 1 512 203 0507

I think Bio::Biblio is the generic class structure for various bibliographic sources; the parsers would be in Bio::Biblio::IO (no BibTeX AFAIK). They're probably in need of some work.
Re: [PATCH v2 12/12] docs: path-lookup: update symlink description
Date: Sun Apr 18 2021 - 22:00:04 EST

On Tue, Mar 16 2021, Fox Chen wrote:

> instead of lookup_real()/vfs_create(), i_op->lookup() and
> i_op->create() will be called directly.
> update vfs_open() logic
> should_follow_link is merged into lookup_last() or open_last_lookup()
> which returns symlink name instead of an integer.
> Signed-off-by: Fox Chen <foxhlchen@xxxxxxxxx>
> Documentation/filesystems/path-lookup.rst | 13 ++++++-------
> 1 file changed, 6 insertions(+), 7 deletions(-)
> diff --git a/Documentation/filesystems/path-lookup.rst b/Documentation/filesystems/path-lookup.rst
> index eef6e9f68fba..adbc714740c2 100644
> --- a/Documentation/filesystems/path-lookup.rst
> +++ b/Documentation/filesystems/path-lookup.rst
> @@ -1202,16 +1202,15 @@ the code.
>    it. If the file was found in the dcache, then ``vfs_open()`` is used for
>    this. If not, then ``lookup_open()`` will either call ``atomic_open()`` (if
>    the filesystem provides it) to combine the final lookup with the open, or
> -  will perform the separate ``lookup_real()`` and ``vfs_create()`` steps
> +  will perform the separate ``i_op->lookup()`` and ``i_op->create()`` steps
>    directly. In the later case the actual "open" of this newly found or
>    created file will be performed by ``vfs_open()``, just as if the name
>    were found in the dcache.
> 2. ``vfs_open()`` can fail with ``-EOPENSTALE`` if the cached information
> -  wasn't quite current enough. Rather than restarting the lookup from
> -  the top with ``LOOKUP_REVAL`` set, ``lookup_open()`` is called instead,
> -  giving the filesystem a chance to resolve small inconsistencies.
> -  If that doesn't work, only then is the lookup restarted from the top.
> +  wasn't quite current enough. If it's in RCU-walk -ECHILD will be returned
> +  otherwise will return -ESTALE. When -ESTALE is returned, the caller may

"otherwise -ESTALE is returned". If you don't like repeating "is returned", then maybe: "... -ECHILD will be returned, otherwise the result is -ESTALE".

> +  retry with LOOKUP_REVAL flag set.
> 3. An open with O_CREAT **does** follow a symlink in the final component,
>    unlike other creation system calls (like ``mkdir``). So the sequence::
> @@ -1221,8 +1220,8 @@ the code.
>    will create a file called ``/tmp/bar``. This is not permitted if
>    ``O_EXCL`` is set but otherwise is handled for an O_CREAT open much
> -  like for a non-creating open: ``should_follow_link()`` returns ``1``, and
> -  so does ``do_last()`` so that ``trailing_symlink()`` gets called and the
> +  like for a non-creating open: ``lookup_last()`` or ``open_last_lookup()``
> +  returns a non ``Null`` value, and ``link_path_walk()`` gets called and the

"NULL", not "Null".

With those changes,
Reviewed-by: NeilBrown <neilb@xxxxxxx>

Thanks a lot for all these improvements!! and apologies for the delay in the review.

> open process continues on the symlink that was found.
> Updating the access time
May 17, 2021

If you are a developer like me, you could say that daily we go to our jobs, talk with our colleagues and friends, analyze problems and develop products, solve bugs, and laugh at jokes about technologies and frameworks; but we do not always feel we are in a quality environment, or working with interesting technologies and challenges that motivate us to produce our best solutions. Facing this scenario, it's not always clear which characteristics would improve the quality of life we have at work, or which challenges would motivate us to build an amazing product.

If you are a CSO (Chief Something Officer) building a startup, or the leader of an established development team, you might have thought about how to form a high-performance team, or how to keep team members motivated to deliver the best solutions. For both cases, this post proposes some ideas and methodologies that make developers more engaged and motivated to produce incredible solutions while evolving as professionals and as people. These ideas are based on a few years of thinking about this: discussing with other developers who are more or less motivated, and talking with tech leads, the development community, and company directors and owners. To ensure that all aspects of a team are addressed, I will cover both the technical vision and the people-management vision.

If the technology department has several teams, each of these teams must respect the chosen technologies and be aware of the reasons why those technologies were chosen. Many times, I have noticed that when this mutual respect does not exist, teams begin to feel better than others, demanding more resources, or failing to see the importance of what is produced by others. An interesting solution is for each team to present the processes it follows, its roadmap, the importance of what is being developed, and why this is so relevant to the company.
The development team and its developers need autonomy to choose the technologies that will be used within their projects; a lack of autonomy leads to demotivation, or even to poor performance in the development process. It is also necessary to be transparent about why certain technologies are being chosen, making their strengths and weaknesses clear to all developers. A presentation defending the candidate technologies to the development team is a good dynamic to clarify doubts and build understanding of which technology will be chosen. This also helps ensure the technologies will be used in the long term: unilateral choices, made without the knowledge of the whole development team, usually cause centralization of knowledge and the consequent abandonment of the technology when a certain developer leaves the company.

Pair programming among team members levels them up and decentralizes knowledge. Code reviews without preciousness and egocentrism, and a coding style defined by all members, also make the code easier for every dev to understand. Just as important, these practices integrate the members and allow for the emotional development of developers within the team. In my experience doing this, it was very important for me to learn to say that I did not know something, and it reinforced trust among team members.

Promote dojos and hackathons to solve problems faced by the entire team; think together with the team about what would be interesting to implement and what excites everyone. This is an initiative that motivates developers a lot: we all like to create new things or solve existing problems. The reward of creating something new, together with your work colleagues, is definitely very important for the evolution of the team. Participate, or allow your team to participate, in technology events, and encourage them to look for more than just new technologies: to meet people as well.
I currently work at a company that I discovered because of people at community events; this allowed me to get to know the company and the professionals who worked there.

If you are a leader hiring a new team member, or even assembling a team, evaluate the candidate's profile: it is essential that they fit the team you have, not only in technological knowledge but also in behavioral profile. To help hire the best candidate, build into the hiring process a period of time that the candidate spends with the team members. This interaction is the best way to identify whether the candidate fits the position. At the company where I work today, I spent about an hour with the team I would be working with; this time was crucial for me to understand the team's profile, and it also served for the hiring manager to assess whether I had the desired fit. The company's HR professional probably has knowledge of behavioral analysis and cultural fit, and can certainly assist in this process. (As I write this paragraph, I'm thinking that maybe an article with ideas for a good dev hiring process makes sense.)

And finally, as a hirer you might say: "The market is short of experienced devs; I don't have the luxury of sticking to cultural fit and profile. Besides, we can adapt this dev to our culture and work profile." Remember that not every professional will be willing to change, and if there is this unwillingness, in the long term it can generate friction in the team and demotivate the members.

Developers need to understand the purpose of the product being developed: why it was conceived, what the particularities of its market are, and what problem the product proposes to solve. Some developers may see this point as something bad ("I'm a developer, I need to ensure technical quality, not care about the product"), but we know that things don't work that way.
This awareness of the commercial viewpoint increases ownership of the product and also allows commercial and development teams to talk in better tune about new features and prioritization. A team that does not understand the commercial nuances can cause rework, delivering features that should do X but instead do Y, causing frustration for the devs, who don't know exactly why they need to focus on a specific value delivery, and increasing demotivation in a moment of pressure. One strategy to improve this understanding is to promote communication between sales and development: bring developers in to explain the technical importance of what is developed to the CEO and the entire commercial side (sales, marketing, pre-sales, directors), and bring market knowledge to developers, showing how sales can evolve at that specific moment with an important delivery.

It's essential to have a tech lead who inspires people. Tech leads are usually very experienced seniors, and this is a crucial point in ensuring that team members feel inspired to learn and overcome daily challenges. This tech lead also needs good soft skills: knowing how to guide the team, understanding the team's peculiarities, and taking advantage of each member's strengths.

Have 1:1 meetings periodically. A 1:1 is essentially a round of updates and feedback between a tech lead or manager and a team member. The idea is that in a few minutes (usually 15), it's possible to address the frustrations and successes the developer has experienced in that period. This is very important, as it gives the developer a voice to complain, congratulate, or suggest changes that improve day-to-day life, processes, methodologies, or anything else associated with the work environment. It is a two-way street, and it also allows the tech lead running the 1:1 to provide productive and constructive insights on the developer's progress.
5 tips on how to do one-on-ones

Establishing sprint reviews and planning sessions is very important. We are human beings, and as such we tend to postpone activities that don't have well-defined deadlines. For a team not completely aligned in culture, methodology, and development speed, this is a very good strategy. It's not necessarily about thinking in sprints or sticking to methodologies, but about making estimates with a delivery date in mind. Taking the team's capabilities into account is also important: you shouldn't put excessive pressure on the members, but an environment with some pressure can stimulate the evolution of both the members and the product.

Promoting small rewards and recognitions is a strategy to keep your team focused and motivated. It doesn't need a regular frequency, but whenever there is a success, whether a completed sprint or a launched release, reward the team for the successful effort. This shows the team that the activity succeeded, and it releases endorphins that associate the good work done with the reward.

When a process fails, or a bug goes into production, it's important that there are no public accusations against the developer who caused the problem. Instead, the team should commit to understanding the problem, correcting it, and evaluating the process that led to its occurrence. Later, in an individual and private assessment, it's possible to identify the points that led to that failure. It's also always necessary to establish the importance of ensuring value delivery to the client, not just completing the feature.

Well, these are the points I've raised. If you disagree with any, or agree and strongly support them, don't hesitate to comment on my twitter (I haven't added a comments section here yet, this is the MVP of the blog) or send me an email. If you found an error in the text and would like to correct it, open a PR in the website's repo; I will definitely look at it and accept it :).
I want to thank the FloripaJS community, which helped a lot by raising, discussing, and commenting on the points of this article at the last meetup.
I think of a thesis as sort of an immature document, by definition. It delineates the university era from the professional era, and is bound to lack some practical elements. Should a student refrain from publishing his or her thesis on the Internet, for this or any other reason?

Every document is, in the way you define it, immature. With everything you write, your writing improves. Your papers will not be better because you have a degree; they will be better because you learned something (i.e., made mistakes) before. However, there might be reasons not to publish your thesis, but it is not so different from normal working papers. Maybe your thesis is just bad: you did not get any meaningful results, you made significant mistakes, etc. So if your university allows it, publish it online if you want to (or if you think it may be worth reading, or you cannot think of a good reason not to); everyone knows it is only a master's thesis and will read it in a different way than they would read a published paper (or not at all).

Like you say, a thesis is not necessarily as thoroughly peer reviewed as a journal article. In my opinion, that doesn't mean the thesis should not be publicly available, just that a reader should keep in mind that the document is a thesis when reading it and (potentially) citing it. The onus then falls on the reader of a thesis, not on the writer. In the end, the decision of whether to make your thesis public is mostly about your university's policy. At the University of Waterloo, for example, all accepted theses are publicly available by default through a web portal. Other universities have IP policies that may not allow you to publish your thesis (though these policies are rarely enforced). When the policy is ambiguous or left up to you, I would strongly recommend making it publicly available.

Unless there is a compelling reason not to, yes, you should put your thesis online.
A thesis is supposed to represent your first foray into academic research. The whole point of academic research is to make a contribution to the body of human knowledge, and share it with the academic community. As such, I would encourage you to make it as convenient as possible for the community to read your thesis, and posting it online is a good way to achieve that. If your discipline uses arXiv or a similar preprint or document server, I would encourage you to post it there as well; that way it will remain accessible even if your web site moves. I would not worry about your thesis containing "immature" work. It's a thesis; everyone knows that it's your first research product, and nobody is going to judge you harshly in 20 years because your thesis wasn't a work of staggering genius. But on the other hand, they very well might still find it useful. You spent a lot of time writing that thing; don't you want it to be able to do some good? Also, there's a good chance that your thesis is already publicly accessible (via interlibrary loan from your university, or a commercial thesis database, or something similar). If so, then posting it just saves (possibly a lot of) time and effort for those who want to read it. Here is a non-exhaustive list of compelling reasons why you might not want to post the thesis. Your university's intellectual property policies forbid it. (If so, shame on the university. This seems unlikely to me, but some other answers think it's plausible, so I suppose you should check.) You have submitted parts of your thesis as a paper to a journal / conference / similar outlet, or plan to do so, and the journal's copyright agreement forbids you to post the thesis. (If so, shame on the journal.) Your thesis contains collaborative work (such as jointly authored papers), and your coauthors object to you posting the thesis. (If so, shame on your coauthors.) You have published your thesis as a book, or plan to do so. 
In that case, posting it might hurt sales of the book, and might also be forbidden by your agreement with your publisher. Your thesis contains ideas of commercial value which are not yet patented, or un-patentable ideas that you plan to exploit commercially. (But as noted above, your thesis may be available to the public already, albeit less conveniently.) Your thesis contains a serious error that invalidates its results. (But you might still want to post it along with an addendum that explains the error; there may be parts of the thesis that people would still find useful.)

My school, while I was doing a Master of Advanced Studies (MAS), required the thesis to be publicly available. It only grants an exclusion from this for commercial reasons, when the thesis is sponsored by a company; even then, the abstract will still be published. The school provides a search for all theses, for example here (the search is German, but papers are sometimes English): I like the idea of publishing my thesis; it's a work I am somewhat proud of. I even have a download link on my personal homepage.

If your thesis contains classified material, you obviously cannot put it online. Otherwise, do it. You will get feedback, and people will point out the good and bad things. This is a standard scientific process, and also a good opportunity for you to optimize your neural network (aka "learning"). If it turns out that your thesis is overly bad, you can still take it offline later. And if someone later still has a copy and asks you why your thesis was so bad, then accept it and explain that you know that and learned from it. There is no shame in improving.
Properly handle Event Frame edits on Event Frames that are generated by the PI Analysis Service

Users manipulate open event frames that are generated by the PI Analysis Service. The manipulation can be:
1) Write to an attribute on the EF
2) Write an annotation
3) Acknowledge the EF

While these manipulations are being performed, the Event Frame is checked out to the user performing the changes. From time to time, it happens that a closing event is received and the PI Analysis Service fails to close the Event Frame because it is checked out to another user. This causes the analysis to stop, and the event frame remains open. A possible solution could be: retry closing the Event Frame multiple times (with a configurable timeout or retry count limit).

A follow-up question would be: how does PI Vision handle the case where a user enters a reason, or acknowledges an event frame? I guess the event frame will be checked out for a short period of time by PI Vision, and the very same situation could happen here as well: an analysis could try to enter the event frame end time exactly during the short period in which PI Vision has checked out the event frame to store the reason/acknowledgement. In that case, too, the analysis would fail and stay broken indefinitely.

We use event frames to mark alarms (generated by event frame generation analyses), plus a custom-developed web application that allows our users to comment on and acknowledge the alarms. Each alarm can be acknowledged only once, and commented on multiple times. The time that an event frame is checked out is generally short (1-3 seconds), but that is enough to make this problem happen multiple times per month, given the number of acknowledgments and comments that we currently have.
If the PI Analysis service is the "owner" of the event frame, then it should be possible to comment on and acknowledge the event frame via the analysis service, but this cannot be done, we therefore check out the event frame from our application and check it in afterwards again. I can understand that a collision can always happen in this scenario (this is actually the exact reason for a check-in/check-out mechanism), but what is very unfortunate is that the event frame generation analysis breaks when such a collision happens. It stops working and never ever starts on its own afterwards. The current behavior assumes that PI Analysis Service is the "owner" of the event frame that it creates, therefore we assume that other users do not write to an event frame that doesn't belong to them. Similarly, for a PI Point, we assume the interface that is writing to a PI Point to be the "owner" of that data stream. We assume that two users, e.g. two interfaces, would not write to the same PI Point (data stream). Earlier in this discussion, it's mentioned that a custom application is being used to write to event frames. Can you describe what you're writing, the criteria that would lead you to start writing, and how often this is done? What is very bad about the current behaviour is that it is not just one event frame that does not get closed, but that the analysis is stopped afterwards and no new event frames are created at all anymore as soon as this happened once. To correct the problem, you have to identify the stopped analysis, and manually restart it in PI System Explorer. As discussed with Techsupport, it currently is not possible to detect and correct this error otherwise (e.g. via a Powershell script etc.)
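The retry idea proposed above can be sketched in a few lines. This is only an illustration of the retry/limit logic, not real AF SDK code: `CheckedOutError` and the `close_fn` callable are hypothetical stand-ins for whatever exception and close call the actual SDK exposes.

```python
import time

class CheckedOutError(Exception):
    """Stand-in for 'event frame is checked out to another user'."""

def close_with_retry(close_fn, event_frame, end_time, retries=5, delay_s=2.0):
    """Try to close an event frame, retrying while it is checked out.

    Returns True on success, False if all attempts failed, so the failure
    can be logged or alerted instead of silently stopping the analysis.
    """
    for attempt in range(1, retries + 1):
        try:
            close_fn(event_frame, end_time)
            return True
        except CheckedOutError:
            if attempt == retries:
                return False
            time.sleep(delay_s * attempt)  # linear backoff between attempts
    return False
```

Since check-outs from PI Vision or a custom application typically last only a few seconds, even two or three retries a couple of seconds apart would cover most collisions described in this thread.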
Explore the intersection of coding and blockchain technology, and how it's shaping the future of this revolutionary digital ledger system. Dive into the intricacies of blockchain development and its potential impacts on various industries. The blockchain revolution is happening before our very eyes. From cryptocurrency trading to NFT generation and purchasing, most folks are aware of these opportunities. Some are interested, and fewer still are actually investing themselves. Coding can (and should) play a critical role in the future of blockchain technology. There will also be an inevitable career boom that will shape the blockchain industry. But what will that role be? First, what is the blockchain? If you’re not yet familiar with the blockchain, here’s a quick and dirty definition: The blockchain is a database that stores encrypted data, aptly called blocks, across a peer-to-peer network. Effectively, the passage of information between these computers or networks is heavily encrypted. This means it’s safer from hackers. Blockchain is the technology that has allowed things like cryptocurrency and NFTs to become a viable form of currency, because blockchain technology can verify their legitimacy and keep a record of them in that database. Naturally, something this advanced requires a ton of technical acumen. Beyond that, as the blockchain revolution continues on, there will be more opportunities for folks to build out solutions that make navigating and investing on the blockchain more viable for the masses. Let’s consider what those opportunities might look like. What opportunities are there for coding careers in blockchain? You may or may not already know, but there is a current surplus of open roles in software engineering, web development and other roles in the tech industry.
This is true for a number of reasons: - There’s a lack of available talent - There is a surplus of roles in traditionally non-technical industries to make them more tech-forward and tech-friendly - New innovations like the blockchain present ample opportunities With cryptocurrency becoming more widely accepted as legal tender, everyone from startups to large corporations to government entities is looking for folks who are familiar with blockchain technology and the innovations needed to make it run smoothly. These roles include back-end programmers who can build out the foundation of blockchain apps and software and front-end developers who can make those apps user-friendly, along with data analysts, cybersecurity specialists, and more. Plus, these roles are already in high demand — with the added specialization of blockchain technology, there will be many, many more openings (and companies willing to pay top dollar for them!). How could knowing how to code make you more successful as an investor? So you’re interested in investing in blockchain technology: could it benefit you to know how to code? Beyond actually writing lines of code, coding as a skill teaches you to really research topics and come to a fundamental understanding. Blockchain technology is ever evolving, so knowing where and how to find information on the best investments to make, and when, will greatly serve you as an investor. In addition, knowing blockchain technology, the applications that support it, and how those applications really work will help you leverage the best possible solutions to create your own investment portfolio. Get started with blockchain coding With the opportunities opening up in blockchain careers and investments, knowing how to code is a critical skill for both.
Luckily, there are endless free courses available for beginner coders to learn the fundamentals of code they can apply not only to their investments on the blockchain, but to any potential career opportunities within the industry. Check out App Academy Open, entirely for free, to learn the coding skill that can net you a career — or, at the very least, some big money in investments.
package net.bolbat.utils.collections;

import static net.bolbat.utils.lang.Validations.checkArgument;

import java.io.Serializable;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

import net.bolbat.utils.annotation.Audience;
import net.bolbat.utils.annotation.Concurrency;
import net.bolbat.utils.annotation.Stability;
import net.bolbat.utils.lang.ToStringUtils;

/**
 * Circular buffer implementation for situations when we need to obtain the next value from the original collection in a circular way.<br>
 * This implementation is thread safe and immutable.
 *
 * @author Alexandr Bolbat
 *
 * @param <E>
 *            elements type
 */
@Audience.Public
@Stability.Evolving
@Concurrency.ThreadSafe
public class CircularBuffer<E> implements Serializable {

	/**
	 * Generated SerialVersionUID.
	 */
	private static final long serialVersionUID = 7432470971599446893L;

	/**
	 * Last used index.
	 */
	private final AtomicInteger lastIndex = new AtomicInteger();

	/**
	 * Elements list.
	 */
	private final List<E> elements;

	/**
	 * Default constructor.
	 *
	 * @param aElements
	 *            elements list
	 */
	private CircularBuffer(final List<E> aElements) {
		this.elements = aElements != null ? aElements : new ArrayList<>();
	}

	/**
	 * Create {@link CircularBuffer} from elements.
	 *
	 * @param aElements
	 *            elements
	 * @return {@link CircularBuffer}
	 */
	@SafeVarargs
	public static <E> CircularBuffer<E> of(final E... aElements) {
		checkArgument(aElements != null, "aElements argument is null");
		return of(Arrays.asList(aElements));
	}

	/**
	 * Create {@link CircularBuffer} from {@link Collection}.
	 *
	 * @param aElements
	 *            elements
	 * @return {@link CircularBuffer}
	 */
	public static <E> CircularBuffer<E> of(final Collection<E> aElements) {
		checkArgument(aElements != null, "aElements argument is null");
		return new CircularBuffer<>(new ArrayList<>(aElements));
	}

	/**
	 * Get next element.
	 *
	 * @return next element or <code>null</code> if the elements list is empty
	 */
	public E get() {
		// Math.floorMod (instead of Math.abs(...) % size) stays non-negative
		// even after the counter overflows to Integer.MIN_VALUE
		return elements.isEmpty() ? null : elements.get(Math.floorMod(lastIndex.incrementAndGet(), elements.size()));
	}

	/**
	 * Get element by index.<br>
	 * {@link IndexOutOfBoundsException} will be thrown if the elements list is empty or the index is out of the elements list bounds.
	 *
	 * @param index
	 *            element index
	 * @return element
	 */
	public E get(final int index) {
		if (elements.isEmpty() || index < 0 || index >= elements.size())
			throw new IndexOutOfBoundsException("Index: " + index + ", Size: " + elements.size());

		return elements.get(index);
	}

	/**
	 * Get elements list (unmodifiable).
	 *
	 * @return unmodifiable elements list
	 */
	public List<E> getAll() {
		return Collections.unmodifiableList(elements);
	}

	/**
	 * Get last used index.
	 *
	 * @return last used index or <code>-1</code> if this buffer is empty
	 */
	public int lastIndex() {
		// guard against division by zero on an empty buffer
		return elements.isEmpty() ? -1 : Math.floorMod(lastIndex.get(), elements.size());
	}

	/**
	 * Is this buffer empty.
	 *
	 * @return <code>true</code> if empty or <code>false</code>
	 */
	public boolean isEmpty() {
		return elements.isEmpty();
	}

	/**
	 * Get this buffer size.
	 *
	 * @return <code>int</code>
	 */
	public int size() {
		return elements.size();
	}

	/**
	 * Check if this buffer contains the given element.
	 *
	 * @param element
	 *            element
	 * @return <code>true</code> if contains or <code>false</code>
	 */
	public boolean contains(final E element) {
		return !elements.isEmpty() && elements.contains(element);
	}

	/**
	 * Create a copy of this buffer with the given element added to the end.
	 *
	 * @param element
	 *            element to add
	 * @return new buffer instance
	 */
	public CircularBuffer<E> add(final E element) {
		final List<E> result = new ArrayList<>(elements);
		result.add(element);
		return of(result);
	}

	/**
	 * Create a copy of this buffer with the given element removed (all occurrences in the original buffer).
	 *
	 * @param element
	 *            element to remove
	 * @return new buffer instance
	 */
	public CircularBuffer<E> remove(final E element) {
		if (elements.isEmpty())
			return new CircularBuffer<>(null);

		final List<E> result = new ArrayList<>();
		for (final E e : elements)
			if (!isEquals(element, e))
				result.add(e);

		return of(result);
	}

	/**
	 * Check if the first element equals the second.
	 *
	 * @param first
	 *            first
	 * @param second
	 *            second
	 * @return <code>true</code> if equals or <code>false</code>
	 */
	private boolean isEquals(final E first, final E second) {
		return first != null && first.equals(second) || first == null && second == null;
	}

	@Override
	public String toString() {
		final StringBuilder builder = new StringBuilder(this.getClass().getSimpleName());
		builder.append(" [elements=").append(ToStringUtils.toString(elements));
		builder.append("]");
		return builder.toString();
	}

}
Type III Errors I'm going to try to explain what a Type III error is (and also review what it means to have a Type I or a Type II error). If you already understand what these three errors are, stop reading, delete this message, and go find something better to do. If, however, you'd like to get a better handle on these three terms, please read on. Suppose we want to compare two over-the-counter headache medicines: Advil and Tylenol. To do so, let's imagine that we locate 100 people who say they have daily headaches. Let's also imagine that we randomly assign these folks to our two treatment conditions (which we'll call "A" for Advil and "T" for Tylenol). Fifty folks go into each treatment condition. Our instructions to each of our 100 subjects tell him/her to take only the medicine that we supply if and when he/she experiences a headache. To make our study a bit more scientific, imagine that we do it with Advil and Tylenol pills that are made to look alike. That way, our subjects will be "blind" to the type of medicine we provide. The "independent variable" in our little study is "type of headache remedy." In other words, type of headache remedy is the difference between our two comparison groups. Prior to collecting any data, this is the way our two groups differ. And since WE get to determine whether a person gets Advil or Tylenol, we could say that we are "manipulating" the independent variable, thus creating an experiment. The "dependent variable" will be "subjective rating of headache relief." This will be the data of our study. To keep things simple, let's imagine that we simply ask each subject to rate, from 0-to-5, his/her opinion of how well his/her medicine worked to relieve any headaches. Let's also imagine that data on the dependent variable are collected 30 days after the study begins. Suppose we set up a null hypothesis that says (in words) that Advil and Tylenol are equally effective in relieving headaches. 
This would translate into an "Ho" statement that mA = mT. In other words, the null hypothesis would say that the mean rating of Advil, in the Advil population that we're thinking of, is identical to the mean rating of Tylenol, in the Tylenol population that fits our study. Now, let's review what a Type I error or a Type II error would be. If the null hypothesis is true but we, based on our sample data, reject it, then that's a Type I error. In other words, a Type I error would occur if our sample data prompt us to claim that Advil is better than Tylenol (or vice versa) when the two medicines are equally good. In contrast, a Type II error would occur if the null hypothesis is false but we, based on our sample data, do NOT reject it. In other words, a Type II error would occur if our two sample means turn out to be so similar that we can't reject the null hypothesis . . . when in fact the null hypothesis is false (either because Advil is superior to Tylenol OR because Tylenol is superior to Advil). In summary, a Type I error takes place when a true null hypothesis is rejected whereas a Type II error takes place when a false null hypothesis is not rejected. Now, how might a Type III error occur in our study? Suppose a medical friend of ours somehow knows that Advil is better than Tylenol at relieving headaches. However, that friend is away on a long trip, we didn't have access to his/her expert opinion, and we therefore conduct our little study while this expert is out of town. But let's imagine that this friend of ours KNOWS FOR CERTAIN that Advil is superior to Tylenol . . . meaning that mA is larger than mT. Even though higher ratings, on the average, are associated with the "A" population than with the "T" population, it's possible (due to sampling error) that the "T" sample mean might turn out to be larger than the "A" sample mean.
Moreover, it's possible that such a difference between the two sample means could be so large that our study's null hypothesis gets rejected. Let's review this situation and then think about what has happened: Now, did we make a Type I error? The answer to this question is "NO" because the null hypothesis is false. (See #1 above.) By definition, a Type I error takes place when a true null hypothesis is rejected. But Ho is false . . . so we for sure did NOT make a Type I error. Did we make a Type II error? The answer here is "NO" because we rejected the null hypothesis. (See #3 above.) By definition, a Type II error takes place when a false null hypothesis is not rejected. But we rejected Ho . . . so we for sure did NOT make a Type II error. Since we didn't make a Type I error and since we didn't make a Type II error, did we do the right thing? In other words, did we make a correct inference about the two populations involved in our study? The answer here is "NO" because we claimed, on the basis of our sample data, that mT is larger than mA when in fact it's precisely the other way around. It's NOT the case that Tylenol works better than Advil (as indicated by our research results); in reality, Advil works better than Tylenol. In our little study, we correctly rejected a false null hypothesis. However, the "direction" of our inference is "backwards" from the real truth of the situation. (Advil is truly better than Tylenol but we claimed that Tylenol is better than Advil.) The term Type III error is used to designate this kind of inferential error. Copyright © 2012 Schuyler W. Huck
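The scenario above is easy to simulate. Using made-up numbers (assume the true Advil mean is 3.2 and the true Tylenol mean is 3.0 on the 0-to-5 scale, both with standard deviation 1.0 — values invented purely for illustration), a two-sided z-test on repeated samples shows how often we reject the null hypothesis, and, among those rejections, how often we reject it in the wrong direction — a Type III error:

```python
import math
import random
import statistics

random.seed(42)
n, trials = 50, 2000                  # 50 subjects per group, 2000 simulated studies
mu_a, mu_t, sigma = 3.2, 3.0, 1.0     # hypothetical truth: Advil really is better

type3 = rejections = 0
for _ in range(trials):
    a = [random.gauss(mu_a, sigma) for _ in range(n)]  # Advil ratings
    t = [random.gauss(mu_t, sigma) for _ in range(n)]  # Tylenol ratings
    se = math.sqrt(statistics.variance(a) / n + statistics.variance(t) / n)
    z = (statistics.mean(a) - statistics.mean(t)) / se
    if abs(z) > 1.96:                 # reject Ho at alpha = .05, two-sided
        rejections += 1
        if z < 0:                     # rejected Ho, but concluded Tylenol > Advil
            type3 += 1

print(f"rejections: {rejections}/{trials}, of which Type III: {type3}")
```

With these numbers the effect is small (about 0.2 standard deviations), so most studies fail to reject Ho at all; among the rejections, a small handful point in the wrong direction. Those wrong-direction rejections are exactly the Type III errors described above.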
OPCFW_CODE
Century Dictionary and Cyclopedia - Not coinciding in time. - adj. Not synchronous; occurring at different times. - adj. computing, of a request or a message Allowing the client to continue during processing. - adj. computing, communication Having many actions occurring at a time, in any order, without waiting for each other; the style of communication used by Ajax. GNU Webster's 1913 - adj. Not simultaneous; not concurrent in time; -- opposed to synchronous. - adj. (Paleontology) occurring in different geologic times; -- of taxa. - adj. chronologically misplaced; belonging to a different time or era. - adj. (Computers) occurring at different speeds in different computers connected by a data transmission link; -- said of methods of data transmission between computers. Opposite of synchronous. - adj. not synchronous; not occurring or existing at the same time or having the same period or phase - adj. (digital communication) pertaining to a transmission technique that does not require a common clock between the communicating devices; timing signals are derived from special characters in the data stream itself “The key in AJAX is the term asynchronous, which enables the browser to provide services and features in simple but exciting ways.” “Off to a slightly shaky start, the piece was roughly shoe-horned into the round table topic of the month but was ostensibly targeting the potential within asynchronous gameplay.” “I had thought to go to the recently deceased jdamicus and try getting some traction on the question that's pestering me, not as homework but strictly extra-curricular, about determination of jurisdiction in asynchronous electronic transactions.” “The word asynchronous tells us that the information is not sent in predefined time slots.” “Yesterday, I was in serious danger of working on my PhD thesis so in desperation I looked into this whole Flickr web2.0 tag asynchronous feed business.” “To make synchronous code asynchronous, you can simply call an asynchronous method instead of a
synchronous method and add a few keywords to the code, as shown in the examples below.” “While it's easy to make method calls asynchronous, it helps to know what actually happens when the program reaches an await statement.” “Even with the simplicity, why make file access calls asynchronous?” “Once you've selected photos to upload, you'll be able to conduct other tasks while the upload is taking place (aka asynchronous uploading).” “The replication is asynchronous -- in other words, the slaves do not have to acknowledge receipt of the data.”
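The quoted usage examples all circle the same core idea: an asynchronous call lets the caller carry on (or run other work) while the operation is in flight. A minimal illustration using Python's asyncio (chosen here purely as a neutral example; the quotes themselves refer to C# and AJAX):

```python
import asyncio

async def fetch(name, delay):
    # Stands in for a slow I/O operation, e.g. an AJAX request.
    await asyncio.sleep(delay)
    return f"{name} done"

async def main():
    # Both "requests" are in flight at once, so the total wall time is
    # roughly max(0.02, 0.01), not the sum of the two delays.
    return await asyncio.gather(fetch("a", 0.02), fetch("b", 0.01))

results = asyncio.run(main())
```

`asyncio.gather` returns results in argument order, so `results` is `["a done", "b done"]` even though "b" finishes first.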
OPCFW_CODE
Lesson 6 - The Two Most Basic Elements: Data and Expressions In the previous Lessons, we have experimented a bit with Data elements and Expression elements. These are the most basic GoldSim elements, so it is worthwhile to spend a few minutes now looking at them in a little more detail. It is perhaps most instructive to look at the two dialogs side-by-side: As can be seen, the top portions of each dialog are identical (and, in fact, identical to almost all elements). Both have a single input field. The input field for a Data element is labeled Data Definition. The input field for an Expression is labeled Equation. Data elements are intended to represent constant inputs to your model. Hence, the Data Definition field should simply contain a number (and units). Data elements also have a Data Source option that allows you to import data directly from a database. If you are interested in this, you can learn more about it in GoldSim Help. An Expression element is perhaps the most commonly used element in GoldSim. It simply provides a way for you to write a mathematical expression. In many ways, it is similar to a cell in a spreadsheet. As such, its input field generally would contain a mathematical expression. You will note that the input field for an Expression is larger than for other elements (i.e., it consists of multiple lines). This allows you to easily enter and view long expressions. It should be noted, however, that even for one-line input fields, if you enter a longer expression, GoldSim will automatically wrap the expression and extend the input field downward. Because the input field for a Data element is functionally identical to the input field for an Expression (or for that matter, any other element), you could actually enter a mathematical expression (equation) into a Data element. Likewise, you could also enter a constant into an Expression element. However, you should avoid this! 
This is because the element icons themselves provide visual cues to people viewing the model, making it easier to understand. When someone sees a Data element in the graphics pane, they know this is a constant; when they see an Expression element, they know this is a mathematical expression. If you mix these up (e.g., enter a constant into an Expression element), it defeats the entire purpose of the visual cues provided by the icons. Finally, note that each element has a Save Results section at the bottom of the dialog. This section contains two checkboxes, common to almost all elements, that specify whether outputs from the element will be saved. You can save the Final Values (the value at the end of each realization) and/or Time Histories (the value at selected timesteps). By default, when you create a new scalar element, these will be checked on. For this Course, you need not worry about these checkboxes, as the models will be small, we will be dealing with scalars, and the checkboxes will simply default to on. As your models get larger, however, you may want to investigate disabling some results.
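GoldSim elements are configured through dialogs, not code, but the Data/Expression distinction maps cleanly onto a familiar programming idea: a constant input versus a computed value. The following is only a rough analogy with made-up names, not GoldSim's API:

```python
# Hypothetical analogy only -- GoldSim models are built in dialogs, not code.

class Data:
    """Like a GoldSim Data element: a constant input (a number plus units)."""
    def __init__(self, value, units=""):
        self.value = value
        self.units = units

class Expression:
    """Like an Expression element: a formula over other elements' outputs,
    much like a formula cell in a spreadsheet."""
    def __init__(self, formula):
        self.formula = formula  # a callable, standing in for the Equation field
    def evaluate(self):
        return self.formula()

# A constant belongs in a Data element...
flow_rate = Data(10.0, "m3/day")
concentration = Data(2.5, "g/m3")

# ...while a calculation belongs in an Expression element.
mass_flux = Expression(lambda: flow_rate.value * concentration.value)
```

Just as the tutorial warns, nothing stops you from putting the constant `2.5` directly inside an Expression, but doing so hides the "this is a constant input" signal that the Data element conveys.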
OPCFW_CODE
Is there a way to find out if a method is static or not? Is there a way to find out if a method is static or not? My reason for needing to know: I call static methods outside of any instantiation context. Non-static methods can't be called then, since they don't make sense yet. I want to call them later, once instances of those classes exist. When I call call_user_function_array($className.'::'.$functionName, $args); and the method is non-static, PHP seems to automatically create an instance of className and call that function. I want that call to FAIL for non-static functions. "PHP seems to automatically create an instance of className and call that function" - it's not true. @sectus: Why would those calls not fail then? Use E_STRICT error level. @sectus: still not reporting an error. I'll try to condense my code down to an example and post it. "Calling non-static methods statically generates an E_STRICT level warning." - Static Keyword @sectus: Of course you were right. I figured out why I wasn't getting that error. Inside __callStatic I made another call to call_user_function_array, this time with an instance. Thanks! Since reflection is expensive I actually ended up calling set_error_handler with a callback that throws an ErrorException as described here to catch the warning when a static call was made to a non-static method. You really like spaghetti code, do you? To dynamically call methods you don't even know on objects you have no idea about you override the global error handler to catch exceptions...?! Sorry, but that boggles the mind. I'm not saying anything for or against the PHP error handling system, you're right that it's not exactly great. I'm saying that you seem to be going to quite some lengths for something that seems like a dubious practice to begin with, and your code is probably going to be really hard to maintain or understand because of it.
You're very much indulging in meta-programming there, which makes things harder to understand in almost any language, and you're doing it in a language that really isn't all that well suited for meta-programming to begin with. When I call call_user_function_array($className.'::'.$functionName, $args); and the method is non-static, PHP seems to automatically create an instance of className and call that function. No, it doesn't. PHP isn't that automagic. No idea what you're doing there. To call a method statically, you do exactly that:

call_user_func_array("$className::$functionName", $args);

To call a method of an object, you first need to explicitly instantiate an object, then call it like this:

$obj = new MyClass;
call_user_func_array(array($obj, $method), $args);

To programmatically figure out if a method is static or not, use ReflectionClass:

$r = new ReflectionClass($myClass);
$m = $r->getMethod($method);
var_dump($m->isStatic());

You should really know what a method is before you call it though, instead of dynamically trying to figure it out. Thanks for the warning at the end. Trust me, I know what I am doing... I know those are some famous last words.... ;-) You can check methods with reflection.

class foo {
    static public function bar() {}
    public function baz() {}
}
$reflection_class = new ReflectionClass('foo');
var_dump($reflection_class->getMethod('bar')->isStatic()); // boolean true
var_dump($reflection_class->getMethod('baz')->isStatic()); // boolean false

P.S. It's very weird that you're trying to call methods when you don't know what they really are.
STACK_EXCHANGE
This is the best thing I've ever seen in my entire life. Recent community posts Hey! I don't usually comment on your games, but Aran, I think you outdid yourself on this game. Seeing you progress in gamedev has been amazing, and this one was definitely a surprise for me. I honestly can't understand why people complain about bugs. I think it had a reasonable launch, especially for itch. Also, as a simple (in my head) QOL feature, could you add an option to remove physical money? As in, it only appears in the UI, but isn't present in the board. I've 100%'d the game a few times now, and I think it would be a great feature for those who are just grinding the end-game. Also, clicking while an eating animation is happening should skip it imo. That happens a lot :) It's good to know you have your priorities in order, and finishing games is super hard! So don't feel bad when abandoning projects, framing them into a full product is an art in itself. Good luck with everything! You are an absolutely incredible individual and I can't wait to see more from you. Type Dreams is amazing, and I hope your Patreon goes well, although it was quite hard to find! Maybe having some social media profiles would help bring some attention to your work; I wouldn't wish the use of Twitter even on my worst enemy, but a scheduling bot would do the trick without having to endure that awful experience. Either way, if you need any help with these useless and energy draining tasks or making a website (haven't seen you having one), I'd love to enquire without asking anything in return. I'm quite skilled at doing this stuff, if I say so myself. Anyway, hope you're well and all. Pretty spot-on style reminiscent of Death Grips's aesthetic despite using basic elements and simple models. Apart from that, can't say much due to bugs and it being early in development. You might want to cut off some of the edginess, though.
I don't want to be rude, just saying that as I'm sure if this got some attention that word would be very common, quite cringy as it's laid out (although I 100% respect the experience of whoever the story's based on). Spacing it out would be a good solution (all references are very close to each other time-wise), and subtlety would get you far while keeping (or even improving) the crudeness/impact. I don't know if you would want to put the work into giving it a second pass, but I'm saying all this because I really see potential in its presentation and execution; I'm quite disappointed in the unnecessary un-subtlety it has all over it. Edit: I want to clarify that I'm not sensitive on these themes at all, I just find the execution to be a bit overacted, and it reeks of creativity fueled by exaggerated sentimentalism. Sorry if this sounds cold, not trying to be at all. Really interesting and good-looking. I've got far into the original 121, but after all this time either I'm stuck or can't remember how to move forward. Unlocked all the recipes, did all the "key quests" I know of, yet I haven't been able to unlock the "crafting" machine. I love how there aren't any gameplay videos or guides, as it forces me to delve deeper without guidance, but I think I'm stuck for good. Any help? Edit: Nevermind, I understood the graphics on the cauldron tile as static instructions, silly me...
OPCFW_CODE
JD Trask is a technology leader with almost 30 years of software development experience. - The shorter the loops are, the happier the customer gets - Putting the customers at the heart of what you do - Understanding the landscape of error tracking and crash reporting tools - Taking the mission to the next level - A good tool is like an extra team member Listen to our entire conversation above, and check out my favorite parts in the episode highlights! Like this episode? Be sure to leave a ⭐️⭐️⭐️⭐️⭐️ review on the podcast player of your choice and share it with your friends. Darko (00:02): Hello, and welcome to Semaphore Uncut, a podcast for developers about building great products. Today, I’m excited to welcome JD Trask. JD, thank you so much for joining us. JD: It’s a real pleasure, my name’s JD Trask, and I am the co-founder and CEO of a company called Raygun that builds tools for software teams to build better quality products. By way of background, even though I am a co-founder and CEO today, I am a hardcore tech geek. I learned to code when I was nine, I started selling commercial software at high school on floppy disks in the ’90s. I don’t write quite as much code today as I would like; I still do a bit at home to try and keep the skills sharp, and I love playing with new stuff. But my day-to-day is leading the company, and I still love hearing what the devs are coding on. I’m passionate about software, and I’m passionate about business, almost at equal levels. The shorter the loops are, the happier the customer gets Darko (03:12): Certain elements are timeless. There are those things that you must keep track of and ensure that they are working. Can you give us a brief intro around how you help customers make sure that their software is of high quality, that they can reduce bugs, and what are some patterns that you see in that area? JD: Everybody assumes they don’t have that many bugs. 
I’ve never had somebody try out Raygun and go, “I don’t know if it’s working because I have no error reports.” There are always error reports! I always describe error reporting as kind of the black box flight recorder. Stuff blows up, and you need to know why it blew up so that you can fix it quickly. Ideally, it’s about the customer, giving them a better experience. We were running CI and CD servers back in 2004. Usually just running it on your own workstation because nobody was setting up servers for that back then. I realized CI and CD feels like a superpower, because you’re like, “Oh my goodness, I can so smoothly get to production.” But how do you really make the most out of that investment? It’s to create the feedback loop. I can get to prod fast; how quickly can prod tell me about what’s happening? We stepped in to say, “Hey, we can close the loop. You can get to production fast. Now you can iterate and cycle through production blazingly quickly, fixing bugs and all that, so that you have a higher quality product for your customer.” It’s about faster loops to prod, to ultimately make your customers happier and not be caught flat-footed on an issue that you had no idea about while customers were having a really bad time. Putting the customers at the heart of what you do I want to touch on a point you raised earlier about enabling the teams to be in control. Everything I’ve just said actually helps with that because it demonstrates ownership end-to-end of what it is you’re producing, through to who you produce it for. This sort of tooling, the DevOps movement, all of these things, can sometimes feel like a whole lot of responsibility landing on a dev team. But at the same time, the reason that you’re doing it is to try and say, “Okay, well, we can make sure that the customers are happier. We can empower those teams.” And when it comes to ownership, I always think it’s really important that the software developers appreciate who it is they’re building for. 
I make some comments that are probably fairly insulting to software developers at times, and I make those comments because I've been there, and I am there sometimes. I can get in the zone and end up far more fixated on how I'm building the code – because I love elegant code! I love thinking about how the code's going to hang together, and all of that stuff's important for maintainability and performance, etc. But ultimately, it doesn't really matter. What matters is the customer. Understanding the landscape of error tracking and crash reporting tools Darko (15:52): How do you see adoption and change in the industry related to the service that you're providing? JD: Firstly, I'd say, using an exception tracking service or tool doesn't negate the need for software tests. They go well together, and the best practice you can have is, "Hey, this exception was reported. I'm going to go and fix it." The first thing I do is write a test that proves it fails, and then I'm going to fix it until the test passes. If you just get into that pattern, it works really well. Before we built Raygun, we actually had a product called Lightspeed. It was an object-relational mapper that talked to databases, like Active Record in Ruby or Entity Framework in .NET. It's one of those sorts of things that it's not that much code, but it's a very flexible system. We ended up with close to, I think, 30,000 unit tests on this object-relational mapper. The second piece that made that so powerful was we could actually make changes with so much confidence because we weren't building on sand. You weren't worried that you fixed something way over here and the back door breaks. To be honest, I've often struggled to see software projects have enough tests that people get to the point where they can have confidence in large-scale refactorings. When we first launched Raygun as a crash reporting service only, at the start of 2013, there weren't too many folks out there doing it.
Early on, we would see that nearly every customer that adopted Raygun was coming from nothing; it wasn’t beating a competitor, it was expanding the pie. It was actually introducing somebody to the concept. Fast forward half a decade, a little bit more. We see a little bit more stuff in the competitor category, but I fundamentally still believe that we’re still at the very early days of tracking those areas. I mean, even in CI/CD, I still am blown away by people who aren’t using that today. It just seems so obvious as such a huge win. It doesn’t take long to see a return on that investment. I feel very strongly the same applies to crash reporting. But as I say, I wouldn’t think of it as a replacement for unit testing. I think of it as a way of actually understanding what your code is doing in the wild. Production is just the biggest test environment you’ve got, and you might as well get the details from that. Taking the mission to the next level Darko (23:39): What’s coming up on Raygun’s side? Is there something big you are working on? JD: We’ve got a lot of stuff coming. We’re about to overhaul our user section. One thing that we do in our products, that’s probably a little bit different to a lot of the monitoring/DevOps category of tools, is that we try to put the customer right in the product. So, it’s not spooky weird stuff like Google does. You have to choose to opt in to identify who the customers are. You can go, “Okay, well, here’s our VIP customer, how many errors do they have? What’s their average load time? What do they do navigating around?” And understand that. 
Similarly, when you go and look at things like the error dashboards, we say, “Don’t worry about the number of errors, worry about the number of affected customers for this particular error type that’s occurred, because that’s who you need to fix it for.” For example, you might have an error that’s occurred a thousand times and impacted one customer who has it in a loop, or you might have an error that’s occurred a thousand times and affected a thousand people. You need to prioritize towards helping a thousand people. So we push all of that stuff right up into Raygun, you can explore everything about your customers in there. So that’s going to be cool to come out. We just launched our APM product support for Ruby. We’re launching support for Node.js before Christmas. The crash reporting stuff. We’re actually overhauling the ingestion pipeline at the moment to make it super scalable. So Raygun today processes, I think we peak around a billion API calls an hour, some of the world’s biggest brands run their stuff through us. Let me bounce the question back to you. What’s coming up at Semaphore? A good tool is like an extra team member Darko (26:47): We are trying to develop a layer that should come in, let’s say, probably Q1 next year. It should give you insight into the test suite because we have many customers asking for it. To figure out which tests are failing, how often they’re failing, what are some brittle tests? What are the flaky tests that you have in your test suite? What is the group of tests that are holding your team back from moving faster into production? So that’s one big area. JD: Well, I can definitely say about the tests and finding the flaky ones, that would actually be very valuable at Raygun. We’ve had a few flaky tests in recent times that I know a few team members are working on. 
There’s always information there, but I always think the best way for these things to work is if, in a weird way, the team almost feels like it’s a virtual team member, like it’s helping the team go faster by being somewhat proactive with those insights and helping them understand things that maybe weren’t obvious. You know tests can be flaky, but which ones are they? Or when that starts to become flaky, come and tell me about it so that I can get on it before we’ve realized that this test has failed every Tuesday for the last three months, but nobody noticed that pattern until now. That’s the sort of thing software can be very good at, as I’m sure it’s the same as Semaphore. As you’re scaling these engineering teams and you sit there sometimes, and you’re like, “You know what? It’s going to cost this much to add one more engineer, and that person’s going to come with management over here, they’re going to need hardware, they need software licenses, all this sort of stuff.” Let’s say you had a team of 20 engineers. You only need one piece of software to add 5% more effectiveness, which replaces the need for the person. The software is probably going to be a heck of a lot easier to manage. Now, that obviously works better and better at scale, but that’s how I think of it. I’d love people to be like, “We have Raygun because it helps our teams so small, but achieve more.” That’s part of the vision. Darko: Thank you, JD. It was a pleasure talking to you. JD: It’s been a real pleasure. I appreciate the opportunity to be on here and to have a chat. Thank you very much.
OPCFW_CODE
No, this question is based on fundamentally flawed premises. There is plenty to criticize about UBI, and the literature generally does not consider UBI an efficient redistribution policy, but none of what is mentioned in the question holds for UBI. - Let's start by correcting misconceptions: The tragedy of the commons has nothing to do with what you describe. The tragedy of the commons is: a) a theory about situations where private property rights are non-existent. Unless you want to claim that property rights are missing in the market for ice cream, barber services, or the entire market generally suffers from a lack of property rights - which is an empirically absurd assertion outside a few specific markets or countries like North Korea - the tragedy of the commons simply does not apply here, since for a tragedy of the commons you first have to have some commons to begin with. b) The tragedy of the commons is not about increases in prices. In fact, under a tragedy of the commons prices will tend to be lower than they would be in the presence of property rights, or in some cases cannot exist at all without further government intervention (e.g. clean air is an example of a resource suffering from a tragedy of the commons). Furthermore, the tragedy of the commons will generally cause prices to be undesirably low (from a societal perspective), which will result in overexploitation of the common resource. - UBI Should not Cause Significant Inflation: First, it is important to understand that profit-maximizing firms can't just set prices in an arbitrary fashion. They are limited by the parameters of the market, such as the level of competition or the market structure we are talking about. Inflation is generally unaffected by redistribution. Inflation is a macroeconomic phenomenon and depends on the equilibrium price level set on the money market.
The money market equilibrium can, in its simplest form, be described by the equation of exchange (see Mankiw, Macroeconomics, p. 87) as: $$MV = PY$$ where M is the money supply, V the velocity of money, P the price level and Y output. Solving for the price level and log-linearizing we get: $$\ln P=\ln M+\ln V−\ln Y$$ Thanks to everything being in logs we can directly interpret any change in $P$ (i.e. inflation if the change is positive, or deflation if negative) in response to the right-hand-side variables as percentage changes. So if the money supply expands by $1\%$ we would expect $1\%$ inflation, ceteris paribus. However, UBI does not directly affect the right-hand-side variables. a) Provided that UBI is funded by taxes, not high-powered money, it does not affect the economy's money stock at all. So it does not affect $M$ at all. b) The velocity of money is quite constant in the long run; in the short run it depends mainly on the business cycle and interest rates. There is no reason to suspect UBI would have any effect. c) UBI is a redistribution measure, so it transfers output from one part of the economy to another. To the extent this is funded by distortionary taxation, it reduces economic efficiency and output in an economy which has no market imperfections. However, real-life economies contain externalities, missing markets and other issues that can create an efficiency case for redistribution (see the discussion of this in Atkinson (2015), Inequality). Generally, the empirical literature shows that very high levels of redistribution can lead to lower output growth, but modest redistribution seems to have little to no impact on it (see Ostry and Berg: Redistribution, Inequality, and Growth). Consequently, unless UBI were set to some extremely large value, it should have no significant effect on $Y$ and thus, again, no effect on inflation. UBI could distort relative prices because, while everyone receives the UBI, only some people pay for it.
To the extent that the spending patterns of people who receive a net transfer differ from those of people who, on net, pay into the system, relative prices could change in a way that would hurt the poor even if inflation remains overall constant. This would happen because it would lead to an increase in demand for goods that low-income people consume while simultaneously decreasing demand for items that high-income people consume. However, this effect would happen with any redistributive measure, and in addition there is no reason to believe this effect would be large enough to offset the real increase in the welfare of the poor thanks to the redistribution. Literature on UBI: For an overview of the literature on this you can see Hoynes and Rothstein (2019) or Martinelli (2017) and sources cited therein. As mentioned at the beginning, the literature is generally negative towards UBI, but that is because research shows it is less efficient and more wasteful than traditional redistribution systems, not because it would lead to inflation.
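The log-linearized equation of exchange above can be checked numerically. The sketch below uses illustrative numbers of my own choosing: it holds V and Y fixed, grows M by 1%, and confirms the price level rises by approximately 1%.

```python
import math

def price_level(M, V, Y):
    # Equation of exchange: M * V = P * Y  =>  P = M * V / Y
    return M * V / Y

M, V, Y = 1000.0, 5.0, 2000.0       # arbitrary illustrative values
p0 = price_level(M, V, Y)
p1 = price_level(M * 1.01, V, Y)    # money supply expands by 1%

# ln P1 - ln P0 = ln M1 - ln M0 when V and Y are held constant
inflation = math.log(p1) - math.log(p0)
```

As the answer argues, a tax-funded UBI leaves M, V, and (for modest transfer levels) Y essentially unchanged, so under this framework there is no channel for it to move P.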
OPCFW_CODE
window.onload = pipyinit;

function pipyinit() {
    if (!!window.EventSource) {
        // CPU temperature chart
        var tempdataset = new TimeSeries();
        var templine = new SmoothieChart({
            fps: 30, millisPerPixel: 200, tooltip: false,
            minValue: 20, maxValue: 70,
            grid: { strokeStyle: '#555555', lineWidth: 1,
                    millisPerLine: 5000, verticalSections: 4 }
        });
        templine.addTimeSeries(tempdataset, {
            strokeStyle: 'rgb(128, 250, 128)',
            fillStyle: 'rgba(128, 250, 128, 0.4)',
            lineWidth: 2
        });
        templine.streamTo(document.getElementById("pitempchart"), 1000);

        // CPU load chart: instantaneous load plus a smoothed running average
        var cpudataset = new TimeSeries();
        var cpudatasetavg = new TimeSeries();
        var cpudataavg = 0;
        var cpuchart = new SmoothieChart({
            fps: 30, millisPerPixel: 200, tooltip: false,
            minValue: 0, maxValue: 100,
            grid: { strokeStyle: '#555555', lineWidth: 1,
                    millisPerLine: 5000, verticalSections: 4 }
        });
        cpuchart.addTimeSeries(cpudataset, {
            strokeStyle: 'rgb(128, 250, 128)',
            fillStyle: 'rgba(128, 250, 128, 0.4)',
            lineWidth: 2
        });
        cpuchart.addTimeSeries(cpudatasetavg, {
            strokeStyle: 'rgb(250, 100, 128)',
            fillStyle: 'rgba(250, 100, 128, 0.4)',
            lineWidth: 2
        });
        cpuchart.streamTo(document.getElementById("picputime"), 1000);

        // Server-sent events feed supplying the data
        var esource = new EventSource("pistatus?fields=busy&fields=cputemp");
        esource.addEventListener("message", function(e) {
            var newinfo = JSON.parse(e.data);
            tempdataset.append(Date.now(), newinfo.cputemp);
            cpudataset.append(Date.now(), newinfo.busy * 100);
            // Exponential moving average, weighted 3:1 toward history
            cpudataavg = (cpudataavg * 3 + newinfo.busy * 100) / 4;
            cpudatasetavg.append(Date.now(), cpudataavg);
            var cel = document.getElementById("picpuavg");
            cel.innerHTML = cpudataavg.toFixed(2) + "%";
        }, false);
        esource.addEventListener("open", function(e) {
            var tempel = document.getElementById("appmessage");
            tempel.innerHTML = "update Connection established";
        }, false);
        esource.addEventListener("error", function(e) {
            var tempel = document.getElementById("appmessage");
            // Note: readyState lives on the EventSource object, not the event
            if (esource.readyState == EventSource.CLOSED) {
                tempel.innerHTML = "update connection lost";
            } else {
                tempel.innerHTML = "update connection had an error";
            }
        }, false);
    } else {
        var tempel = document.getElementById("note");
        tempel.innerHTML = "I'm sorry Dave, live updates not supported by this browser";
    }
}
STACK_EDU
View Full Version : Copy and Pasting? 25th Oct 2010, 04:58 AM Is there a way to copy an object you created in Milkshape and then paste it into a part where it will show up in the game? Example, I created a necklace in Milkshape, but I need to get it into a certain Milkshape file so that it can be seen, any help? 25th Oct 2010, 12:07 PM It's a tad more complicated than that. Honestly, I'm suggesting you read through the unimesh tutorials to get some EXP, otherwise I'll end up writing a complicated novel that will make little sense to everyone (myself included). 25th Oct 2010, 06:48 PM Oh, okay, thanks for your reply! 26th Oct 2010, 12:57 AM Okay, I have another question (I know I've been having a lot of questions lately, sorry about that). I was wondering, is it possible to make a model from an .obj file show up in Body Shop? Thanks for any help. 26th Oct 2010, 03:04 AM Sure, but it has to be sized, UV mapped, rigged, and exported successfully. Crank through the tutorials and you'll get what I mean. 26th Oct 2010, 03:31 AM Great! I did resize it to fit my model's head, then I exported it into SimPE. I must not have UV mapped and rigged it correctly; what exactly does it mean to rig a mesh? 26th Oct 2010, 07:57 AM That means to give it proper bone assignments. That stuff is covered in the body meshing tutorials, and in at least one accessory-specific tutorial. 26th Oct 2010, 08:44 PM Oh, thank you guys so much! I can't tell you how incredibly helpful you've been. I almost forgot, does it matter how many vertices the accessory has? 26th Oct 2010, 09:01 PM Erm, not.... exactly. I think what you're probably asking is - does it matter if the accessory is incredibly complex, with a load of polygons? In that case, yes, it definitely does matter - look at some existing accessories similar in type to what you're doing (either in the game or custom ones here on MTS) for an idea of roughly what kind of polygon count you should be aiming for.
26th Oct 2010, 09:38 PM Okay, here's what I'm trying to do: get the model at the end of this post into my game. So I was trying to cut the head off and make it an accessory, then put the body over a body mesh. But when I export the head accessory into SimPE (I haven't tried the clothing yet, so I don't know if it does the same), it says there are too many vertices. So is there a way to lower the number of vertices without changing the appearance of the body? Or is this even possible at all?
vBulletin v3.0.14, Copyright ©2000-2013, Jelsoft Enterprises Ltd.
OPCFW_CODE
Why do browsers on my primary monitor render some colors with a different color code than on my secondary monitor? Note: I'm not talking about a difference in apparent color to my eyes or monitor calibration here. I've got the following icon. Genuinely, if I open the icon in a paint program, its color is mostly FFC240... But when it's showing in a browser window (Chrome, or Electron actually... edit: and also Edge) on my primary monitor, its color code is F7BF4F. If I move the window really slowly (window contents visible while dragging), there's a moment, when the browser window is just over 50% onto the secondary monitor, that the icon visibly brightens up and becomes more orange. That is to say: if the icon is nearer the left edge of the browser, it will stay the F7BF4F color until the browser window is just over half way onto the second monitor, at which point the colors of the pixels change. It seems to be only browsers that do this: if I take a snip of the icon (like I did above) and move the snip window between monitors, it remains whatever color it was when I took the snip (which changes according to which monitor the browser was on). Other things seem variably affected too; Chrome's window title bar is 8F8FD5 when on the primary monitor, and 8E8CD8 on the secondary. Electron and Firefox browsers' title bars are the same 8E8CD8 all the time, and FF draws the icon as FFC240 all the time, but Electron changes the icon color depending on the monitor. This gives me all the variations: Chrome changes both its title bar and the icon, Electron changes the icon but not its title bar, FF changes nothing. What do I need to turn off to get colors on the primary monitor to be drawn as true all the time? Color calibration could be reflected in screenshots in Windows. What OS are you using? Do you have non-default color profiles enabled? Windows 10, no color calibrations that I'm aware of.
If it's relevant, it's a laptop with two graphics cards: the internal panel is (I think) driven by an Intel chipset and the external monitors (an HP and a Dell) are driven by an Nvidia one. The primary monitor is the Dell, on a DisplayPort connection. The secondary monitor is HDMI-to-DVI and the internal is whatever it is. The internal panel and the secondary monitor behave the same, color-wise. It's not the GPUs, or the difference in screen manufacturer; it's in the ICC profiling, which, unless it has been done properly, will just be a generic "close enough for jazz" profile supplied by the manufacturer. Good enough for your phone pictures, but nowhere near accurate enough for a true colour workflow. Why do some apps (Chrome) obey the ICC profiling and not others (FF)? Lousy standards adoption ;) To be honest, for web you should always work in sRGB, for those browsers that either always assume it or ignore actual embedded profiles. I have an answer below, but if it's going to pull downvotes from Windows users who simply don't understand how truly poor Windows' colour management consistency is, then it's gone. Figure it out for yourselves, or get an OS that can actually manage colour properly. I'm done. I've always felt that downvotes should have a mandatory comment about how the post could be improved, even if it wasn't public. I've given up on giving a damn about the points/rep and just focus on helping ten people for one of them to say thanks. Wasn't my DV by the way; I found it useful (only just read and digested it, been a busy week!) even if some other passer-by covertly judged otherwise. Yeah, sorry. Some Windows apps, even after all these years, just don't handle profiling correctly. It is a mess. Windows doesn't really have any 'authority' that apps must go through to ensure standards are followed.
Google apps on the Mac sometimes go their own way too and ignore simple toolbox standards; fortunately, almost every other app on a Mac can accurately read and work with the machine's profiling. Whether or not that profiling is actually accurate is another matter: to do it properly there's no way round having to buy a colorimeter, but at least you know that once you did, it's going to work. I'm afraid there's nothing to "turn off" to achieve this. What you need is accurate colour calibration. Without it you don't actually know which is correct, if either. What you are seeing is the independent profiles for each of your displays being handed off as you transition the image from one to the other. Some systems and setups can do this 'live' so you never get that 50% effect, but others can only balance to one or the other, so you see a sudden shift if both profiles are not accurate. I can confirm that the numbers you are reading are approximately what my fully-calibrated system can see. But since the conversion required to register that in sRGB is a 'double guess' (from the apparent colour, via the screen's native colour, then to the colour ICC profile), I don't actually know what you are actually seeing, only what your colour checker is telling you. Calibration can be done by eye, but without an accurate colorimeter and accompanying software, you will never achieve a known start-point, so you'll never know if either screen is actually correct. I'm not so much bothered about "correct" in the sense of "is this color even orange, to my eyes", but I would like it to show the same pixel color code on both monitors. Are you saying that there is a calibration step that means some (but not all: white, black and Superuser's #242729 black bar at the top are the same on both monitors) colors are shifted as a pixel code before they're drawn? The color picker (and I've tried two now) is seeing the pixels after they're shifted,
whereas if I open that icon in a paint program, I'm told its orange color is FFC240 (which is what I want my color picker tool to tell me regardless of monitor/browser). As noted, I don't really care if the calibration is so out of whack that it looks purple or green to my eyes, so long as the color code is still FFC240. Also, why do some apps change its color and not others, if it's a graphics card/monitor calibration thing? Colours are shifted twice before they're drawn: from the (sometimes assumed) sRGB on the web or document 'page' to your screen's native values, then back through the (calibrated or not) ICC profile into numbers for your screen measuring tool. Some tools can measure native or other assumed profiles, but that's probably beyond your requirement for this. You should assume a constant sRGB workflow for anything vaguely 'web-based'. If you want to deep-dive into Windows color calibration quirks, there's an interesting video about this. @gronostaj: Windows' abysmal colour-workflow management and standardisation is one of the reasons I've never used Windows for colour work ;-) I tried to get a photo of what I see on the Mac if I expand that icon across both my screens and then check by eye and also with the colour meter, but daylight interference is a bit distracting. I can show that I get identical colour on both screens, by eye and by meter, but it doesn't come out at the exact colours in the photo (so that would just cloud the issue).
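The "same code in, different code out" effect described in this thread can be sketched with a toy tone-curve model. To be clear, this is deliberately not real ICC math (real profiles use per-channel lookup tables and matrix transforms), and the gamma values below are hypothetical stand-ins for two monitor profiles:

```python
# Toy illustration (NOT real ICC math): each display applies its own tone
# curve, so a picker that samples the post-transform framebuffer can read a
# different 8-bit code per monitor for the same source pixel.

def srgb_to_linear(c):
    """Decode one sRGB channel (0..1) to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def picked_value(code, display_gamma):
    """8-bit value a screen picker might read after the display transform."""
    linear = srgb_to_linear(code / 255.0)
    return round(linear ** (1.0 / display_gamma) * 255)

# Pure white survives unchanged on both monitors; mid-tones shift:
for gamma in (2.2, 2.4):  # hypothetical per-monitor tone curves
    print(gamma, picked_value(0xFF, gamma), picked_value(0xC2, gamma))
```

Note how the 0xFF channel of FFC240 is a fixed point of any tone curve while the 0xC2 channel moves, which matches the observation that white and black look identical on both monitors while the orange does not.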
STACK_EXCHANGE
Fey Evolution Merchant, Chapter 283

She then patted Fang Duoduo, who was beside her. He felt very helpless, but he shouted, "Skr!"

Crow now planned to activate the hub that connected to the abyssal world within this Class 3 abyss dimensional rift and break the protective layer off the hidden source-type item at the hub, allowing this abyss dimensional rift to become a Class 4 abyss dimensional rift. Besides being angry, Crow felt genuine fear, because the Radiance Federation would certainly not ignore such a large change in the controlled Class 3 abyss dimensional rift near the Royal Capital.

These people from the Radiance Federation were very mean. They had taken away its Heaven and Earth Fey, the Phoenix Perching Chinese Parasol Tree. How could it help them by killing two Class 3 demons from the abyss dimensional rift within the Radiance Federation?

Although she did not spare anyone with her words, she did not want her body to undergo any changes after she had contracted a source-type lifeform and bring trouble to others.

It again shouted out the great indignation in its heart. It already knew that no matter what, it could not obtain the Phoenix Perching Chinese Parasol Tree.

Zhao Xiaochun looked at Fang Duoduo as if she were looking at a fool. "Big Brother, what does knowing Black's details have to do with inviting Lin Yuan for a meal? They're both so good-looking.
Can't I get to know them?"

Then, it would drip a drop of its Suzerain/Myth II blood from its right eye into the hub. As long as this drop of blood burned quickly, it could expand the Class 3 abyss rift's entrance!

When Shi Xu turned his head and saw Zhao Xiaochun standing there, stuffing baked spinach into her mouth, he glared at her.

When Crow thought about this, the fear deep in its heart grew even greater. Then, it no longer hesitated and made a mad and ruthless decision.

If it were not for her other identity doing eating live-streams on Star Web, she would not even have been able to afford the compensation.

Shi Xu suddenly felt his blood rushing forth and was so angry that his eyes moistened!

The Star Web hosts who received it would not know the amount of the reward. They would only get a notification about receiving a surprise box. This surprise box could only be opened a year later, on the same date.

In her opinion, wasn't it better if that money was used as compensation for the other restaurants?

Zhao Xiaochun laughed happily as she watched Shi Xu get mad at her.

If it did not have to use a human body to complete its mission, Crow would have smashed its head and turned this body into pieces.
Fang Duoduo looked at Zhao Xiaochun with dumbfounded eyes and asked, "Weren't you harping on inviting Lin Yuan and Liu Jie for a meal?"

Although she had caused so many restaurants in the Royal Capital to go bankrupt, she had paid for it with the money she earned afterward. This sum was not small, either.

Clearly, eating lots of food was a form of cultivation for the source-type lifeform she had contracted.

Crow walked on the crimson-black land of this controlled Class 3 abyss dimensional rift and absent-mindedly killed two Class 3 demons, but it felt something was amiss after doing so.

There would be a small commission for ordinary rewards on Star Web, but it did not cost any commission when the reward was placed in the surprise box.

Ever since her contracted source-type lifeform had evolved, she never had the feeling of being full again. She had become hungrier and stronger as well!
Crow was furious but could not say any harsh words, because it had punched its own face until it was like a pig's head.

Although she had caused his five chains of restaurants to go bankrupt, she had a clear conscience. She patted her stomach despondently. Suddenly, she turned her head to find Shi Xu staring at her.

Zhao Xiaochun had compensated Shi Xu in the form of a surprise box because this compensation was simply a better deal. Even though Star Web would take a small processing fee, she was not willing to waste it.
OPCFW_CODE
What is the Hosts file? Just one of many files? The topic of computer security is so complex that even the most advanced users are not aware of all the possible Malware hiding places. The Hosts file is one of the lesser known possibilities. It can be misused in particular for so-called Pharming attacks, a special form of Phishing. What this is and how it works are explained in the following text. Let us go back in time somewhat and recall the days when the Internet was still called ARPANET and consisted of only relatively few computers. Even in these early days, all computers in the network were assigned a unique number, their IP address. This can be viewed as an exact identifier for a computer in a network, similar to a telephone number. Humans, however, are much better at recognizing proper names than strings of numbers, especially when these are four groups of up to three digits, as used in IP (Internet Protocol) addresses. These days, the DNS (Domain Name System) servers provide a service translating (e.g.) www.emsisoft.com into the IP address 188.8.131.52, which is then used to access exactly this computer containing the Emsisoft website. In the days of ARPANET, the DNS system did not exist. This job was instead done by, you guessed it, the Hosts file. In itself, the Hosts file was very unspectacular, and it still is: an unformatted text file containing domain names and their IP addresses next to each other. When you want to address a particular computer, the operating system "looks" first in the Hosts file, obtains the relevant IP address, and then uses this to contact the computer/server at this address. You can understand why this is done using DNS these days; it would be a huge logistical problem to constantly update all entries on all computers on the Internet, especially given the growth rate of the net. However, a Hosts file still remains on your computer, as a sort of relic from earlier days.
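For reference, each line of a Hosts file is simply an IP address followed by one or more names, with `#` introducing comments; the entries below are illustrative examples only:

```
# Illustrative Hosts file entries (addresses and names are examples only)
127.0.0.1       localhost
198.51.100.7    www.example.com
```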
Windows 2000 or XP users will usually find the Hosts file in the directory c:\windows\system32\drivers\etc\. You may perhaps ask: how can this be damaging? The file in itself is not damaging. However, its contents can cause damage. Your computer, or more accurately, your operating system, still consults the Hosts file, in addition to DNS, to locate the addresses of particular servers. Assume that you do Online Banking with a particular financial institution. To do this, you visit a particular website that, like all websites, has a particular IP address. If someone changes the Hosts file on your local computer so that the name of this Online Banking website points at a different IP address, then your computer is redirected and does not land at the intended website. With a small amount of criminal energy, an attacker can reproduce the layout of your bank's website on their own server, and then redirect you to this server via the Hosts file. They can then enjoy receiving your personal data: login, password, and your PIN of course, all the data that you would normally enter into the true banking website without exercising any extra caution. These attacks are called "Pharming" in technical circles. Attackers use a small Malware program that infects as many computers as possible and modifies the Hosts file. In normal cases this attracts very little attention, since many anti-malware programs do not generate an alarm and do not notice the changes. The target of these attacks does not always have to be your Online Banking access: eBay accounts or Webmail addresses such as GMX/Web.de are also favorite targets. The Hosts file can also be misused to circumvent anti-malware or anti-virus updates. The attempt to contact the update server of the relevant software supplier is simply redirected to another server. The protection program then receives no new signatures or updates, and can no longer recognize and remove the pest that modified the Hosts file.
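The manual check described in this article can also be scripted. The sketch below is illustrative only (the watchlist domains and sample content are hypothetical, and this is not a feature of any particular product): it flags any Hosts entry that overrides name resolution for a domain you consider sensitive, since a legitimate Hosts file rarely needs to mention your bank or webmail provider.

```python
# Sketch of a manual hosts-file audit: flag entries that override
# resolution for watchlisted (sensitive) domains.

SENSITIVE = {"www.mybank.example", "signin.ebay.com"}  # hypothetical watchlist

def suspicious_entries(hosts_text):
    """Return (name, ip) pairs for watchlisted domains found in hosts_text."""
    hits = []
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blank lines
        if not line:
            continue
        ip, *names = line.split()
        hits += [(n, ip) for n in names if n.lower() in SENSITIVE]
    return hits

sample = """\
127.0.0.1 localhost
198.51.100.7 www.mybank.example  # pharming-style override
"""
print(suspicious_entries(sample))  # [('www.mybank.example', '198.51.100.7')]
```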
Naturally, as usual we do not wish to just highlight a potential danger but also offer you a means of protecting yourself. The simplest option is to regularly look at your Hosts file and check for changes. At the very least, you should regard it as suspicious when entries for your banking website or eBay appear in this file. Emsisoft HiJackFree offers you a convenient way of examining and editing the Hosts file. As of version 2.1, Emsisoft Anti-Malware also includes Hosts file monitoring. As soon as the contents of this file are changed, Anti-Malware raises an alarm and allows you to remove the changes if you wish. This gives Malware no chance of redirecting you to the wrong website. Have a Great (Malware-Free) Day!
OPCFW_CODE
package command import ( "fmt" "github.com/innogames/slack-bot/v2/bot" "github.com/innogames/slack-bot/v2/bot/matcher" "github.com/innogames/slack-bot/v2/bot/msg" ) // NewSendMessageCommand is able to send a message to any user/channel func NewSendMessageCommand(base bot.BaseCommand) bot.Command { return &sendMessageCommand{base} } type sendMessageCommand struct { bot.BaseCommand } func (c *sendMessageCommand) GetMatcher() matcher.Matcher { return matcher.NewRegexpMatcher(`send message( to)? (?P<fullChannel><(?P<type>[#@])(?P<receiver>\w+)(?i:|[^>]*)?>) (?P<text>.*)`, c.sendMessage) } func (c *sendMessageCommand) sendMessage(match matcher.Result, message msg.Message) { text := match.GetString("text") if message.GetUser() != "" { text = fmt.Sprintf("Text from <@%s>: %s", message.User, text) } if match.GetString("type") == "#" { // send to channel newEvent := msg.Message{} newEvent.Channel = match.GetString("receiver") c.SlackClient.SendMessage(newEvent, text) } else { c.SendToUser(match.GetString("receiver"), text) } c.SlackClient.SendMessage( message, fmt.Sprintf("I'll send `%s` to %s", match.GetString("text"), match.GetString("fullChannel")), ) } func (c *sendMessageCommand) GetHelp() []bot.Help { return []bot.Help{ { Command: "send message <message> to <who>", Description: "sends a message to given user/channel", Examples: []string{ "send message #dev-backend PANIC MODE!!!", "send message to @username please take a look in #general", }, }, } }
STACK_EDU
CommandHelper adds simple command aliases, complex macros, and the ability to script your own commands and events into Minecraft, using the MethodScript scripting language. You need to have Maven installed (http://maven.apache.org). Once installed, simply run: mvn clean package install Maven will automatically download dependencies for you. Note: for that to work, be sure to add Maven to your "PATH". If you get a message about tests failing, try running: mvn -Pprovisional-build clean package install We happily accept contributions. The best way to do this is to fork CommandHelper on GitHub, add your changes, and then submit a pull request. We'll look at it, make comments, and merge it into CommandHelper if everything works out. If you make a PR and feel your code is being nitpicked to death, don't worry! Whenever a code review is done, it tends to find lots of minor errors, even in code from a very experienced programmer. Don't get discouraged! We'll work with you to make the changes, and all contributions are appreciated. If the feature you want to add makes a significant change, however, it may be best to discuss the changes with the other contributors before you begin work on the feature. By submitting code, you agree to dual license your code under the MIT License and GPL, barring the special restriction regarding code submissions, explained in the SPECIAL_LICENSE.txt file, which is attached. For details about code formatting standards, and other basic information for contributors, please see the CONTRIBUTING.txt file. Portions of CommandHelper are copyright by various contributors. This project uses BrowserStack (https://www.browserstack.com) for testing the website. To install on Windows, you can follow the directions below (which are cross-platform) or simply download the Windows installer, found here. There are two modes of installation; both first require obtaining the MethodScript jar. You can build it yourself, or download the official builds from GitHub.
For other platforms or manual installation on Windows, grab the jar from here. Minecraft: Installation in Minecraft is simple. Simply drop the jar in the plugins folder. Standalone Programming: MethodScript is a fledgling general-purpose programming language, and can be used from the command line, much like Python, Node, or other programming languages. For all platforms, place the jar file in whatever location you like (noting that it will create a folder at the same level which contains the configuration files, so it's probably easiest to put this in your user directory), then run java -jar MethodScript.jar install-cmdline as root/Administrator. This will install the mscript program and add it to your path, which can be used to start a REPL shell for quick tasks, execute a script file, or easily run the command line tools. On Windows, this also installs a PowerShell module, which can be used with Import-Module -Name MethodScript and Invoke-MethodScript. On Windows, you must reboot your system after installation to use the mscript command in cmd.exe. You can install MethodScript using the same jar that is used in the Minecraft server, though two different environments are used, with separate folders for the CommandHelper installation and the MethodScript installation. You can symlink these folders together if you wish your configuration to be the same for both environments. Commandline Tools: Various command line tools are available, useful both for those that use the jar as a plugin and for those using it as a general-purpose language. Run java -jar MethodScript.jar help for a list of these tools, or if you have installed the commandline version, you can use mscript -- help.
OPCFW_CODE
School of Psychology, Faculty of Health and Medical Sciences

Matteo Farina is a researcher, an academic, and an expert in Applied Linguistics and social media. Before joining The University of Adelaide, Matteo worked at the University of South Australia and Flinders University. He is a member of the Research Centre for Languages and Cultures at the University of South Australia and an Adjunct Lecturer at Flinders University. Over the years, he has undertaken multiple research projects and delivered courses in Applied Linguistics, Linguistics, Languages and Communication. Matteo published a best-selling book on how people communicate on Facebook, entitled "Facebook and Conversation Analysis". Recently his research has focused on misinformation, disinformation operations and foreign interference, especially through social media platforms.

2021: Understanding Mass Influence (commissioned by the Department of Defence) https://acrobat.adobe.com/link/review?uri=urn:aaid:scds:US:dcbca90e-72e8-469d-98a6-605b8d97421b#pageNum=1

Appointments:
- 2021: Research Fellow, University of Adelaide
- 2018 - 2019: Associate Lecturer in Applied Linguistics and Italian, Flinders University
- 2018 - 2021: Casual Lecturer & Tutor, University of Adelaide
- 2009 - 2018: Casual Lecturer & Tutor, University of South Australia

Education:
- 2011 - 2022: PhD, University of South Australia, Australia
- 2009 - 2022: Master of Teaching, Ca' Foscari University of Venice, Italy
- 2001 - 2004: Bachelor of Media Languages, Catholic University of the Sacred Heart, Milan, Italy

Journal articles:
- Farina, M. (2020). Social media interactions, online radicalisation and Conversation Analysis. Journal of The Australian Institute of Professional Intelligence Officers, 28(2-3), 16-27.
- Farina, M. (2015). Facebook first post telling. Journal of Pragmatics, 90, 1-11.

Books:
- Farina, M. (2019). Facebook and Conversation Analysis. London, United Kingdom: Bloomsbury.

Book chapters:
- Farina, M. (2020). Using conversation analysis for examining social media interactions. In Frontiers in Artificial Intelligence and Applications Vol. 332 (pp. 172-177). IOS Press.

Reports for external bodies:
- Farina, M., Scarino, A., & Tudini, V. (2020). Being and becoming an Italian: the perspective of young people.
- Farina, M., Scarino, A., & Tudini, V. (2020). Community Engagement Report of the South Australian Italian Community Bodies.

Grants:
- 2022: Digi+ Fame Strategy Grant

Teaching:
- 2018-2021: LING 2038 - Cross Cultural Communication
- 2019: LING 1102 - Introduction to Language in Culture and Society
OPCFW_CODE
Definitions for QA testing: quality assurance in software, testing and finding bugs during development.
- Monitoring tool: A software tool or hardware device that runs concurrently with the component or system under test and supervises, records and/or analyses its behavior.
- Transactional analysis: The analysis of transactions between people and within people's minds; a transaction is defined as a stimulus plus a response.
- Back-to-back testing: Testing in which two or more variants of a component or system are executed with the same inputs, the outputs compared, and analyzed in cases of discrepancies.
- Software Failure Mode and Effect Analysis (SFMEA): A systematic approach to risk identification and analysis, identifying possible modes of failure and attempting to prevent their occurrence.
- Equivalence class: A portion of an input or output domain for which the behavior of a component or system is assumed to be the same, based on the specification.
- CTP (Critical Testing Processes): A content-based model for test process improvement built around twelve critical processes, including highly visible processes by which peers and management judge competence.
- TQM (Total Quality Management): An organization-wide management approach centered on quality, based on the participation of all its members and aiming at long-term success through customer satisfaction.
- Mistake: A human action that produces an incorrect result.
- Accessibility testing: Testing to determine the ease by which users with disabilities can use a component or system.

Glossary for Mac OS developers: the Apple Mac operating system explained, for Mac programmers.
- Batch faulting: A feature that allows you to reduce round trips to the database by firing multiple faults in a single fetch. See also: faulting.
- Clipped movie boundary region: A region that combines the union of all track movie boundary regions for a movie (the movie's movie boundary region) with the movie's clipping region.
- SOAP: A lightweight, platform-agnostic protocol used to exchange information in a decentralized, distributed environment.
- Shared secret authentication: An authentication method based on a secret known only to the two parties involved; verification of passwords is a commonly used shared secret.
- Versioned bundle: A type of bundle that allows multiple versions of framework code and header files to be stored inside the bundle.
- SECAM: Système Électronique Couleur Avec Mémoire. A color-encoding system in which the red and blue color-difference information is transmitted on alternate lines.
- Fetch: In Enterprise Objects applications, to retrieve data from the database server into the client application, usually into enterprise objects.
- Privileged operation: An operation that requires special rights or permissions; for example, changing a locked system preference.
- Release: To decrement the reference count of an object. When an object's reference count reaches zero, it is freed.

Computer software usage:
- DSS: Decision Support System.
- SID: Security Identifier, or Security ID. A SID uniquely identifies a user, group or computer.
- DBA: 1. Database Administrator: responsible for the day-to-day efficient management of a database system. 2. Decibels Adjusted.
- IME: Chat abbreviation for "In My Experience".
- MIF: 1. Machine Independent Format: defines a data format (normally a file format) that is independent of a particular computer architecture. 2. Markup Interchange Format. 3. MapInfo Data Interchange Format.
- DES: Data Encryption Standard. An algorithm for secret-key cryptography (where both sender and receiver must know the same secret key). DES uses a 56-bit key.
- Regexp: Short for "regular expression". The term can also be used to refer to the process of matching a regular expression.
- FBDIMM: Fully Buffered Dual In-line Memory Module.
- RAM: Random Access Memory. Fast, semiconductor-based memory in a computer; called "random access" because the computer can access any location directly.

What it means in finance: definitions from the financial world; business definitions for young IT folks looking for finance help.
- Forward discount: A currency trades at a forward discount when its forward price is lower than its spot price.
- Reduction-Option Loan (ROL): A hybrid of a fixed-rate and an adjustable-rate mortgage; an ROL allows the borrower to match the current mortgage rate, which then becomes fixed for the rest of the term.
- Gold mutual fund: A mutual fund that primarily invests in gold-mining companies' stock.
- Deficiency letter: Notification from the SEC to a prospective issuer of securities that revisions or additions need to be made to the preliminary prospectus.
- Adjustable rate: Applies mainly to convertible securities; refers to an interest rate or dividend that is adjusted periodically, usually according to a standard market rate.
- AGENCY COST VIEW how to that specifies that the various agency costs create a complex environment in which total agency costs are at a minimum with some, but less comparing to. - GLOBALIZATION help Tendency toward a worldwide investment environment, and the integration of national capital markets depends on. - OPTIONS CONTRACT MULTIPLE crossword set at $100, that when multiplied by the cash index value gives the dollar value of the stock index underlying an option. That is the which. - PENSION FUND examples A fund set up to pay the pension benefits of a company's workers after retirement difference. Glossary for developers Help for developer. Manual for web development in several programming languages. - Manual for Chain Markov definition for working with a series of events (for example, a system being in particular states) to predict the possibility of a certain event based compare. - Manual for Score Standardized explain score, normal score, z-score. “Transforms a raw score into units of standard deviation above or below the mean. This translates the scores what does it mean. - Manual for Reduction Dimension what is dimensionality reduction. “We can use a technique called principal component analysis to extract one or more dimensions that capture as how works. - Manual for Variance meaning list of numbers varies from the mean (average) value. It is frequently used in statistics to measure how large the differences are in a set why. - POSTERIOR DISTRIBUTION abbreviation See prior distribution determines. - NOSQL how to management system that uses any of several alternatives to the relational, table-oriented model used by SQL databases. While this term comparing to. - RUBY help language that first appeared in 1996. Ruby is popular in the data science community, but not as popular as Python, which has more depends on. - DISCRETE VARIABLE crossword whose potential values must be one of a specific number of values. 
If someone rates a movie with between one and five stars, with no which. - S CURVE examples graph showing, for each month since smartphones originally became available, how many people in the US bought their first one. The line difference.
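To make the standardized score and variance entries above concrete, here is a small Python sketch. The function names are my own for illustration, not from any particular library; this computes the population variance and z-scores exactly as the definitions describe.

```python
import math

def variance(xs):
    """Population variance: the average squared deviation from the mean."""
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

def z_scores(xs):
    """Standardized scores: each value expressed in units of standard
    deviation above or below the mean."""
    mean = sum(xs) / len(xs)
    sd = math.sqrt(variance(xs))
    return [(x - mean) / sd for x in xs]
```

For the list [2, 4, 4, 4, 5, 5, 7, 9] the mean is 5 and the variance is 4, so a raw score of 2 standardizes to -1.5.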
OPCFW_CODE
M: Iran unveils new indigenous stealth fighter "Qaher 313″ - starpilot http://theaviationist.com/2013/02/02/iran-new-stealth-fighter/#.UQ1x3KXpg21 R: maximilianburke There is a higher resolution picture of the cockpit here: <http://i47.tinypic.com/24zxgsw.jpg> The avionics seem to be what you would get in a Cessna or other small plane. The airspeed indicator marks the never exceed speed at 260kts. Based on the interior picture it also looks like the fuselage is made out of molded chopped strand mat fiberglass, the same type of material you'd find in the hull of a boat rather than an aircraft. R: redthrowaway This just reminds me of the photoshopped missile debacle. Who is Iran trying to fool with these things? They have to know that any competent military will see right through it. Are they trying to convince their own people that the military is all-powerful? R: hencq Most likely to impress its own people I think. It's not like Iran's government is very popular with its own people nowadays. They seem to derive their legitimacy from standing up to the West (and particularly the US). R: stcredzero So, how would one go about disrupting the world of stealth aircraft? How would one make the whole notion of "Fifth Generation Aircraft" obsolete? How about Infrared Telescopes operating above the atmosphere? Right now, all fighter aircraft need to burn liquid fuels to extract energy in the form of rapid thermal expansion of gasses, which spew out the back or which operate turbines to run fans. Large IR telescopes could be used to spot IR sources moving rapidly against the background, which could be localized and targeted by hypervelocity missiles. Stealth planes have a very reduced radar cross section, perhaps the size of a bird, but if you know exactly where the plane is, a radar guided missile could be made to home in on even a bird-sized return in the immediate vicinity. 
Such telescopes could be deployed on suborbital rocket planes that can overfly the area on a moment's notice. They could also be placed on circling high altitude drones. I'd bet that someone at DARPA has already gone through this thought process, and that a program to disrupt 5th generation fighters has already begun. (Not necessarily with this idea, though.) R: runarb Don't think that would be so great in practice. * The IR part would be vulnerable to countermeasures (flares etc. [http://en.wikipedia.org/wiki/File:CH-46_Sea_Knight;_Flares.j...](http://en.wikipedia.org/wiki/File:CH-46_Sea_Knight;_Flares.jpg) ) * The communication between the telescope and missile would be vulnerable to jamming. * The communication between the telescope and missile would have to be very fast to make the missile catch up to fast-maneuvering jets. * If your IR telescopes were space based, they would be more expensive than jets, meaning you probably won't have enough of them. * All high altitude and space based weapon systems are very vulnerable to nuclear weapons. When a nuclear weapon is detonated in low earth orbit there is almost no atmospheric pressure to compress the explosion and little gravity to lower it. Instead gravity will bend the explosion around the earth, irradiating near-earth space and high altitudes. Because of this, just a single nuclear device can take out most of the world's satellites and high altitude airplanes. (This is one of the reasons last-resort nuclear weapons don't use GPS/GLONASS/Galileo satellite navigation. Satellite navigation is assumed to be destroyed early in a nuclear war.) While a full nuclear war is not very likely, detonating a single nuclear device like this may be more acceptable if attacked. Fighter jets, on the other hand, are a proven and efficient weapon against other jets.
R: stcredzero - The IR part would be vulnerable to countermeasures Yes, but to fool the observer, the decoy would have to travel at the speed of the jets, for as long as the jets. At that point, you might as well have loaded some explosives onboard as well and just have launched a cruise missile. - The communication...would be vulnerable to jamming. Line-of-sight communication by lasers. - The communication...would have to be very fast You picked the straw-man granularity. This is to replace AWACS, not the seeker on the end of the missile. This is to get the seeker on a missile close enough to invalidate stealth. - IR telescopes would be ... more expensive than jets You'd only need one for many fighter aircraft. - ...space based weapon systems are very vulnerable to nuclear weapons. If your air superiority system _requires_ the other side to deploy nukes, then it really seems to me that you've got a winner! Fighter jets on the other hand are a proven and efficient weapon against other jets. Points off for misreading. I never said to replace fighters, period. Radar didn't replace fighters, it just enabled fighters to make intercepts more easily. Disrupting X doesn't mean replacing X. R: runarb I agree. As an addition to or replacement for AWACS this may work, but you still need to hit the jet with a missile. Detecting isn't enough; you also have to stop them. _\- Line-of-sight communication by lasers._ I would turn and pull up so the missile must chase me, then deploy my military grade smoke generator :) ( edit: here I am: [http://www.dontgivvafuq.com/misc/pics/nam/uh1/uh-1_smokescre...](http://www.dontgivvafuq.com/misc/pics/nam/uh1/uh-1_smokescreen.jpg) ) R: stcredzero I would turn and pull up so the missile must chase me Remember, I stated a hypervelocity missile. You're assuming a pilot would have enough time to react after the missile has been detected. Still, there's the chance that the missile wouldn't work, so one could still vector fighters in for an intercept.
That big smoke plume is going to help my pilots out a lot. Also, if the incoming airplanes are on a stealth strike mission, their cover has already been blown. R: rikacomet Points: 1\. It's a model only, the video says so itself. 2\. If Iran makes an airplane, there's no need to assume it will automatically use it on the West. This news just happens to come when both sides have tensions. Peace, people! 3\. If you look at it from a technological perspective, making such an airplane, despite the international embargoes, is a big feat. The odd shape would be really interesting to analyse for its aerodynamics. I hope someone can do a detailed analysis. 4\. Imagine, if you were a country, would you really post pics of a new 'advanced' jet you just made? You wouldn't, and they didn't. This is a rather diplomatic approach to make us reflect on how ready we are to jump the gun. But again, this does not prove in any way whether Iran's nuclear intention is peaceful or not. Let's not have another war in our lifetime anytime soon, please. Live and let live! R: kapitalx > "the video says so itself" The video says "all stages of design and build .. are complete". R: rikacomet There are two footages: 1\. the one shown up close in pics (a model) 2\. the real plane flying for a few seconds and only doing 1 maneuver (a flip) R: thedrbrian But it's an RC model, not a real plane. R: at-fates-hands If you ever wanted to know how far advanced the US military is, here's a prime example. It's a long way from model to prototype to full scale production, so I wish them good luck. This is hardly worrisome considering we're already working on third generation stealth technology. We also have the best pilots in the world, and technology that makes this plane look like a fourth year engineering student project. R: mikecsh >> We also have the best pilots in the world By what measure?
R: diminoten If you consider training to be a measurable value of skill, I am guessing that the above statement is due to the fact that we train our pilots more than any other military? (I don't know if that's true or not; it's just what I would gather as the source of such a statement.) R: pm90 Indeed. Plus, the tactical knowledge obtained from fighting so many wars means that not only are there many experienced pilots, but when these pilots retire, they are likely going to train other pilots in their craft. R: mpyne Yes, it's one thing to develop the industrial scale needed to field and effectively utilize aviation assets; it's another thing entirely to adapt and maintain that base as long as the U.S. has been able to. Right now the U.S. has the institutional knowledge within the military, the industrial base to build craft, repair those craft, make custom tools/mods for those craft, well-trained and very experienced pilots, well-trained and very experienced aircrew, aircraft maintenance techs, etc. The latter means that the U.S. has a ready base of available personnel to use as expert instructors, as policy makers (in and out of the military), to feed back into the industrial base, etc. So it's not at all that other countries _can't_ do it, but the investment needed to do so efficiently is just so massive nowadays. R: lifeisstillgood I have a theory on corruption - there are basically two kinds of corruption: 1\. The "good" corruption. Think US congressional pork barrel politics. It makes doing the right and progressive thing more expensive, but it does not prevent it happening. So we grow richer. 2\. The "bad" corruption. The tribal politics of mid-African nations is a well studied affair (building roads to one part of a country and not to the opposing tribal areas, etc.). This corruption stultifies actual growth. (there is a point, honest...)
The aircraft exists as a marker in a game of internal Iranian politics, and while there are many possible explanations, I fancy the "bad" corruption one - it takes a special kind of distance from reality to imagine this will fool anyone with skin in the game - and that distance from reality tends to come from arrogant, under-educated privilege. In short, Iran looks like it may be suffering from bad corruption. Which is interesting - there has so far only been one way to grow a global superpower: democracy, science and forcing the politicians to pay lip service to both. Both China and India are vying for that new role. We may see an answer to which of the three you can do without soon. R: sausman If a country grows richer, is its corruption a good thing? What good comes out of "good" corruption? If you look at corruption in isolation, it's hard to find any. R: lifeisstillgood Was New York growing and expanding as Boss Tweed took kickbacks and traded votes? Yes. Would it have grown without his control? Of course. But what if he had been Papa Doc-like? Compare Indonesia and Nigeria (search The Undercover Economist, where I must have remembered "my" theory) - take too much in bribes and you drive away investment. R: velodrome Does not mean it can't fly. It looks a lot like the X-36 <http://en.wikipedia.org/wiki/McDonnell_Douglas_X-36> However, I do agree that the craft shown in flight looks and sounds like a model r/c jet. If the craft is really this size, it cannot carry much of anything to the enemy. It is more of a recon fighter. Its stealth abilities are also questionable. R: rdl Boeing should give them a few billion dollars to get this in the air to keep F-35 funding from disappearing due to lack of threats. R: gruseom Are you familiar with Pierre Sprey's critique of the F-35, and if so, is he credible? R: rdl He's certainly got the background, but I think he pushes his own agenda, which isn't compatible with how the modern military works. Ultimately what kills the F-35, etc. is drones.
Over the next 20 years I think we'll see all air-to-air go to drones (for performance) and air-to-ground too (for expendability, size, etc.). The F-35 would be the last major offensive aircraft, if that does happen. R: guylhem I'm not a plane designer, but these are very interesting pictures. However, the article is weird - "And, above all, the aircraft is way too small" The point of a stealth fighter is in being stealthy. Being small may therefore be a good thing. And as long as it can strike and deliver missiles, how relevant is its size? Also, maybe it will be cheaper to make, something especially important for a country under embargo. Quantity is a quality in itself - and making 10 small planes might be better than making 3 average sized ones. There is another criticism about the lack of advanced computer electronics in the cockpit. Who knows, maybe the Iranians learnt a thing or two after the Stuxnet centrifuge stories, and this makes the plane less vulnerable to viruses and software attacks? R: meaty I think the issue regarding the size is more that there isn't enough space to add enough fuel, a full set of avionics such as comms, radar, IFF etc., and get a pilot without a growth hormone deficiency crammed into it (perhaps they are using that monkey when it comes back!). Size doesn't matter with stealth technology at all - it's all about which way EM radiation bounces off it. It looks like a JJ Abrams prop to be honest. Actually, not even that credible. Regarding virus and software attacks on aircraft (military at least) - it's all a load of JJ Abrams style "let's inject a virus into the core" horse shit. R: greghinch So I take it you're not a JJ Abrams fan? R: 3amOpsGuy All fur coat and nae knickers, as they say. Shame, because I reckon we're overdue some positive news about Iran instead of all the propaganda we've been getting drip-fed in the West over the past couple of years. R: rmah As far as I know, it's not even a prototype, just a model. R: pknerd MVP?
:-) R: neurotech1 Some comments in the article imply it's a mock-up, but Flight Global embedded a video [http://www.youtube.com/watch?feature=player_detailpage&v...](http://www.youtube.com/watch?feature=player_detailpage&v=ok2aMgfBdCs#t=22s) R: Encosia That's an RC model with the jet sounds dubbed over. It's almost impossible that the plane would maneuver that well to begin with, without thrust vectoring and using what looks like Cessna avionics, but it's a certainty that the fiberglass model shown would disintegrate under the force of those kinds of maneuvers. R: DrinkWater Qaher, like the English 'anger'? R: truebecomefalse That is a bizarre looking jet. R: chakalakasp On second thought, let's not go to Tehran. 'Tis a silly place.
HACKER_NEWS
In all scenarios, it's frustrating. And it's definitely not fair to you as a small business owner. But there are a few things you can do to come out all right in the end. HELOCs are adjustable-rate mortgages, however, so the rate can fluctuate and end up considerably higher than the rate you'd get on a fixed home equity loan. That makes them more risky. On the other hand, there are generally no closing costs on HELOCs. While these concerns are normal, as students today juggle numerous tasks, Writers Per Hour believes these students deserve the best help in order to get the best grade on their papers. My cousin's officemate wanted to impress their supervisor by tackling a whole new project, so he asked my cousin to shoulder a daily responsibility that he wouldn't have time for due to the ever-growing project. Not cool. These situations are stressful, but you'll get through them. Honest business people usually win in the long run. At times I really can't cope with the growing academic pressure, which is why I keep... Read More! Yeah, basically he contacted me through Freelancer but assigned me the project outside of Freelancer... but when it came time for the transaction, I asked him to transfer the money to my PayPal account, and he said that he would transfer it from his Freelancer account to my PayPal account, as he didn't have a PayPal account, and I gave my PayPal address to him... Now, I'll just come right out and say it: these people suck. And there will be times when these people can't be reasoned with. I tried lots of online C++ homework helpers before I came across your website. The tutor you assigned to do my C++ homework has rendered the most professional C++ homework service ever!
"Rupert is definitely the top programmer!" Because of the long (thirty years, usually) payoff period, you also get plenty of time to pay off the mortgage, and your monthly payments may be lower than if you got a home equity loan or line of credit. Whatever the case, it can be tough to be a team player who's open to new responsibilities without being a pushover who's overwhelmed with miscellaneous tasks and projects on your plate. Still, a cash-out makes sense in some situations, particularly if your existing mortgage rate is far better than what you can get today. It's when you feel taken advantage of, or when it's interfering with your own work, that there is a problem. In another situation, my cousin and another employee were competing head-to-head for a promotion. Worried about buying papers at sensible prices that are not only unique but relevant and free of grammar problems?
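To see why the long payoff period lowers the monthly payment, here is the standard fixed-rate amortization formula as a small Python sketch. The loan amounts and rates below are illustrative assumptions, not figures from the text above.

```python
def monthly_payment(principal, annual_rate, years):
    """Monthly payment on a fully amortizing fixed-rate loan."""
    r = annual_rate / 12   # monthly interest rate
    n = years * 12         # total number of payments
    if r == 0:
        return principal / n
    return principal * r / (1 - (1 + r) ** -n)

# Stretching the same $100,000 at 6% from 15 to 30 years lowers the
# monthly payment, though the total interest paid goes up.
p15 = monthly_payment(100_000, 0.06, 15)
p30 = monthly_payment(100_000, 0.06, 30)
```

The 30-year payment works out to roughly $600 a month versus roughly $844 on a 15-year term, which is the trade-off the paragraph above describes.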
I recently read this Gartner report, which identifies Power BI as the Business Intelligence leader above Tableau, Qlik, Domo and ThoughtSpot based on: Agile, centralized BI provisioning; Governed data discovery. I found their deeper focus on various data source connectivity options, data storage and loading, augmented data discovery, natural language query, mobile and sharing to be spot on with what I find in the field. What surprised me is that iDashboards didn't make the study and Domo is classified as a niche player. Recently I gave 2 presentations to the CEO Executive Forum at their semi-annual meeting, this time located in Detroit. The first was on Connectivity and Collaboration, the 2nd on Metrics, Reports and Dashboards. These focus on Office 365 and Power BI, including SharePoint and middleware. Connectivity and Collaboration For Connectivity and Collaboration, we focused on how as humans we need to make connections between things, and between things and people. I then described some ways this is possible with Office 365, SharePoint, OneNote, Teams and middleware. We worked on answering questions like: Can we automate our business processes in Office 365? What is the best way to manage documents? How do I reduce data entry? Metrics, Reports and Dashboards During Metrics, Reports and Dashboards I took time to focus on the different types of numbers to pay attention to, compared metrics to KPIs and talked about what a dashboard should contain. There were some demonstrations, including Power BI. We worked on these questions: What business metrics do I care about? What numbers should I pay attention to? I want to make the PowerPoint presentations available to the attendees of the conference and anyone else who finds the topics interesting. Below are the 2 PowerPoint presentations you can download: We recently ran into a situation where a client needed to change the filter in an Excel report that is using the CUBEVALUE and CUBEMEMBER functions.
The report was built using a pivot table going against a cube. We then used Convert to Formulas so we could control the report format. This approach worked great for the current data, but lacked a simple way to update the report every month by selecting a new period from a drop-down. After some struggles and some seemingly dead ends, we had enough information to solve the problem. Continue reading "Dynamic Excel CUBEVALUE and CUBEMEMBER Functions" This article was written to help me sort out SharePoint, PowerPivot, Power View and Analysis Services. Read on to discover the questions that arose as I embraced this technology set and the answers I've uncovered. So what are the options for working with Power View within SharePoint with MDX as the source cube? Let me frame this a bit: I am referring to SharePoint 2013 and SQL Server 2012 SP1 or SQL Server 2014. It appears that PowerPivot becomes the data source for Power View reports. That is, a PowerPivot workbook saved within a Document Library. So it seems feasible that building a PowerPivot report pulling from an MDX cube, then storing it in a Document Library, should work. The first test is the Document Library, PowerPivot and MDX cube test. If this works the same way as using a tabular cube, then we are one step closer to a complete solution.
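The dynamic-filter technique mentioned earlier for CUBEVALUE and CUBEMEMBER reports can be sketched as below. The connection name, dimension members and cell references are placeholders I made up for illustration, not the client's actual report; the idea is that a drop-down cell feeds the member expression by string concatenation.

```
' B1 holds the period chosen from a data-validation drop-down, e.g. 2014-03
' A3 builds a cube member from that selection:
A3: =CUBEMEMBER("CubeConnection", "[Date].[Period].&[" & $B$1 & "]")
' B3 then pulls the measure for the selected member:
B3: =CUBEVALUE("CubeConnection", "[Measures].[Sales Amount]", $A$3)
```

Changing the drop-down in B1 re-evaluates the CUBEMEMBER expression, so the whole converted report refreshes for the new period without rebuilding the pivot table.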
Here are some interesting facts that our President, Tony Frank, has pointed out. (The following is a quote from a public email he has written.) The CSU Geosciences department is widely recognized as producing highly qualified, field-oriented students with practical skills that industry needs. And there is a large demand for new geoscientists. See this paper about the geoscience - * We're in our 3rd year of record enrollment. * We enroll students from every county in Colorado as well as around the world, and we're on track to continue to enroll and graduate more Coloradans than any other campus. * We produce more STEM (science, technology, engineering and mathematics) graduates to help drive the state's economy than any other campus, and we produce more STEM high school teachers than any other university in Colorado. * 1 in 4 of our students are the first in their families to go to college, and we have the same percentage of low-income students we had two decades ago. * We beat the predicted averages for graduation rates of our students, and our graduates leave CSU with lower-than-average debt loads. * We graduate minority students within 1 percentage point of majority graduation rates – still 1 percent too large, but unheard of in terms of the small size of this gap for a comprehensive research university. Current Graduate Students I currently have three. Mohit Agarwal has recently joined us for a summer M.S. internship, and we are looking forward to M.S. student Brianna Lyons joining us in the … Prospective Graduate Students or Post-Docs I am usually keeping an eye out for new M.S. and Ph.D. students from a variety of backgrounds - Geology, Physics, Engineering, Math, etc. - who enjoy doing a little math and programming. An ideal student has written some Perl, Python, Fortran, MATLAB, or something similar, and knows math up through partial differential equations. But many students learn these skills as they go, and that is fine too.
My current research is summarized here and here; please drop me a line if you are interested. Please note that CSU enforces a minimum GPA requirement of 3.0, unless there are extenuating circumstances. If you are curious about seismology in general, you may wish to take a look at this document on what the academic seismology community sees as important (I was a … When you are not learning new stuff or doing research, you'll find CSU is a great university and Fort Collins is a great place to live. It often ranks as one of the best places to live in the U.S., and is a great place to spend a few years. If you like the outdoors, the mountains are just up the road, the weather is usually great, and the city itself has the amenities of a college town and is safer than most. See also this page about applying to graduate school. And if you think you might be interested in working with me, feel free to drop me an email. I do get a lot of spam emails, so if you could note what about the work we do here might be of interest to you, that would be of great assistance in helping me know that you didn't send the same email to 1000 other faculty. Updated July 1, 2010.
Can I sell a third party expansion? I've got an expansion for a semi-popular board game that my friends and I have been having a lot of fun with. I was wondering if I could package and sell the expansion to the game without consent from the original publisher. I know I'd need to create all of my own materials (artwork, text, etc.) but I wasn't sure if I could market it as "Third party expansion to X". Are there any pitfalls I should be wary of? I don't want to rip off the company or cause problems with them, but ideally I wouldn't need to have a lot of contact with them either (since they may not like it, even if it is perfectly legal.) There will be very different answers to your questions depending on where you plan to sell your expansion. I think Europe in general is much more restrictive about mentioning trademarks without explicit permission than for example North America. You'll probably want to consult a lawyer before going ahead with this. My guess would be that in the US, as long as you don't imply that the manufacturer authorized it (I'd go with 'The unauthorized expansion to...' in my title), you'd probably be ok, legally. But remember, even if you are within the letter of the law, they still might sue you, and you'd still have to defend. I'd check out the manufacturer's history of legal action to see what you might expect. 0) I am not a lawyer. I am definitely not YOUR lawyer. I don't even play one on TV. 1) go read the circular on Trademark at the USPTO.gov website. 2) Realize that, if you aren't in the US, you need to find the equivalent for wherever you are. Canadian IPO — UK IPO — French IP Agency Directory of Intellectual Property Offices 3) Realize that anywhere you intentionally sell it, you have to obey their trademark laws, too. 4) before doing anything commercially, hire a lawyer specializing in intellectual property law, and have him go over the risks and requirements with you. 
Fundamentally, a 3rd party supplement can't use the distinguishing marks of the core game (TSR v. Mayfair), and can't use the unique phrases associated with the game, either. The closest I've seen to doing so that wasn't challenged nor licensed was "Not a complete game; requires the use of n, by n." and "For use with n and other n games." The artwork and language may be copyrighted so at a minimum you'd have to create your own. What you should do is call them up and see if they are willing to license their product. Some will, some won't. Here's one good link. "The artwork and language may be copyrighted" - actually, they definitely will be copyrighted, absent some really weird situation like a game published before 1973 by a company that went out of business, or made entirely with public domain art. +1 I really like that link. It does a good job of describing what you can do legally, but also the problems with a larger company threatening legal action even if you are within your legal rights. Back in the 90's, I remember seeing (and purchasing) several "unauthorized" expansions for Axis & Allies at GenCon. Some even had brand new plastic molded units. I can't imagine that anyone would go through that level of effort (both in quality and in renting a booth at GenCon) on an unauthorized expansion if they had any thought that MB/Hasbro would sue them. So as long as you give credit to the company that makes the game you are expanding and (as others have said) don't use any of their artwork, you should be fine.
STACK_EXCHANGE
Seamlessly Migrate Collateral, Debt & aLEND from Aave V1 to Aave V2 (Unofficial) DISCLAIMER: This code is unaudited. The Aave genesis team is working on their own migration tool, and it's almost certainly going to be orders of magnitude better than mine. Only use my tool if you are technical enough to look at and understand the code. I tried to make it as difficult as possible to mess up, but I cannot offer any guarantees. It is also good practice to always check and revoke unused allowances using a tool like this. Use the Migrator at your own risk. See the code, deployments and instructions here. Having spent most of my time recently focused on learning Solidity & DeFi building blocks, I've decided to build my own version of a migrator for Aave V1 positions to Aave V2. This allowed me to learn a whole lot about the architecture of Aave V2! I highly recommend anyone interested in building atop the protocol to really dig deep into their Github. I had previously noticed that Instadapp had released their own migrator, but if I recall correctly, it migrates directly into an Instadapp smart wallet. This was awesome to see pop up so fast! However, I don't have an Instadapp smart wallet and I didn't really want to create one. So, I built a tool to migrate without a smart wallet on my own. How Does It Work? In a nutshell, the contract "replaces" your debts with debt-bearing flash loans, redeems your V1 aTokens and deposits them into V2 on your behalf. This process is only made possible by a few innovative Aave V2 features: - Native credit delegation - Debt-bearing flash loans - Deposits on behalf of another address Let's talk about each of these and why I believe they are vastly underappreciated. Native Credit Delegation In Aave V2, debt is tokenized. In other words, upon taking a loan, the debt-bearing address is minted "debt tokens." These can represent either stable or variable debt, and largely present the same end user experience as Aave V1 debt.
However, one of the more interesting features brought along with the tokenization of debt, aside from great gas optimizations, is the ability to natively delegate individual credit allowances. Each debt token has a function that allows any address to easily delegate credit to an address of their choice, so that address may borrow on behalf of the delegator. This results in the delegatee receiving the actual currency, and the delegator receiving the debt tokens. There's not a single doubt in my mind that this feature will be a strong building block for new DeFi products, such as perhaps a DAO-to-DAO lending system (who knows?). Furthermore, this feature allows for the seamless migration of V1 debt to V2 debt because credit on debt tokens can be delegated regardless of the current collateralization or state of the account. This ties in nicely to the next point… Debt-Bearing Flash Loans Flash loans are still extremely new tech, and yet here we have yet another fantastic innovation! In addition to the more widely-recognized use cases of the new batch flash loan feature, Aave V2 introduces a new type of protocol-specific flash loan: debt-bearing flash loans. These are essentially regular flash loans, except instead of having to pay back the debt, the account accumulates that debt and premium in the form of newly-minted debt tokens. Use cases for this beyond debt migration probably haven't been thought of just yet, but keep in mind that you're able to flash loan the funds to one contract and incur the debt on another address. Deposits on Behalf of Another Address In Aave V2, the standard deposit function allows users to send their newly minted aTokens to an address of their choice. This shortens the process of having to deposit the tokens, then transfer the aTokens to another address. Furthermore, this also allows for (work in progress) a seamless implementation of an optional withdrawal of aTokens using a simple proxy for Aave V2 and "batch depositing" (i.e.
Payroll) set amounts to different addresses. (Also WIP!)

How It All Comes Together

This is where I'll explain how all of these features come together in the Migrator contract. As shown in the instructions on Github, a few basic steps need to be taken before migration can take place; this section assumes those steps have been completed. The migration process is fairly simple and can be summed up in 5 steps:
- Obtain the appropriate balances of aTokens and debt reserves to migrate.
- If there is debt, repay it using a debt-bearing flash loan on behalf of the caller.
- Transfer out the caller's aTokens.
- Redeem the transferred-out aTokens.
- Deposit the token reserves into V2 aTokens on behalf of the caller.

That's it! This is the entire process undergone by the Migrator contract. Not too complex, right? Now, with that out of the way, I want to go over a few lessons I've learned that might help other aspiring DeFi developers use Aave V2 flash loans. To keep things simple, here is a list of bullet points that might prove useful when developing with Aave V2 flash loans:
- Empty flash loans can be called. This allows any contract to call "executeOperation" with any encoded data on your contract through the V2 lendingPool.
- Debt-bearing flash loans incur the debt at the end of the flash loan, allowing aTokens to be moved around as needed without worrying about collateralization mid-flash-loan.
- As stated before, debt-bearing flash loans incur the premiums as additional debt. You don't need to transfer in the premiums from the caller in this case.
- Slightly unrelated, but when testing on a forked network, ensure you set your timeouts correctly!
- If you run into numbered errors, check out the Errors library on the Aave V2 Github here.

That about wraps up the Migrator contract. I hope this article serves you well in your flash loan adventures! Additionally, if you're more experienced and have any feedback, I'm always looking to learn.
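The five steps above can be sketched as a flash-loan receiver. This is a simplified, single-asset illustration, not the author's actual Migrator code: the interface signatures follow the Aave V2/V1 docs, but all balance math, approvals and error handling are omitted, and the body is left as comments.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.6.12;

// Hedged sketch of the 5-step migration flow, NOT the actual Migrator.
interface IATokenV1 { function redeem(uint256 amount) external; }
interface ILendingPoolV1 {
    function repay(address reserve, uint256 amount, address payable onBehalfOf) external payable;
}
interface ILendingPoolV2 {
    function deposit(address asset, uint256 amount, address onBehalfOf, uint16 referralCode) external;
}

contract MigratorSketch {
    // Called back by the V2 pool during a debt-bearing flash loan
    // (mode 1 = stable, 2 = variable). Because the loan is debt-bearing,
    // nothing is repaid here: the caller, who delegated credit to this
    // contract beforehand, simply keeps the newly minted V2 debt tokens.
    function executeOperation(
        address[] calldata assets,
        uint256[] calldata amounts,
        uint256[] calldata, /* premiums: minted as extra debt, not repaid */
        address initiator,
        bytes calldata
    ) external returns (bool) {
        // Step 2: repay the caller's V1 debt with the flashed funds.
        // Steps 3-4: transferFrom() the caller's aTokens, then redeem() them.
        // Step 5: deposit the underlying into V2 on behalf of `initiator`.
        return true;
    }
}
```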
You can check out the code, instructions and deployed contracts here. Remember, migrate at your own risk! - Aave V2 Github - Aave V2 Errors Contract - Contract on Etherscan (Ensure you know what you’re doing!) - Contract repo on Github - Kovan contract on Etherscan (note: There are some asset mismatches between V1 and V2 assets, the WBTC in V1 for instance is different from the WBTC in V2) - Demo mainnet transaction (with unreasonably small amounts) ETH deposit & DAI borrow
Searching for an answer to the question: What is epoch time? In this article, we've collected the most accurate and comprehensive information to fully answer that question.

One epoch lasts roughly five days. This means that if an epoch starts in the middle of Sunday, it ends roughly in the middle of Friday. The next epoch would start in the middle of Friday and end in the middle of Wednesday. At the start of each epoch, a snapshot is produced.

In a computing context, an epoch is the date and time relative to which a computer's clock and timestamp values are determined. The epoch typically corresponds to 0 hours, 0 minutes, and 0 seconds (00:00:00) Coordinated Universal Time (UTC) on a specific date, which differs from system to system.

The Pleistocene Epoch is usually defined as the time period that began about 2.6 million years ago and lasted until about 11,700 years ago. The most recent Ice Age happened then, as glaciers covered huge areas of the planet Earth.

How do I use epoch time in Python? You can convert time from epoch to local time using the Python ctime() function. The ctime() function accepts one argument: the number of seconds since the epoch began. It returns a string with the local time, based on the device's time zone.

What timezone is epoch time? UTC. Strictly speaking, epoch time does not have a timezone; it is based on a specific point in time, which happens to line up with an "even" UTC time (the exact start of a year and a decade, etc.).

How many years is an aeon? A billion years. In astronomy and cosmology, an aeon is defined as a billion years (10^9 years, abbreviated AE). Roger Penrose uses the term aeon to describe the period between successive and cyclic Big Bangs within the context of conformal cyclic cosmology.

What is the difference between an era and an epoch?
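The ctime() conversion described above can be sketched in a few lines (the printed local-time strings depend on the machine's time zone, so no fixed output is shown):

```python
import time

# time.ctime() converts seconds-since-the-epoch to a local-time string.
print(time.ctime(0))            # the Unix epoch, rendered in the local time zone
print(time.ctime(time.time()))  # the current moment

# For a timezone-independent view, gmtime() gives the UTC breakdown.
assert time.gmtime(0).tm_year == 1970
```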
An epoch is longer than an era and can cover more than one lifetime. It is marked by some significant development or series of developments: the feudal epoch, the epoch of exploration. An eon is a very long time indeed. A geological era is subdivided into periods, epochs, and stages.

Are epoch and era the same thing? era = A unit of time shorter than an eon but longer than a period. period = A unit of time shorter than an era but longer than an epoch. epoch = A unit of time shorter than a period but longer than an age.

What does epoch time look like? The Unix epoch is the time 00:00:00 UTC on 1 January 1970. There is an issue with this definition, in that UTC did not exist in its current form until 1972; this problem is discussed below. For brevity, the rest of this article uses ISO 8601 date and time format, in which the Unix epoch is 1970-01-01T00:00:00Z.

How long is an epoch in years? Earth's geologic epochs (time periods defined by evidence in rock layers) typically last more than three million years.

Is an eon a billion years? Less formally, eon often refers to a span of one billion years.

What is an example of epoch time? In computing, an epoch is a date and time from which a computer measures system time. For example, Unix and POSIX measure time as the number of seconds that have passed since Thursday 1 January 1970 00:00:00 UT, a time referred to as the Unix epoch.

Which is longer, eon or epoch? eon = The largest unit of time. era = A unit of time shorter than an eon but longer than a period. epoch = A unit of time shorter than a period but longer than an age. Archean = "Ancient" eon, from 4,500 Ma to 2,500 Ma.

What is epoch time in Python? Python has a module, "time", which allows us to handle various operations regarding time, its conversions and representations, which find use in various applications.
In Python, time is counted from 12:00 am, 1 January 1970; that moment is called the "epoch".

How many years are in an epoch? Earth's geologic epochs (time periods defined by evidence in rock layers) typically last more than three million years. We are barely 11,500 years into the current epoch, the Holocene. But a new paper argues that we have already entered a new one: the Anthropocene, or "new man," epoch.

How old would I be if I was born on 1 January 1970? You would be 51 years, 10 months and 27 days old as of 28 November 2021. Age in hours: 455,016 hours. Age in minutes: 27,300,960 minutes. Age in seconds: 1,638,057,600 seconds. Next birthday in: 1 month, 3 days.
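The age table above can be checked with a few lines of Python; its hour/minute/second figures all correspond to a reference date of 28 November 2021:

```python
from datetime import date

born = date(1970, 1, 1)
as_of = date(2021, 11, 28)   # the date the table's figures correspond to

days = (as_of - born).days
print(days, "days")             # 18959
print(days * 24, "hours")       # 455016, matching the table
print(days * 1440, "minutes")   # 27300960
print(days * 86400, "seconds")  # 1638057600
```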
Managing your workspace and team(s)

When you first set up an account in Shorthand, you are assigned a demonstration workspace. It's a testing space you can use to familiarise yourself with the basics of the Shorthand editor and invite colleagues to test it out with you. In order to publish stories, you will need to set up a subscription plan to upgrade your workspace. Some Shorthand plans also unlock the ability to assemble workspace members into teams, and offer unique story features. You can view all of our plans here.

The owner of the workspace can add colleagues to Shorthand by clicking "Manage members" in the dropdown on the top right-hand side of the dashboard. On the resulting workspace settings page, a short form allows you to invite one or more people via email and assign them a team. This will send each member an invitation asking them to create an account. Once they have accepted the invitation and created their account, the owners and team leader/s will receive a notification letting them know that the new users have joined. The team leader/s and other team members can also choose to add members to other teams (if available in your subscription) and will be able to collaborate with them on stories. Find out more about collaboration here.

Any workspace member also has the ability to add other members or invite new users to their team. This can be done from any view of the dashboard. In the top right of the page is a row of icons representing the current team members, along with a green + icon for adding a new member. Clicking the green icon opens a small window from which existing workspace members may be added to this team, or email addresses may be entered to send invitations to new members.

If you want to delete a user from your workspace because they no longer need access (or are leaving the company or group), please let us know.
We can also help re-assign their stories to the correct person if they did not do this before they left.

Each team is maintained by a team leader (or leaders), set by the workspace owner. The team leader/s can revoke team access of other members, assign publishing permission to individual members, and set publishing and theme options for the team. Workspace owners and administrators can manage teams within a workspace by clicking the Team settings button under the name of any team on their dashboard. From the Team Settings page, for users with appropriate permissions, options include:
- Changing the team name
- Adding a logo for the team
- Setting themes available for stories created within the team
- Adding and removing members from the team
- Setting team-member roles
- Setting publishing destinations for the team (where stories get published)
- Managing publishing permissions for individual members (who can and can't publish to which destinations)

Depending on your account settings, your team leader/s may have an additional option on your workspace: leader view. When the leader view option is on, they will be able to see every story created in the current team, regardless of whether they are a collaborator or not. Stories visible only through leader view are dimmed and show a small orange shield icon. This means your team leader/s can access any story in your team as needed. So if you need to publish a story but the colleague working on it is away, your team leader/s can easily add you or another colleague as an editor and meet your deadlines.

Some Shorthand plans allow for multiple teams, and they are a great way to split groups of users, types of content, themes and publishing locations, all of which can be set to differ by team. If your subscription allows for multiple teams, you can easily add more with a button on the left side of the dashboard under your existing team(s).
If you have access to multiple teams, it is important to remember to start your new story in the right team, as stories can't be moved between teams. From the dashboard, if you create a new story from within a single-team view (or if you only have access to one team), then your story will be in the right place. If you start from the all-teams view of the dashboard, you'll be presented with a screen from which to choose the story's team.
Novel: Beauty and the Beasts, Chapter 1325 – Birth Control Method

"How will that do?" Bai Qingqing was concerned about him.

Under Curtis's passionate advances, Bai Qingqing's body went limp. Only when an icy hand felt its way to her lower body did she snap out of her trance. With a flushed face, Bai Qingqing pressed down that "mountain peak" and mumbled, "Next week. Next week, after I make the arrangements, let's…"

There was another faint pinkness on Curtis's handsome face. As he flicked out his tongue, an intense desire was revealed.

They were so poor that they couldn't even afford mineral water now. She could only keep an eye on the news for him and cozy up to some of her friends to borrow their cellphones.

Curtis smiled, lust evident in his raspy voice. "Okay."

Bai Qingqing shyly turned around. With her back facing him, she bowed her head even lower. "No, over here we have methods of birth control. Err, I will keep some money to buy… that."

At the thought of the heavily seasoned food, Curtis felt a dryness in his mouth. He drank another big mug of tap water. Bai Qingqing was on the brink of tears.

Feeling a wet sensation beneath her palm, Bai Qingqing's face turned even redder. In a tiny voice that resembled the whine of a mosquito, she said, "I think you didn't read my words correctly. I meant the kind where we can go all the way."

But he could detect the odour of other beasts on this bed. Not wanting Snow to be stained by this scent, he resisted his fiery desire and got off his mate's body. Though he couldn't go all the way, he still greatly anticipated this kind of intimate contact with Snow.

"I only want you to accompany me." Curtis pressed his forehead against his mate's. Sensing her warmth, he narrowed his eyes in comfort and pressed his lips against Bai Qingqing's.

Before she finished tidying up, Bai Qingqing was pinned onto the empty bed plank. She found herself looking into a pair of blood-red, fervent eyes. The moment Bai Qingqing opened her mouth, she felt an icy tongue twine around hers. She raised her eyes and gazed at his flushed face, a blush creeping onto her cheeks.

On the way home, Bai Qingqing took Curtis to the grocery store and spent tens of yuan on daily essentials. All that was left of her 1,000 yuan sacrifice money was 400 yuan. Bai Qingqing kept 130 yuan on her, and the remaining 350 yuan was stored in the wardrobe in her bedroom.

"You want to carry on having a baby?"

"It's already been a day. Why hasn't Muir found me yet?" Bai Qingqing began worrying. Being a reptile, Curtis could stealthily climb his way to her through the underground sewers, whereas Muir, who flew in the skies, would surely be exposed if he were to fly at a low altitude!

"I'd better show you around first. Let's go."
Provide option for setting an environment variable across an entire session

Is your feature request related to a problem? Please describe.
I like to set a $PROJECT environment variable when in tmux sessions. The value depends on the session. For example, when I'm in my ensemble session, the value is /home/openjck/dev/ensemble. When I'm in my glam session, the value is /home/openjck/dev/glam. That way, I can move around the file system as needed and simply type cd $PROJECT when I want to return to the root of the project. I can imagine other use cases for session-wide environment variables like this. Before I started using tmuxinator, I achieved this with some manually written tmux commands. It worked some of the time, but it was pretty unreliable. Since then, tmux added an -e flag to new-session which seems to solve this exact problem. If I understand the documentation correctly, it allows an environment variable to be set across an entire session, no matter how many windows or panes are opened or closed.

Describe the solution you'd like
I'd like it if the tmuxinator session config file allowed the user to specify an environment variable that would be active across the entire session. That environment variable should be readable from any window or pane, including new windows and panes.

Describe alternatives you've considered
A hook like pre_window could be used, but it wouldn't affect new windows.

Additional context
tmux recently added an optional -e flag to new-session which looks like it's meant for exactly this kind of thing. This may be related to #494, but I'm having a hard time understanding exactly what that issue is about.

This is a good idea and I, for one, would be open to it. In the meantime, I think you could mimic this behavior with some clever use of tmux hooks in a tmuxinator hook (I wish we'd used a better name, as "hook" is overloaded and could be cause for confusion), like on_project_start.
I haven't had time to test this, but something like:

on_project_start: tmux set-hook -a "window-add" "run-shell \"tmux set-environment foo bar\""

Is the environment variable foo used for something different in this context? If I run tmux set-environment foo bar, it shows up when I run tmux show-environment, but not when I run echo $foo. I've never quite understood what the show-environment environment is all about, but perhaps that's what the new flag to new-session appends to. I was hoping for something that could be used à la echo $foo or cd $foo.

As mentioned, that temporary solution wasn't tested. It looks like you're right in that set-environment doesn't update existing windows/panes. There are kludgy solutions out there which will query show-environment and use its contents to set a variable on demand. I'm not sure I'd go to the trouble, though. Another, even simpler temporary solution would be to set the project variable when starting tmuxinator:

project=/home/openjck/dev/ensemble tmuxinator start ensemble

Ah. After experimenting for a while, I think part of the problem is that the hook would be named after-new-window. I appreciate the suggestion for a workaround; I hadn't used hooks before! :smiley:

I am using the following:

pre_window: export ABC=123 && tmux setenv ABC $ABC

The export ABC=123 part sets the environment variable for the first window, and tmux setenv ABC $ABC updates tmux's state so that later windows will have the same setting.

The pre_window trick does the job if one somehow has access to the value within the tmuxinator YAML configuration. What it doesn't seem to cover is the case where the user has a variable set in their environment and wants tmuxinator to propagate it to all the windows.
I think it would be nice to be able to achieve that; what do people think? @damdo I need exactly that, because I have a dynamic configuration that needs to be adjusted depending on some environment variables. Any attempt to access a variable in pre_window has failed so far, but I desperately need this ...
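For reference, a sketch of what a project file with a session-wide variable could look like. The top-level `environment` key shown here is an assumption about how the feature might be spelled, not a documented tmuxinator option at the time of this thread:

```yaml
# Hypothetical project config; the `environment` key name is illustrative only.
name: ensemble
root: /home/openjck/dev/ensemble

# Requested feature: variables set once, visible in every window and pane,
# including ones opened later (i.e. passed through `tmux new-session -e`).
environment:
  PROJECT: /home/openjck/dev/ensemble

windows:
  - editor: vim
  - shell: echo $PROJECT   # should print the project root in any window
```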
package classes;

/*
 * Every string you create is actually an object of type String.
 * Objects of type String are immutable; once created, their contents cannot be altered.
 *
 * Java defines the StringBuffer and StringBuilder peer classes, which allow strings to be altered.
 */
public class StringDemo {
	public static void main(String[] args) {
		String strOb1 = "First String";
		String strOb2 = "Second String";
		String strOb3 = strOb1 + " and " + strOb2;
		String strOb4 = strOb1;

		// array of Strings
		String str[] = { "one", "two", "three" };

		System.out.println(strOb1);
		System.out.println(strOb2);
		System.out.println(strOb3);

		// use length(), equals() and charAt()

		// use length() to check the length
		System.out.println("Length of strOb1: " + strOb1.length());

		// obtain a character with charAt()
		System.out.println("Char at index 3 in strOb1: " + strOb1.charAt(3));

		// use equals() to compare strings
		if (strOb1.equals(strOb2))
			System.out.println("strOb1 == strOb2");
		else
			System.out.println("strOb1 != strOb2");

		if (strOb1.equals(strOb4))
			System.out.println("strOb1 == strOb4");
		else
			System.out.println("strOb1 != strOb4");

		for (int i = 0; i < str.length; i++)
			System.out.println("str[" + i + "]: " + str[i]);
	}
}
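The header comment mentions StringBuffer and StringBuilder, but the demo never exercises them. A minimal companion class (a sketch, not part of the original file; kept non-public so it could live alongside StringDemo) could look like:

```java
// Unlike String, a StringBuilder's contents can be altered in place.
class StringBuilderDemo {
	public static void main(String[] args) {
		StringBuilder sb = new StringBuilder("First");
		sb.append(" String");      // mutates the same object
		System.out.println(sb);    // First String
		sb.insert(0, ">> ");
		System.out.println(sb);    // >> First String
		sb.reverse();
		System.out.println(sb);    // gnirtS tsriF >>
	}
}
```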
Ordinary vs Non-ordinary for GL(2)-type Abelian Surfaces over Q

Let $A_f$ be an abelian surface over $\mathbf{Q}$ of $\mathbf{GL}_2$-type arising from a weight $2$ cuspidal eigenform $f\in S_2(\Gamma_0(N))$. What is known (or expected to be true) about the size of the set of primes $p$ such that $A_f$ mod $p$ is ordinary (resp. non-ordinary)? Do you know of a reference where this is discussed?

The paper by Ogus in "Hodge cycles, motives, and Shimura varieties", Springer Lecture Notes in Mathematics 900 (1982), has results on this and states what is expected. I don't know if that is still the state of the art.

Of course, you know Elkies' result on infinitely many supersingular primes for elliptic curves over the rationals. In 2008 Baba and Granath generalized his techniques and proved that there are infinitely many supersingular (or superspecial) primes for certain abelian surfaces with QM by the quaternion algebra of discriminant $6$ (plus some condition); see their paper in Bull. London Math. Soc. (40). They also quote some references at the end for other special cases.

@Felipe and Filippo: thanks for your references! @Felipe: Can you be more specific about the location in Ogus' paper? Skimming through it, I only found results about how often an abelian variety is ordinary in a potential sense at degree one primes (statement 2.7.1 of the article).

@Rob: I looked it up (I was going by memory, which is not as good as it used to be) and that's about it (2.7, 2.8 and 2.9). I am not sure why he needs "potential", as ordinariness should be invariant under ground field extension. His result certainly implies infinitely many places of ordinary reduction.

@Felipe: It's the potential-ness combined with the requirement that the primes under consideration have degree 1. For instance (unless I'm confusing myself), if E is an elliptic curve with CM by an imag. quad.
field K, then after base changing to K you can say that E is ordinary at all unramified primes of degree 1 (despite the fact that we know that E is supersingular at half of the unramified primes of Q). While he's still telling you that E is ordinary at infinitely many places, it's the quantity of places of supersingular reduction that I would find more interesting. (cont'd) ... Given an abelian variety as in this question, do you expect there to be any supersingular primes at all? If so, how big is the smallest one? And do you expect infinitely many? Section 7 of Pink's Crelle paper "l-adic algebraic monodromy groups, cocharacters, and the Mumford-Tate conjecture" states a conjecture which predicts that, for any abelian variety over a number field, the set of primes where the variety has ordinary reduction has density 1 "in the potential sense". As Felipe observed, this is a theorem in dimensions 1 and 2. The extension of the base is also conjectured to be the smallest field over which the monodromy groups are connected. Do we know that there exists at least one prime p for which $A_f$ mod p is ordinary?
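To summarize the expectation cited above: Section 7 of Pink's Crelle paper predicts that for an abelian variety $A$ over a number field $K$ there is a finite extension $L/K$ over which the ordinary primes have density one,

```latex
% Pink (Crelle, cited above), Section 7: conjecturally, for a suitable
% finite extension L/K,
\delta\Bigl(\bigl\{\,\mathfrak{p}\subset\mathcal{O}_L \;:\; A_L \bmod \mathfrak{p}\ \text{is ordinary}\,\bigr\}\Bigr) = 1,
% known when \dim A \le 2, so in particular for the surfaces A_f above.
```

with the extension conjecturally the smallest field over which the monodromy groups become connected, as noted in the answer.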
/* Part of Documon. Copyright (c) Michael Gieson. www.documon.net */

this.documon = this.documon || {};

this.documon.TabManager = (function(){

	var callback;
	var target;
	var tablist = [];
	var tabTransition;

	function make(id, label){
		var icon = document.createElement("i");
		icon.id = "tab-icon-" + id;
		icon.className = "fa fa-times tab-close";
		icon.dataset.id = id;
		icon.addEventListener("mousedown", clickCloseDown, false);
		icon.addEventListener("mouseup", clickCloseUp, false);

		var span = document.createElement("span");
		span.className = "tab-label";
		span.style.pointerEvents = "none";
		var text = document.createTextNode(documon.Docutils.truncate(label, 30));
		span.appendChild(text);

		var elem = document.createElement("li");
		elem.dataset.id = id;
		elem.appendChild(span);
		elem.appendChild(icon);
		elem.addEventListener("mousedown", tabMouseDown, false);
		target.appendChild(elem);

		// Since the tab width can change, we always need the computed style.
		var comp = window.getComputedStyle(elem);

		// Just store the stuff we need.
		var spec = getTabSpecs(comp, elem);

		// Margins and padding essentially change the width.
		if(tabSpecOffset === null){
			tabTransition = comp.transition || "left 100ms ease-out";
			tabSpecOffset = parseFloat(comp.marginLeft)
				+ parseFloat(comp.marginRight)
				+ parseFloat(comp.paddingLeft)
				+ parseFloat(comp.paddingRight);
		}

		var leftOffset = 0;
		var len = tablist.length;
		if(len){
			var lastItem = tablist[len - 1];
			leftOffset = lastItem.x + lastItem.width;
		}

		// Update this tab's specs.
		spec.elem = elem;
		spec.id = id;
		spec.width += tabSpecOffset;
		spec.x = leftOffset;
		spec.draggable = new documon.Draggable({
			target: elem,      // (DisplayObject) The thing that actually moves.
			callback: drag,    // (function) Called as (obj, pos, kind, didmove, arg) where kind = "start" | "end" | "move"; didmove is only issued when kind == "end".
			constrain: "x",    // (string) Constrain movement along the "x" or "y" axis. Both constrain and constrainRect can be used together or independently.
			//constrainRect: obj, // (object) Constrain movement within a rectangle. The rectangle can be any object (including a DisplayObject) that contains {x, y, width, height}.
			threshold: 5,      // (optional, default = 5) The pixel threshold for issuing the "didMove" flag on "end".
			arg: id
		});

		tablist.push(spec);

		// Force to absolute positioning and set x.
		elem.style.position = "absolute";
		elem.style.left = spec.x + "px";
		return elem;
	}

	function destroy(id){
		var spec;
		var nextPageIndex;
		for(var i = tablist.length; i--;){
			spec = tablist[i];
			if(spec.id == id){
				nextPageIndex = i;
				spec = tablist.splice(i, 1)[0];
				break;
			}
		}

		// Select the page to the right... I think this might be a little wonky.
		var nextPageId;
		if(tablist.length){
			while(nextPageIndex && nextPageIndex >= tablist.length){
				nextPageIndex--;
			}
			nextPageId = tablist[nextPageIndex];
			if(nextPageId){
				nextPageId = nextPageId.id;
			} else {
				nextPageId = null;
			}
			// ...will continue at the end of the function.
		}

		spec.draggable.destroy();

		var icon = document.getElementById("tab-icon-" + spec.id);
		icon.removeEventListener("mousedown", clickCloseDown);
		icon.removeEventListener("mouseup", clickCloseUp);

		var elem = spec.elem;
		elem.removeEventListener("mousedown", tabMouseDown);
		documon.Docutils.emptyNode(elem, true);
		spec.elem = null;

		for(var i = 0; i < tablist.length; i++){
			tablist[i].elem.style.transition = tabTransition;
		}
		reorder();

		if(typeof nextPageId != 'undefined'){
			callback("show", nextPageId);
		}
	}

	function reorder(movingID, movingPos){
		documon.Docutils.sortOn(tablist, "x");
		if(movingID){
			var dragItemMoveIndex = tablist.indexOf(dragItem);
			var prevX = 0;
			var movingMid = dragItem.width / 2;
			var movingX = movingPos.x + movingMid;
			for(var i = 0; i < tablist.length; i++){
				var item = tablist[i];
				if(item != dragItem){
					if(i > dragItemMoveIndex){
						if(prevX > movingX){
							item.elem.style.left = prevX + "px";
						}
					} else {
						if(prevX < movingX){
							item.elem.style.left = prevX + "px";
						}
					}
					item.x = prevX;
				} else {
					item.x = movingPos.x;
				}
				prevX += item.width;
			}
		} else {
			var prevX = 0;
			for(var i = 0; i < tablist.length; i++){
				var item = tablist[i];
				item.elem.style.left = prevX + "px";
				item.x = prevX;
				prevX += item.width;
			}
		}
	}

	var tabSpecOffset = null;

	function getTabSpecs(comp, elem){
		return {
			x: parseFloat(comp.left),
			y: parseFloat(comp.top),
			width: parseFloat(comp.width),
			height: parseFloat(comp.height)
		};
	}

	var dragItem;
	var dragItemStartIndex;

	function dragStart(id){
		//killDragTransitions();
		for(var i = 0; i < tablist.length; i++){
			var item = tablist[i];
			if(item.id == id){
				dragItem = item;
				dragItemStartIndex = i;
				item.elem.style.transition = "none";
			} else {
				// property | duration | timing function | delay
				item.elem.style.transition = tabTransition;
			}
		}
		callback("show", id);
	}

	//var transTimeout;
	function dragEnd(id){
		dragItem.elem.style.transition = tabTransition;
		//if(transTimeout){
		//	clearTimeout(transTimeout);
		//}
		//transTimeout = setTimeout(killDragTransitions, 1000);
		reorder();
	}

	/*
	function killDragTransitions(){
		for(var i = 0; i < tablist.length; i++){
			tablist[i].elem.style.transition = "none";
		}
	}
	*/

	function drag(elem, pos, kind, didMove, arg){
		if(kind == "start"){
			dragStart(arg);
		} else if(kind == "move"){
			reorder(arg, pos);
		} else if(kind == "end"){
			dragEnd(arg);
		}
	}

	function tabMouseDown(e){
		// Walk up to the LI when child nodes are clicked.
		var elem = e.target;
		while(elem && elem.nodeName != "LI"){
			elem = elem.parentNode;
		}
		callback("show", elem.dataset.id);
	}

	function clickCloseDown(e){
		e.stopPropagation();
	}

	function clickCloseUp(e){
		closeTab(e);
	}

	function closeTab(e){
		var id = e.target.dataset.id;
		destroy(id);
		callback("close", id);
	}

	function init(params){
		target = params.target;
		callback = params.callback;
	}

	return {
		init: init,
		make: make,
		destroy: destroy
	};

}());
For the past month or so, I've been more involved at Stack Overflow than I have ever been. I've accumulated 317 points (not much, but enough to be part of the community), and until this morning I had forgotten what I don't like about this community site.

My question: "Heuristically figuring out whether I'm running IPad1 or IPad2"

Let me go through the responses one by one, and I think you will see why this really annoyed me.

- nnnnn responds with "what do you mean 'heuristically'". Well nnnnn, shall I google that for you? If you had read my question and looked up "heuristically," you would know the answer to your own question. But then, nnnnn has over 9000 points, so who am I to suggest something like that.
- Mitch tells me to "Fix the cause, NOT the symptoms". Well Mitch, did you find that in some book of snappy forum retorts? How do you know that my problem can be fixed by reducing memory usage without killing the usefulness of the solution? No Mitch, you don't know that, so another unhelpful answer. But again, Mitch has over 11,000 points, so again, how am I to criticize.
- Finally, the last comment (by definition) says "closed as exact duplicate by Sam152 (5917 points), nnnnn (9527 points), Don Roby (11,854 points), Mitch Wheat (103,375 points) and Ben Voight (66,445 points)". Their reason, that my question is a duplicate, is patently false. Also, the answer they point to is patently false! Running iOS 5.0.1 on both iPad1 and iPad2 returns identical device strings. I know because I have both devices and I've tested that. Not only that, I'm guessing none of those high-point people even read my question, which explicitly asked for a heuristic solution.

Bottom line: very frustrating. I'm sure there are a lot of good things going on at Stack Overflow; however, today I feel like a squashed bug with my question.
On a slightly positive note, 2 people did point out that the question shows research effort and voted up the question, so I guess it’s not a lost cause (even though the door has been slammed shut by the 5 stackoverflow veterans who marked the question as “exact duplicate”). I asked for it to be reopened, but I suspect after this post I will probably lose my 317 points for questioning the kings.
OPCFW_CODE
Hardcoded ICE server URLs

Hi, Firstly - thanks for creating matchbox - I love it :) For me it takes ~30 seconds to establish a connection between peers using your library with wasm in a chromium browser, and I tried to dig deeper until I found a nice log of the WebRTC implementation within the browser via chrome://webrtc-internals/. I think I found the issue in these logs. Right after a ~30 second break in the log messages:

icecandidateerror
url: stun:stun.johanhelsing.studio:3478
address: [0:0:0:x:x:x:x:x]
port: 57086
host_candidate: [0:0:0:x:x:x:x:x]:57086
error_text: STUN binding request timed out.
error_code: 701

The url surprised me as I didn't configure it anywhere, and then I found it quickly in the client code right here: https://github.com/johanhelsing/matchbox/blob/0671bf1bd0be14f22bf838bfe1baf691017e802d/matchbox_socket/src/webrtc_socket/wasm/message_loop.rs#L287

As the request times out anyway, it would probably work just as well if it was disabled entirely. However, I suppose this part of the code originally made sense, so I would like to suggest making these URLs configurable. I am quite new to Rust, but if you like I could tackle this issue. Just let me know and I will see if I can get you a working PR.

Hi, thanks for the kind words, and thanks for reaching out :) So a STUN server helps when peers are behind a NAT (like an IPv4 wifi router) and don't have their own public IP address. It does some magic hole punching that I don't understand the details of. Often people use Google's STUN servers, and I also did that at first. However, I encountered some cases where those failed to punch holes in my NAT, but my own instance running the STUN part of coturn worked just fine, so I just changed the url and forgot about it. So yeah :) You're right that this part of the code is a leftover from the early days of the project. The STUN server needs to be there, but it definitely shouldn't be hard-coded to my domain.
I think it would make sense to have it default to a public STUN server that has a good success rate, but still leave an option to override it somehow.

As for why it takes 30 seconds for you: which version did you try? We implemented ICE candidate trickle in the main branch, which I think may help (it sends connection alternatives as they are gathered instead of sending all alternatives at once). So my hope is that in your case, it will just figure out that it can connect without STUN and then stop trying to connect to the STUN server. If you tried with 0.3.0, perhaps give the main branch a try instead? I tried with https://helsing.studio/box_game (which currently runs matchbox 0.3.0) in Firefox and Chrome on Windows, and for me it starts after about a second. If it's still an issue for you on the main branch, any info that would help us reproduce would be much appreciated! As for the API for configuring the STUN server, I'm open to suggestions (including breaking changes), but I'd prefer if it's still easy to use matchbox without having to know about it. Also, somewhat related issue: #3. Feel free to give it a go, and let me know if you need pointers or hit any stumbling blocks :)

Thanks for your quick reply. Yes, I use the 0.3.0 version of the client. I tried the code from the main branch but there was no improvement. For me (currently located in Vienna, Austria) I get the same timeout, and the same 30 seconds for your box game by the way, with the same message in the logs. Alright then, let me see if I can create a fix as easily as I imagine.

Sorry, I realized shortly after posting that I didn't actually finish the WASM version of ICE trickle (see edit above).
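As a rough illustration of the "public default, still overridable" API shape suggested above, something like the following could work (the `IceServerConfig` name and its fields are my own sketch, not matchbox's actual types; the default URL is just one commonly used public STUN server):

```rust
// Hypothetical sketch: a STUN/ICE server setting with an overridable default.
// None of these names come from matchbox itself.
#[derive(Debug, Clone)]
pub struct IceServerConfig {
    pub urls: Vec<String>,
}

impl Default for IceServerConfig {
    fn default() -> Self {
        // Default to a widely used public STUN server instead of a
        // hard-coded personal domain, but let callers swap it out.
        Self {
            urls: vec!["stun:stun.l.google.com:19302".to_string()],
        }
    }
}

fn main() {
    let cfg = IceServerConfig::default();
    assert_eq!(cfg.urls, vec!["stun:stun.l.google.com:19302"]);

    // A user behind a stubborn NAT can point at their own coturn instance.
    let custom = IceServerConfig {
        urls: vec!["stun:stun.example.org:3478".to_string()],
    };
    assert_eq!(custom.urls.len(), 1);
}
```

Keeping the override behind `Default` means existing callers who never think about STUN keep working unchanged, which matches the "easy to use without knowing about it" goal stated above.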
GITHUB_ARCHIVE
From email@example.com@21:1/5 to All on Fri Feb 7 21:56:32 2020

I thought of a way to store a canonical Huffman table: First, store one bit telling whether the largest code length is odd or even. Then store the number of codes of each length using truncated binary. Then store the values in truncated binary, alternating lowest and highest values, so that the window is made narrow after each one.

For the storage of the values, consider that values already specified are no longer possible for longer codes, so they can be skipped when making the window of possible values. Also consider that, because the number of codes of a length is known, you might know that the next value (which is the lowest, or maybe highest, used value with this code length) cannot be the few highest (or maybe lowest) values in the window, because then it would not leave enough room for any other values for codes of this length, so this reduces the window further. Sometimes the window will be reduced to a single value in this way, meaning it can be coded in zero bits.

For the list of how many codes per length, note that to determine the maximum, start with 1 for codes of zero length, and then double the number of unused codes for the number of codes of the next length. The bit at the beginning specifies whether the largest code length is odd or even, so if you know this isn't the largest code length then the maximum can temporarily be decreased by one.

3 -- space a e
4 -- f h i m n s t
5 -- l o p r u x

Assume these are 8-bit ASCII codes, so the range of values is 0 to 255. The maximum code length is five. Count codes per length:

0: 0/1 *
2: 0/4 *
4: 7/10 *

The ones with asterisks are one less maximum. So the encoding will be:

1..0.00.011.1101.111 (fourteen bits)

Now we have to encode the values that the codes represent.
The values to be encoded, and the ranges that they are encoded in, are:

(0-255 / 0-253) 32
(33-255 / 34-255) 101
(0-255 except 32, 97, 101 / 0-249) 102
(103-255 / 108-255) 116
(103-115 / 103-111) 104
(105-115 / 108-115) 115
(105-114 / 105-112) 105
(106-114 / 107-114) 110
(0-255 except numbers used above / 0-250) 108
(111-255 except 115, 116 / 115-255) 120
(111-119 except 115, 116 / 111-114) 111
(112-119 except 115, 116 / 114-119) 117
(112-114 / 112-113) 112

Each range in parentheses first gives the range not considering the number of codes of this length, and then the second range, after the slash, uses the consideration of how many codes of this length to shorten the window (with the same exceptions listed before the slash). The encoding into binary is left as an exercise for the reader. Further improvements may also be possible, which I have not considered. (Possibly variable base coding, nCr coding, etc. Those might be more complicated to implement, though; I am unsure.) I have not compared this scheme with other schemes, such as the way used in DEFLATE. Do you know how well it works compared with the other way? Are there other schemes that I might not know of?

Note: I am not always able to read/post messages during Monday-Friday.

From Harry Potter@21:1/5 to All on Thu Mar 26 15:04:05 2020

I have my own technique to shorten Canonical Huffman codes. It follows:

* Start with the 16-code ranges to be included. Basically, 16 bits are used, where each bit contains whether a given 16-value range will be excluded from the header. I prepend a bit to determine whether any range will be excluded. This can save 15
* Write the least and most bits per entry. The most only needs to take into account the range from least to 16. Arithmetic coding can be used. I have a way of shortening values that are *not* in a range of a power of 2 but am not currently going to
* Write a bit to indicate whether a 0-value is used.
* Write the most-used value.
* Scan the Huffman table:
  * Write a bit to determine whether the current value is the same as the previous.
  * If not, write a bit to determine whether it is the most-often-used value.
  * If not, write the value minus the least size plus whether the zero-length value is used, or, if a zero-value, 0, and skip the previous value and the most-occurring value.

Try this out, and tell me what you think. BTW, I find that skipping blocks compressible with LZ helps with the size of the header. BTW, I'd better study your idea and see if I can add it to my technique.
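Both schemes above lean on truncated binary coding, so a minimal encoder may be a useful reference (the function name is mine, not from either post; it handles the zero-bit case mentioned above, where the window holds a single value):

```python
def truncated_binary_encode(x, n):
    """Encode x in [0, n) with truncated binary coding.

    With k = floor(log2 n) and u = 2**(k+1) - n, the first u values
    get k bits and the remaining values get k+1 bits (offset by u).
    """
    assert 0 <= x < n
    if n == 1:
        return ""          # only one possible value: zero bits needed
    k = n.bit_length() - 1
    if 2 ** k == n:        # exact power of two: plain k-bit binary
        return format(x, f"0{k}b")
    u = 2 ** (k + 1) - n
    if x < u:
        return format(x, f"0{k}b")
    return format(x + u, f"0{k + 1}b")

# e.g. a count of 0 out of a maximum window of 4 values encodes as "00",
# matching the two-bit fields in the fourteen-bit example above.
print(truncated_binary_encode(0, 4))
```

For a window of 11 possible values, 5 of the codes come out three bits long and the other 6 come out four bits, which is where the fractional-bit savings over plain binary come from.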
OPCFW_CODE
Defaults aren't used with Sumneko Lua

I installed sumneko using kabouzeid/nvim-lspinstall, and navigator has only this in the config key in packer:

require('navigator').setup({lspinstall = true})

So logically it should apply all of the settings for sumneko, right? Yet when I run neovim I get notes about Undefined global 'vim', so obviously this line isn't being applied:

globals = {"vim", "describe", "it", "before_each", "after_each", "teardown", "pending"}

That is because the lspinstall = true setup will make navigator skip the sumneko config. I assume you will use lspinstall to config sumneko. If you would like to config sumneko with navigator, you can set it up with

-- if you have lua-dev
local luadev = {}
local ok, l = pcall(require, "lua-dev")
if ok and l then
  luadev = l.setup(cfg)
end
-- end lua-dev
luadev.sumneko_root_path = sumneko_root_path
luadev.sumneko_binary = sumneko_binary
require('navigator').setup({lsp = {sumneko_lua = luadev}})

Ok. I think I've figured it out. To use the defaults from navigator, and use the lua ls from lspinstall, I just set:

local sumneko_dir = vim.fn.stdpath('data') .. '/lspinstall/lua/sumneko-lua/extension/server'
require('navigator').setup({lsp = {
  sumneko_lua = {
    sumneko_binary = sumneko_dir .. '/bin/Linux/lua-language-server',
    sumneko_root_path = sumneko_dir,
  }}})

So now my main remaining confusion is why the server (when started by lspinstall) gives warnings for unused variables (🦊 Unused local 'poo'), but navigator doesn't when running the above.

You should also see Unused local. What if you enabled lua-dev as well? Also here is my minimum vimrc: https://github.com/ray-x/nvim/blob/4c70f277304a574c9fe48b4aa8adc711cb7cc707/min/vimrc.min#L176-L195

Using lua-dev makes no difference. Even global vim is not working. My navigator.lua:

local api = vim.api
local test
local sumneko_dir = vim.fn.stdpath('data') ..
'/lspinstall/lua/sumneko-lua/extension/server'
local luadev = {}
local ok, l = pcall(require, "lua-dev")
if ok and l then
  print("luadev setup")
  luadev = l.setup({ library = { vimruntime = true, types = true, }})
end
luadev.sumneko_root_path = sumneko_dir
luadev.sumneko_binary = sumneko_dir .. '/bin/macOS/lua-language-server'

require'navigator'.setup({
  default_mapping = true, -- set to false if you will remap every key
  keymaps = {
    {key = "gK", func = "declaration()"},
    {key = "gr", func = "references()"},
    {key = "ga", func = "code_action()"},
    {key = "gi", func = "implementation"},
    {key = "gd", func = "definition()"},
    {key = "gf", func = "formatting()"},
    {key = "gL", func = "require('navigator.diagnostics').show_line_diagnostics()"},
    {key = "gG", func = "require('navigator.diagnostics').show_diagnostic()"},
    {key = "]d", func = "diagnostic.goto_next({ border = 'rounded', max_width = 80})"},
    {key = "[d", func = "diagnostic.goto_prev({ border = 'rounded', max_width = 80})"},
    {key = "]r", func = "require('navigator.treesitter').goto_next_usage()"},
    {key = "[r", func = "require('navigator.treesitter').goto_previous_usage()"},
  },
  lspinstall = true,
  lsp = { format_on_save = false, sumneko_lua = luadev }
})

api.nvim_command("hi default LspReferenceRead cterm=bold gui=Bold ctermbg=yellow guifg=none guibg=Black")
api.nvim_command("hi default LspReferenceText cterm=bold gui=Bold ctermbg=yellow guifg=none guibg=Black")
api.nvim_command("hi default LspReferenceWrite cterm=bold gui=Bold ctermbg=yellow guifg=none guibg=Black")
api.nvim_command("hi default GHListHl guifg=#eceff4 ctermfg=189 guibg=#2d3439 ctermbg=238 gui=bold,italic cterm=bold,italic")

As you are still using lspinstall = true, I think you will need to set lspinstall = false, so lua sumneko will be loaded through navigator.

So setting lspinstall = true or disable_lsp = 'all' will both disable navigator from loading its default lsp config? I thought lspinstall = true would just tell navigator to use the lsp binary in the lspinstall dir.
My bad. lspinstall = true will load lsp using lspinstall.setup(). I did not fully support lspinstall yet. One reason is that the lsp client naming from lspinstall is different, e.g. sumneko was named as lua.

@ray-x I have changed my navigator config to exactly match the lines from your minimum provided example:

local lspinstall_dir = vim.fn.stdpath('data') .. '/lspinstall'
local efm_cfg = {
  flags = {debounce_text_changes = 2000},
  cmd = {lspinstall_dir .. '/efm/efm-langserver', '-loglevel', '1', '-logfile', vim.fn.expand("$HOME") .. '/tmp/efm.log'}, -- 1~10
  init_options = {documentFormatting = true, codeAction = false, document_formatting = true},
  root_dir = require'lspconfig'.util.root_pattern({'.git/', 'package.json', '.'}),
  on_attach = function(client)
    client.resolved_capabilities.document_formatting = true
    client.resolved_capabilities.goto_definition = false
    local log = require("guihua.log").new({level = "info"}, true)
    vim.cmd([=[autocmd BufWritePre <buffer> lua vim.lsp.buf.formatting()]=])
  end,
  filetypes = { 'go', 'lua', 'sql' },
  settings = {
    rootMarkers = {".git/", 'package.json', 'Makefile', 'go.mod'},
    lintDebounce = "1s",
    formatDebounce = "1000ms",
    languages = {
      go = {
        {
          formatCommand = "golines --max-len=120 --base-formatter=gofumpt",
          formatStdin = true,
          lintCommand = "golangci-lint run",
          LintSeverity = 3
        }
      },
      lua = {
        {
          formatCommand = "lua-format --indent-width 2 --tab-width 2 --no-use-tab --column-limit 120 --column-table-limit 100 --no-keep-simple-function-one-line --no-chop-down-table --chop-down-kv-table --no-keep-simple-control-block-one-line --no-keep-simple-function-one-line --no-break-after-functioncall-lp --no-break-after-operator",
          formatStdin = true
        }
      },
    }
  }
}

local sumneko_dir = lspinstall_dir .. '/lua/sumneko-lua/extension/server'
require('navigator').setup({
  debug = true,
  width = 0.7,
  border = {"╭", "─", "╮", "│", "╯", "─", "╰", "│"},
  lsp = {
    sumneko_lua = {
      sumneko_binary = sumneko_dir .. '/bin/Linux/lua-language-server',
      sumneko_root_path = sumneko_dir,
    },
    efm = efm_cfg,
  },
})

local poo = 'but'

I even included the efm_cfg part in case it mattered (which it doesn't seem to, as efm isn't attached, but listed under Other clients that match the filetype: lua), but I still get no warnings at all. Not for unused variables, trailing whitespace... no matter what I type or do, there are zero warnings or errors from the LSP.

Does the latest version work for you?

@ray-x Nope. If lua-dev is absent I still get the following error:

Error: attempted to load lua-dev.nvim which is not present in plugins table!
Error in packer_compiled: ...vim/site/pack/packer/opt/packer.nvim/lua/packer/load.lua:13: Error: attempted to load lua-dev.nvim which is not present in plugins table!
Please check your config for correctness

When lua-dev is there, it "loads" fine. The quotes are because even though it starts without error and sumneko_lua gets attached to the buffer, nothing happens (I have intentional 'issues' in spots of my code, like unused variables, that don't produce any warnings or anything).
GITHUB_ARCHIVE
An instance of aggregation is the Car and Engine entities. This means a component can exist on its own, independent of the whole; there is no strong ownership between a car and its engine. Similarly, department number and name can be defined as attributes of a department. A department can be associated with many employees, but an employee can belong to only one department, hence a one-to-many relationship can be defined between department and employee. Program is a strong entity, with the identifier program_id as the primary key used to distinguish between programs. The only strong entity in the database is Artist, which has an artist_id attribute that uniquely identifies it. Each Album entity is uniquely identified by its album_id combined with the artist_id of the corresponding Artist entity. This becomes increasingly complex as more patients, doctors, specializations, and privacy concerns are added into the mix. In health care professions, where certain conditions can literally be life or death, efficiency is of the utmost importance. ER diagrams are essential to keeping hospital databases well-organized and free from error. While the figure below depicts a simplified and basic interaction with a doctor’s office, we can already see many entities and complex relationships between them. A full-service hospital ER diagram would be much larger. ER diagrams are especially useful in organizations with larger and more complex database systems that have many different entities, data points, and relationships between them. Entities are equivalent to database tables in a relational database, with each row of the table representing an instance of that entity. Enhanced entity-relationship diagrams are essentially a more expansive version of ER diagrams. EER models are useful tools for designing databases with high-level models.

With their enhanced features, you can plan databases more thoroughly by delving into the properties and constraints with greater precision. Speaking of readability, in these ER diagrams, naming values and objects is half the battle. That name should represent the data contained within, because while a variable named Ren_and_Stimpy may be very recognizable, if it contained an order quantity, that would be terribly confusing. Customer information might include things like their name, address, and so on. Customer data should, however, not include extraneous details like their sex or skin color. While elaborating on what each entity has, be careful not to store personal details carelessly, as data leaks have become increasingly common over the years. Keeping the data secure is essential, but keep in mind, the less you collect, the easier it is to encrypt or protect from outside interference. For the OO-minded reader, disjoint hierarchies are rather like single-inheritance type hierarchies, whereas overlapping hierarchies are like multiple-inheritance type hierarchies. A one-to-one relationship is mostly used to split an entity in two, to present information concisely and make it more understandable. The figure below shows an example of a one-to-one relationship. Discuss the specialization/generalization relationship in ER modelling. The attributes in the superclass are duplicated in all subclasses. All the entities in the relationship are mapped to individual tables. The process of creating a superclass from a group of subclasses is known as generalization. If so, draw a solid line connecting the two entities. Here’s an example of a very basic database structure generated from data. Do the same for all of the attributes identified in the members’ entity. Each attribute in a given entity should have a unique name.

For each author, the database maintains his/her ID, name and a short introduction. Each book is stored in exactly one warehouse with a specific quantity. For each warehouse, the database maintains the warehouse name, the location and the telephone number. Each book has one or more sellers, which can be either companies or individuals. Using an example, explain the ideas of aggregation and composition. Database creation and patching – Visual Paradigm, an ERD tool, supports a database generation tool that can automate the database creation and patching process via ER diagrams. So, with this ER diagram tool, your ER design is not just a static diagram but a mirror that truly reflects the physical database structure. Database design – Depending on the scale of the change, it may be risky to alter a database structure directly in a DBMS. To avoid ruining the data in a production database, it is important to plan out the changes carefully.
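The one-to-many department/employee relationship described above maps directly onto a foreign key; a minimal sketch in SQLite (table and column names here are illustrative, not from any real schema in the text):

```python
import sqlite3

# One department, many employees: the foreign key lives on the "many" side.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE department (
        dept_id INTEGER PRIMARY KEY,
        name    TEXT NOT NULL
    );
    CREATE TABLE employee (
        emp_id  INTEGER PRIMARY KEY,
        name    TEXT NOT NULL,
        -- each employee belongs to exactly one department
        dept_id INTEGER NOT NULL REFERENCES department(dept_id)
    );
""")
con.execute("INSERT INTO department VALUES (1, 'Sales')")
con.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                [(1, 'Ada', 1), (2, 'Grace', 1)])

# Many employee rows point back at the one department row.
count = con.execute(
    "SELECT COUNT(*) FROM employee WHERE dept_id = 1").fetchone()[0]
print(count)  # 2
```

Placing the `dept_id` column on `employee` (rather than an employee list on `department`) is exactly what the ER one-to-many cardinality dictates.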
OPCFW_CODE
Dual boot: how to target the new grub

I regularly completely reinstall my system. Before, I used to overwrite the existing one on a fresh start, but I realized I was losing interesting things (file configurations etc.). So instead, I started a new strategy: two partitions to hold 2 installs, so I can keep the old one as a transition to the new one. They both share the same /home partition. To summarize, I have:

/dev/sda1 15Gb ubuntu1
/dev/sda2 15Gb ubuntu2
/dev/sda3 100Gb /home
/dev/sda4 4Gb swap

I'm happy with this. Today I made the fresh install on ubuntu2. It worked, except for one thing: on boot, the Grub config that is being used is still the ubuntu1 one. It lists ubuntu2 in the menu since I ran sudo update-grub. But the problem is, at some point I'm going to format ubuntu1, and at that time my system won't boot at all. So my question is: How can I tell the MBR/BIOS/whatever to boot the grub on the ubuntu2 partition now? Or maybe, in my current layout, should I create a /boot partition in order to achieve that (one that never gets removed)? I am very confused with the whole MBR/grub thing, and also with the concept of a /boot partition. Thanks for your support!

Only one system can have its boot loader in the MBR with BIOS/MBR systems. You can, from any Ubuntu install, easily install grub to the MBR so that install's grub is in control. But grub also remembers where to reinstall itself, so it is best to reconfigure so that only one install updates the MBR. You can un-choose all drives (or only your one drive): http://askubuntu.com/questions/503417/how-to-prevent-ubuntu-from-overwriting-grub-bootloader-after-update/503446#503446 Also, it is best not to share /home. If both installs are the same version it should work, but it is better to have shared data partition(s) and back up /home, which then is usually very small.

You could use boot-repair to restore your MBR. Follow the steps below. You need an Ubuntu Live CD or Live USB; a normal session can be used to repair grub.
Boot using your Ubuntu Live CD or Live USB, and while booting choose Try Ubuntu. Once booted, open a terminal and run the following commands one by one to install boot-repair.

Add the boot-repair PPA:
sudo add-apt-repository ppa:yannubuntu/boot-repair

Update your repository:
sudo apt-get update

Install boot-repair:
sudo apt-get install -y boot-repair

Once installation is complete, run boot-repair in the terminal by typing the following command, or select it from System -> Administration -> Boot Repair.

boot-repair

It will scan the system for a few seconds and pop up a small window. Select Advanced options and, in the Main options tab, select Restore MBR. Then select the MBR options tab and check the drive for the MBR and the Ubuntu install. By default all the options are selected correctly; if not, select the drive where your MBR should be and the correct OS. Once done, click OK and restart your system.
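For the original question (pointing the MBR at ubuntu2's GRUB without boot-repair), the usual approach, run from the booted ubuntu2 install itself, would be roughly the following; a sketch only, and the device name assumes the single-disk /dev/sda layout above:

```shell
# Write ubuntu2's GRUB stage into the MBR of the whole disk
# (the disk /dev/sda, not a partition like /dev/sda1):
sudo grub-install /dev/sda

# Regenerate ubuntu2's menu so it lists both installs:
sudo update-grub
```

After this, the GRUB that loads at boot is ubuntu2's, so formatting ubuntu1 later no longer breaks booting. Combined with the "un-choose all drives" reconfiguration linked above on ubuntu1, only ubuntu2 will keep updating the MBR from then on.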
STACK_EXCHANGE
I’ve just noticed on the Netrunner home page there’s mention of “Rolling 2014.0[color=#FF0000]8[/color]”, but when I go to the downloads page, it’s “Rolling 2014.0[color=#FF0000]4[/color]”. Is this a typo?

That’s a good question. Looking at the server, those ISOs never got updated to 2014.08. With that said, it really doesn’t matter that much, since it’s a rolling release and just a sudo pacman -Syyuu away from being current. Now, if they do wait too long, it could become an issue if there are any major system changes in the interim. Perhaps it might be a good idea if someone contacted the server admin and pointed this out.

I think we have already reached that point with ISO 2014.04, having tried to install it on a new laptop last week. Not so much a major issue but a number of little ones. I finally gave up and loaded Manjaro KDE 0.8.11pre2.

Well, not quite. What I meant by major system changes are things like when they changed the graphics stack, the move to systemd, or when the file structure changed (/bin, /sbin and /usr/sbin to /usr/bin), etc. However, unless you look at the Readme Rolling Tutorial from the web site (http://www.netrunner-os.com/readme-arch/) or read this forum post (http://forums.netrunner-os.com/showthread.php?tid=13888) before you try to update for the first time, then yes, you could run into issues.

My experience differs. Following the process outlined in the references you cite still does not result in a fully functional, problem-free installation. I have done it several times, on real hardware and in virtual machines, in the last week. The most visible issue is the MariaDB update. This will still break kmail/akonadi. There are other minor glitches as well. Yes, one can work through the issues following the procedures that have been posted and documented over time. The point is that a new ISO would not have these issues. Issues we are only dealing with because it has been too long since the last ISO and there are too many big steps.
It is time for a new ISO.

I can name at least one problem that still exists on the latest Manjaro KDE ISO: I wonder why this just can’t seem to be fixed on the install media? I do however agree that they should update the Netrunner Rolling ISOs more often, maybe even shortly after every Manjaro KDE release ISO (0.8.9, 0.8.10, 0.8.11, etc.). Those of us that already have an installed system can just keep rolling along as always, but it would make installing easier for new users.

For the kmail user name problem, I found that I can create my accounts and identities and then go back and delete the default account. I have not tried the method in the thread you referenced. I think that this may also be a KDE PIM problem. I just spent a bunch of time investigating backing up and migrating KDE PIM data, including setting up several VMs to test it all. My impression in the end is that most developers and testers are not testing from new installations but testing changes and updates. There is also a lack of comprehensive documentation for most Linux software. For example, creating a new blank address book in Kontact: everything I tested was tied to the premise of importing an existing address book or pointing to a corporate resource like an LDAP database.

It is a tough balancing act to create and maintain a distro. Everyone has a different idea of the perfect distro. I haven’t used Debian for many years, but I remember the old ncurses installer that would let you select packages by groupings and then let you select or deselect every package to be installed. I liked that flexibility, but today’s reviews would scream murder at that many choices. Somehow we expect to click install and have a perfect system with just the applications we prefer. The telepathic Linux distro: click install and it reads your mind and builds your perfect system. Linux distros have come a long way in the 20 years or so since I chose Linux as my primary operating system.
Manjaro and Netrunner are simply the finest I have ever used, even with their idiosyncrasies, minor glitches and frustrations.

It is indeed a typo, as the current ISOs were delayed due to a bug; they will most likely then be called 2014.09, as we are already in September.
OPCFW_CODE
Two datasets (project3_dataset1, project3_dataset2) can be found on Piazza. Please check the README file first for a short description of the two datasets. This is a team project. Each team consists of at most three members. Complete the following tasks: - Implement three classification algorithms by yourself: Nearest Neighbor, Decision Tree, and Naïve Bayes. - Implement Random Forests based on your own implementation of Decision Tree. - Adopt 10-fold Cross Validation to evaluate the performance of all methods on the provided two datasets in terms of Accuracy, Precision, Recall, and F-1 measure. - We will send you an invite for a Kaggle competition. For that dataset, we hold out the class labels for testing data. Apply various tricks on top of any classification algorithm discussed in class (including nearest neighbor, decision tree, Naïve Bayes, SVM, logistic regression, bagging, AdaBoost, random forests) and tune parameters using training data. You can call packages for these algorithms but need to implement any improvement on top of these algorithms. Submit your classification result for the testing data. Your efforts towards improving these algorithms will be evaluated. Those who are among the top on the leaderboard after the deadline will receive bonus points. Your final submission should be a zip file named as project3.zip. In the zip file, you need to include a folder Code and a folder Report: - Code: Implementation of four methods and your implementation for the Kaggle competition. The four methods must be implemented by yourself. The implementation for the Kaggle competition can use learning packages of the algorithms that were mentioned, but the improvement should be implemented by yourself. Together with your code submission, a README file should be included to explain how to execute your code. 
- Report: For the four methods: Describe the flow of all the implemented methods, and describe the choices you make (such as parameter settings, pre-processing, etc.). Compare their performance, and state their pros and cons based on your findings. For the competition: Explain why a certain base algorithm was chosen, state clearly the parameters you choose for this algorithm, and discuss the improvements you have made towards its performance.
- Log in to any CSE department server and submit your zip file as follows: >> submit_cse601 project3.zip

The details about the demo will be released two days before the demo date through Piazza. Please note:
- New datasets will be given to check your implemented classification methods and performance measures. The data format will be consistent with the README file that we already provided.
- During the demo, you will be asked to adopt a specific setting and run your code.
- We will ask you to explain the basic idea of the method you use for the competition.
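Since both the 10-fold cross validation and the four metrics must be implemented by hand, one possible shape is sketched below (pure Python, binary 0/1 labels; the function name and the `train_and_predict` callback are illustrative, not part of the assignment spec):

```python
import random

def kfold_metrics(X, y, train_and_predict, k=10, seed=0):
    """Evaluate a classifier with k-fold cross validation.

    `train_and_predict(train_X, train_y, test_X)` is any classifier
    function; returns (accuracy, precision, recall, f1) pooled over folds.
    """
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]          # k disjoint test folds
    tp = fp = fn = tn = 0
    for fold in folds:
        test_set = set(fold)
        tr = [i for i in idx if i not in test_set]  # remaining k-1 folds train
        preds = train_and_predict([X[i] for i in tr], [y[i] for i in tr],
                                  [X[i] for i in fold])
        for i, p in zip(fold, preds):               # pool confusion counts
            if p == 1 and y[i] == 1: tp += 1
            elif p == 1 and y[i] == 0: fp += 1
            elif p == 0 and y[i] == 1: fn += 1
            else: tn += 1
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1
```

Any of the four required classifiers can then be plugged in as the `train_and_predict` callback, so the evaluation code is written once and shared across Nearest Neighbor, Decision Tree, Naïve Bayes, and Random Forests.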
OPCFW_CODE
feat: building google analytics plugin

Description ... ... ...

Related issue(s)

@fmvilas Please kindly check out this draft PR and help me out 🙏

@AceTheCreator I copy your comment from the issue :)

Description

We used to have Google Analytics hardcoded (see #13). However, some people may not want to use it, so it's better if we make it a plugin instead.

Implementation suggestion

Go to the src/pages/_app.js file and make it trigger a new kind of event in the render function. For instance, something like a page:render event. Make sure this event is only triggered on the client-side render. Make the plugin subscribe to the event and trigger an initialize and pageview call when a new event comes in.

Hints

The code for triggering/subscribing to events is in src/lib/events.js. However, this library was created for the backend and it imports the config lib, which may contain secrets. If we use the events lib on the frontend, these secrets may be exposed in the browser. Watch out!

@fmvilas when I subscribe to the event from a different file and trigger it in the _app.js render, it doesn't work. For instance, when I call the event names from _app.js to check the list of subscribed events, it returns an empty array, except when I call both the subscribe and the trigger within the page render function in _app.js.

@fmvilas please check out my updated draft PR

The problem is that you don't have any triggers on the client side, because by writing code in the render function under typeof window !== undefined you execute that code only on the client side; this means events bound to the registry are missed while rendering the page on the server. NextJS works this way: it prepares data for the component (the getInitialProps function), then this data is transformed to JSON and attached to the page. NextJS renders the page on the server side, using of course the getInitialProps function, and then if the user goes to the page, React on the client side hydrates the page using this attached JSON data.
Your case is hard, because you try emit event from registry binded on server side and then this data, on client side, is lost. So we must think how to "attach" events from external packages to client side 😢 The problem is that you don't have any triggers in the client side due to fact that writing code in render function under typeof window !== undefined you execute code only on client side, it means that binded events to registry are missed during rendering page on the server. NextJS works in this way that prepares data for component (getInitialProps function), then these data is transformed to JSON and attached to page. NextJS render page on server side, using of course getInitialProps function and then if user go to page React on client side hydrate page using these attached JSON data. Your case is hard, because you try emit event from registry binded on server side and then this data, on client side, is lost. So we must think how to "attach" events from external packages to client side and also how to import external plugins for client side, because at the moment it only works for server side (code for that is in plugins.js file). @magicmatatjahu what do you think will be the best approach in tackling this problem? @AceTheCreator I opened issue https://github.com/asyncapi/studio/issues/80 Probably we will remove the NextJS and go with pure React app, so I will close PR, or do you have a different opinion on the subject? @AceTheCreator I opened issue #80 Probably we will remove the NextJS and go with pure React app, so I will close PR, or do you have a different opinion on the subject? @magicmatatjahu It's fine. So will any suggested plugins be built separately? @AceTheCreator It depends. From what he remembers, the chatbot needs a token for this AI service (I don't remember name, sorry). If there is a library that can communicate in the browser, then the chatbot could be implemented just for the browser. 
As for the server API, you could use a websocket that reads the answers and asks the next question based on it :) @AceTheCreator The Studio will be SPA application, so there will be plugins - you can read proposal here. If you mean plugins for (server) API then we can also reuse my proposal - you can add comment/ask about plugins in the https://github.com/asyncapi/studio/issues/86 issue :)
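The suggested plugin design can be sketched as a tiny, browser-safe pub/sub registry with a Google Analytics plugin subscribed to a page:render event. This is an illustrative sketch only: the names (subscribe, emit, page:render) and the ga() stub are assumptions for demonstration, not the actual API of src/lib/events.js or the project's plugin system.

```javascript
// Hypothetical minimal event registry, modeled loosely on the idea of
// src/lib/events.js but with no server-side config imports, so it is
// safe to ship to the browser.
const registry = new Map();

function subscribe(eventName, handler) {
  if (!registry.has(eventName)) registry.set(eventName, []);
  registry.get(eventName).push(handler);
}

function emit(eventName, payload) {
  (registry.get(eventName) || []).forEach((handler) => handler(payload));
}

// Stand-in for whatever analytics call the real plugin would make;
// calls are recorded so the behavior can be inspected.
const calls = [];
function ga(...args) { calls.push(args); }

// The GA plugin subscribes once and reacts to every client-side render.
subscribe('page:render', ({ path }) => {
  ga('send', 'pageview', path);
});

// In _app.js, the render path would emit only on the client, e.g.:
//   if (typeof window !== 'undefined') emit('page:render', { path: location.pathname });
emit('page:render', { path: '/docs' });
console.log(calls); // [ [ 'send', 'pageview', '/docs' ] ]
```

The key point from the thread still applies: both subscribe and emit must run in the same (client-side) module registry, otherwise handlers bound during server-side rendering are simply absent in the browser.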
CERES 4GL is a computer language written in pure C, originally meant to run on MS-DOS(Tm). Its syntax borrows heavily from COBOL and FORTRAN. Its main developers were Brian Summers, Enzo Pompa and Marcello Pompa, all living in South Africa. The language's history is obscure and convoluted. Back in 1981, a small software house headed by the late Brian Summers started a language called Generex. This was a practical extraction and reporting language written to run on ICL mainframes. Generex enjoyed reasonable success, fading into obscurity when the personal computer came along. Armed with ideas gleaned from the Generex experience, Mr. Summers headed off to the United States of America. Brian, what can I say, he was a fantastic salesman. He managed to sell Easytrieve Plus for PC to the Easytrieve guys. Those were the mainframe days, and Easytrieve was everywhere, so much so that there wasn't a single mainframe left to sell it to! A little context... Brian is your average maverick. He sold a product to a company when all he had was a pretty printed box and an imported HP luggable PC from Germany. Through sheer bloody-mindedness and awesome amounts of talent, Enzo and Marcello managed to hack the language together in a hotel room in Boston. When the product passed QA and Introdell(tm) was paid in dollars, Brian had the vision of writing a business language with proper screen handling for the PC. Brian plowed all the cash they had back into Introdell and started development of the CERES language. The name stayed: CERES was actually Series 1, the prototype. Thousands of lines of very tight C code were written, and in the space of four years the language was complete. CERES version 1.A.0 was released. Damn, did this thing have bugs. With time, the language grew and became very stable; one problem, the sale of the language to a big-bucks international computer company never came. A South African software house bought Introdell and funded the first port to UNIX.
On UNIX the language finally came into its own. Ports to SYS V Release 4 (SVR4), Xenix and SCO followed. When TCP/IP exploded onto the scene, network extensions were added. The coup de grace was the Linux port, done on a Slackware box running kernel 1.2.13, with terrible fcntl() support. Most of this work was done by Dr. Alex Harin, a genius. With Linux and millions of lines of CERES code hauling data across most of Africa south of the equator, it became a legend. The language is still maintained. The latest release is 4.B.4, with version 5.A.0 due shortly. May the legend of CERES live on now that it is facing obsolescence...
Discipline: Biological Sciences
Subcategory: Physiology and Health
Jordan Dreher - Norfolk State University
Co-Author(s): Andrew Alexander, Boston University, MA; Lucas Carstensen, Boston University, MA; Michael Hasselmo, Boston University, MA

The retrosplenial cortex (RSC) is an association area that is highly interconnected with sensory and motor processing regions, as well as with structures known to represent an animal's position or heading direction with respect to the external environment. Neural spatial representations have primarily been studied while animals freely forage in a large open arena. In contrast, RSC spatial responses have, with a few exceptions, been examined primarily while the animal performs track-running tasks. Given that RSC serves as both an input and an output center for the broader neural spatial circuitry, it is important to examine RSC spatial responses in two-dimensional environments. The purpose of this experiment was to further characterize RSC spatial representations during free exploration. To do so, in vivo electrophysiological recordings were performed while rats explored familiar and novel arenas that had varying shapes or visual cue configurations. To perform these recordings, a 16-tetrode hyperdrive was manufactured and implanted into the rat's brain. It was determined that a sub-population of RSC neurons exhibited activity tuned to the egocentric relationship of the animal with respect to environmental boundaries. This may have important implications for goal-directed navigation. Pilot injections expressing genetically-encoded calcium indicators were completed to enable one-photon imaging of these neurons, Egocentric Boundary Cells, in RSC and dorsal striatum. This imaging will enable simultaneous recordings of more neurons to elucidate their role in more complex spatial tasks.

Funder Acknowledgement(s): Funding provided by the National Science Foundation. I thank Helen Fawcett, Co-Director of the National Science Foundation Research Experience for Undergraduates in Integrated Nanomanufacturing, and the Hasselmo Lab.
Faculty Advisor: Andrew Alexander, email@example.com
Role: I assisted in manufacturing two 16-tetrode hyperdrives, observed the implantation of the drives, and assisted with the electrophysiological recordings. I also performed histology to image the cells within the brain tissue.
avoid creating stack for every route

Please verify:
- add() will create the stack with this route as the kernel when adding a middleware
- if the route is a match, the run() method (= the FastRoute route handler) will call callMiddlewareStack(), which will seed a new stack if it wasn't created before.
- if no route middleware is added, the stack will be created only for the matching route, with this route as the kernel and only middleware.

(We could also skip stack creation in this case and invoke the route directly... In my implementation, with a separate stack class, the stack is invokable and, when invoked, it starts calling the middlewares. So for a MiddlewareAware / StackKernelTrait-like object I set a $runner var to the stack if present, or to the object itself, and invoke $runner.)

//pseudo-code
$runner = $this->getStack() ?: $this;
$response = $runner($req, $res);

regards

You are right, seedMiddleware is useless; indeed App doesn't call it.

Hello @lalop Actually the application run() method calls callMiddlewareStack(), which in turn creates a new stack for the application middleware using seedMiddleware() and sets itself (the application) as the kernel. For the application stack this is not that much of a problem, since it's done once per request. But I believe it would be better not to create a middleware stack (the process is the same as for the application) for every route if no route middleware is used, i.e. a middleware stack with only the kernel. This would benefit cases in which the number of routes is significant. Anyway, the common design (and the consequent use of a common trait) is very clean, simple and (node.js-like) elegant.

Hello @pine3ree, I see your point, but if this PR is merged, a route without middleware will have a stack only if it's called, since the stack will be created in Route::run. Isn't that the target?

Hello @lalop, yes that's right. As for the application, the stack will be created only for the matching route, when the run method is called or whenever a middleware is added to the route. Maybe I misunderstood your comment. What I meant is that seedMiddleware is still called by the app (and by the matching route), but not invoked directly. It is indeed "lazily" invoked. So I believe there is no need to call it in the route constructor, and it would be better to let only the matching route, or routes having middlewares, build their own stacks when needed (within the run() method, or the add() method, which calls the aliased method addMiddleware == trait MiddlewareAware::add, which in turn calls its own method seedMiddleware). kind regards

hum... either we agree, the call to seedMiddleware is useless, and your PR is welcome imho, or I don't understand :)

@lalop I believe we basically agree. :-) At least we agree on the fact that seedMiddlewareStack() shouldn't be called directly in the route constructor. If we avoid that, we would have just 2 middleware stacks in most cases: 1 belonging to the application, 1 to the matching route. We will have 1 more route stack per subRequest in case of subrequests, and 1 more route stack creation for every route that has middlewares. If we call it DIRECTLY in the route constructor, we will create a stack for every route, which is not needed. In your comment you said: "indeed App doesn't call it". What I meant is that the App doesn't call it directly, but it still calls it indirectly, as it's invoked inside the MiddlewareAware trait methods callMiddlewareStack(...) and add(...). When you call add(...)/addMiddleware(...) or run(...) from the app or from a route, you call those 2 trait methods, and indirectly you will also be calling seedMiddlewareStack to create the stack. kind regards

So we completely agree
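The lazy-seeding idea being discussed can be sketched in JavaScript (the original framework is PHP; class and method names here are illustrative, not the framework's actual API): the stack is built only the first time add() or run() needs it, so routes that never match and carry no middleware never pay for a stack.

```javascript
// Illustrative sketch of lazy middleware-stack creation. The stack is an
// array whose base entry (the "kernel") is the route handler itself; each
// added middleware wraps the current top of the stack.
class Route {
  constructor(handler) {
    this.handler = handler; // the route callable acts as the stack kernel
    this.stack = null;      // deliberately NOT seeded in the constructor
  }

  seedMiddlewareStack() {
    // seeding is idempotent and happens at most once per route
    if (this.stack === null) this.stack = [this.handler];
  }

  add(middleware) {
    // one of the two seeding points: adding a middleware
    this.seedMiddlewareStack();
    const next = this.stack[this.stack.length - 1];
    this.stack.push((req) => middleware(req, next));
  }

  run(req) {
    // the other seeding point: only the *matching* route ever runs
    this.seedMiddlewareStack();
    return this.stack[this.stack.length - 1](req);
  }
}

const route = new Route((req) => `handled:${req}`);
console.log(route.stack); // null: no stack exists until add() or run()
route.add((req, next) => `logged(${next(req)})`);
console.log(route.run('GET /')); // logged(handled:GET /)
```

This mirrors the agreement in the thread: with seeding moved out of the constructor, an application ends up with just two stacks in the common case, one for the app and one for the matching route.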
- Website: www.allaroundtheworld.fr/ - About: Freelance Perl/Testing/Agile consultant and trainer. See http://www.allaroundtheworld.fr/ for our services. If you have a problem with Perl, we will solve it for you. And don't forget to buy my book! http://www.amazon.com/Beginning-Perl-Curtis-Poe/dp/1118013840/ Commented on Fake Amazon Book Reviews Are Hurting My Book I'd love to write that book. Sadly, with a company to build, another company I'm creating, and the uncertain future of Perl 6, I think I'd probably lose a ton of money on that book relative to everything else I'm... Commented on DBIx::Class::Report - generate ad-hoc dbic classes from SQL kablamo, DBIx::Raw is very different. It's useful, but doesn't serve the needs I have now. In short, if you have a DBIx::Class based system, using DBIx::Raw doesn't help because you're still stuck with hashrefs, or assigning the data directly. DBIx::Class::Report... Posted Tiny Games with Perl 6 to Ovid Note: due to positive feedback on the post and at a client, I've released DBIx::Class::Report to the CPAN. You can read the original announcement Posted DBIx::Class::Report - generate ad-hoc dbic classes from SQL to Ovid Object-Relational Mappers (ORMs) are awesome if you think about your database a collection of objects. They're awful if you think about your database as a database. My primary client right now is ZipRecruiter and my work involves rewriting one of… Posted Fake Amazon Book Reviews Are Hurting My Book to Ovid Update: I really can't say as much as I would like (there's stuff I can't share), but my publisher had a face-to-face with an Amazon rep and internal action was taken. Amazon's investigation is apparently over. The internal position seems to be "we're making money, there are… Commented on CPAN PR-Challenge: February Report As an aside, until I struck out on my own for business, I never realized the insane amount of time it takes.... 
Commented on CPAN PR-Challenge: February Report My apologies for not answering. I've been so very, very busy, and so many people have been sending me email that I've been doing triage on it. I'm sorry you fell through the cracks. Your fix looked reasonable, by the... Posted Perl 6 for Mere Mortals - FOSDEM Video to Ovid My FOSDEM talk, Perl 6 for Mere Mortals, is now online: Commented on Avoid a Common Software Bug By Using Perl 6 Manuel, That's actually deliberate. Perl 6 does have a type inference engine in the optimizer. However, type inference can often generate obscure errors that are hard to understand. Larry Wall has said that until they have a better handle on... Posted Building a Thin Controller to Ovid I haven't updated about Veure in a while and though this post isn't really about Veure, per se, I'll use some code from it to illustrate a "thin controller." There's a lot of confusion abou… Posted Avoid a Common Software Bug By Using Perl 6 to Ovid Back in 2001 I was working for a company who had a client who was in a serious bind: the maker of their point of sale (POS) system suddenly jacked up the license fee to the point where our client would go out of business. They needed a new POS in 21 days. We grabbed an open source POS syst… Commented on git-refresh: Automatically rebase current branch onto master confuseAcat: I do merge back to master. And my branch names often look like this: feature-description-$ticket feature-description-$ticket-2 feature-description-$ticket-3 feature-description-$ticket-4 The trailing number means "part X". I merge tiny pieces back to master all the time, but there are enough... Posted git-refresh: Automatically rebase current branch onto master to Ovid Different people have different workflows with git. Mine is pretty simple.
- Branch from master - Hack, hack, hack, git stash; git checkout master; git pull --ff-only; git checkout $branch; git rebase master; git stash pop - Goto … Posted A little thing to love about Perl 6 (and COBOL) to Ovid By now you've heard the announcement that the Perl 6 team is cautiously hopeful that Perl 6.0.0 will be released this year. There are three things they need to finish: - The Great List Refactor (which should improve performance) - Native Shaped Arrays (tell Perl 6 that you only … Commented on Can you provide an x/y Point class in other languages? brian, your point about the default value is well taken. Thanks. For the Java comments, I'll leave that to Damian, but I'll add this: I've sat in those Java classes and it's a bloody nightmare with students either trying to... Posted Can you provide an x/y Point class in other languages? to Ovid Update: Thanks for all of the replies. However, I now need to block further replies due to the huge amount of spam this post is getting. I'm writing a talk for Fosdem entitled "Perl 6 -- A Dynamic Language for Mere Mortals." The talk… Commented on ZipRecruiter Wants You Mithaldu: I'm not in HR, so I can't say. I live in France and they've invited me to swing by the offices while I was in the US, but I haven't been there. I know of other devs who have... - Posted ZipRecruiter Wants You to Ovid Commented on Of course you can `requires` attributes! A cleaner way (in my opinion; yours may differ) is to use forward declarations. They promise a subroutine will be there, even if it doesn't exist at compile time. That means you can put your with statement wherever you like.... Commented on Using Role as Partial Classes Movement is a single responsibility, so it should be its own class. No. A movement is an action. It's an abstract thing which doesn't stand on its own, like a character or a space station. It has to be attached... Commented on Using Role as Partial Classes Thanks for your comments, Sid.
I saw some interesting things in the second part of your series of posts and I made some comments there about them. Next, you wrote "Movement" for example should be a behaviour that is re-usable.... Commented on Inheritance is Bad: Code Reuse Part II First, you should generally be consuming all of your roles in a single with statement: with qw(Destroyable PositionRW); By composing those separately as you have (two separate with statements), you lose the composition safety of roles and your code is reduced... - Posted Using Role as Partial Classes to Ovid Posted A small puzzle for you to Ovid This had me stumped for a bit, but I was quite pleased when I came up with a relatively simple solution. Given several arrays, each of which has elements which are a subset of allowed elements, and given that every allowed element appears at least once in each array, how do I rewrite all a… Commented on Stop Putting AUTO_INCREMENT IDs in URLs Ed, you're perfectly correct. I do have those access controls in place, with a fair amount of tests in place to ensure they don't get broken at some point in the future (though someone will find a way to cheat,... - Posted Stop Putting AUTO_INCREMENT IDs in URLs to Ovid - Posted Veure Update: Missions to Ovid Commented on Legal Issues in Game Software Creation Mithaldu, That's a huge gamble. Going to court still takes a lot of time and money. That's time you're not building your business and that's money that may run out before the judgment comes in. You still have to pay... Posted Legal Issues in Game Software Creation to Ovid Note: I am not a lawyer and the following should not be considered legal advice. Double-check everything and hire a lawyer. As I continue to work on Veure, I have the added fun of l… Posted Veure Update to Ovid Just in case you're curious, I'm still hacking on Veure, though the last month has kept me busy on a bunch of other things (our daughter just started school, so that's a big one!)
I've been building so much of the infrastructure that you might be surprised to realize that I've only just go… brian d foy commented on Can you provide an x/y Point class in other languages? I think Damian makes the mistake most people do when criticizing a language. They conflate the language itself with the stupid libraries and idiotic ecosystem. It's not the syntax and design of the language that makes it so and I've seen some really talented programmers make Java do some amazing things. They just go around all the other shit though. Consider that a lot of the criticism of Perl is CPAN. It's not what you can do with the language that's to blame for what other people did with it. But, remember mjd's Why I Like Jav… Belden commented on git-refresh: Automatically rebase current branch onto master Being able to specify a different base branch name than 'master' might be handy. Offer Kaye commented on Avoid a Common Software Bug By Using Perl 6 Very nice article, thank you very much Ovid! One question though, about this code: die unless $_ =~ /^\d+$/; One obvious bug is that it will accept octal numbers such as "040". Will the Perl6 "Int" type prevent such numbers? Alberto Simões commented on CPAN PR-Challenge: February Report Hi, Ovid. Sorry if my post feels like I am complaining. That is not the idea. Just marking this PR as done and going for the next one. Nevertheless, I am happy to maintain the PR until it gets good to be merged. So, just comment on it with something you would like to be added, and I will try to do it. - Aristotle commented on Fake Amazon Book Reviews Are Hurting My Book blogs.perl.org is a common blogging platform for the Perl community. Written in Perl and offering the modern features you’ve come to expect in blog platforms, the site is run by Dave Cross and Aaron Crane, with a design donated by Six Apart, Ltd.
Computer Coding with Minerva. "They loved it! So happy with how much the boys enjoyed it." - Katie, Coding Parent We provide outstanding coding tuition to children of all ages and abilities. A Minerva coding tutor is trained, DBS checked, and professionally vetted. Hiring one to teach your child to code will give them the best chance to develop new skills quickly and to become coding experts at a young age. Coding is taught by the hour at your home (if you are based in Greater London) or online via an interactive whiteboard, so your child can learn to code wherever you are based in the world. We teach Visual Basic, Java, C#, Perl, Ruby, Modding on Minecraft, Scratch, Python and more. Classes usually last one hour, and you will need a basic laptop (we can provide equipment if required). We can also run weekly intensive courses over half term and the summer holidays. What is coding? Coding is the instructions given to computers in the form of symbols and letters in order to perform a particular task. This includes the process by which we can create computer software, apps and websites. At its simplest, it can be incredibly straightforward and graspable to even the least technologically-inclined mind. At its most complicated, it can be used to write highly sophisticated, multi-layered programmes with many functions that respond to complex algorithms. Why is it important? It is virtually impossible at this point to run a successful company without at least one person who is competent at coding. It is already mandatory for a lot of professions, with requests rising exponentially in many sectors. Every successful business needs a website – and if the website doesn’t look slick, you can’t realistically hope to attract customers.
Whether you want to start a business, become a career coder (a field where demand currently far outstrips supply), or build a personal blog/portfolio online, coding is a vital skill, and one that is transferable to any field that uses computers (so…any field). Even if you don’t see yourself as a ‘techie’ person, you will still need to be able to deal with technology on a regular basis, and it’s infinitely better to be able to take control of the situation yourself, rather than always be asking that one indispensable techie friend for help… instead, you could BE that one indispensable techie friend. Why learn now? Coding is currently taught in some schools, but by no means a majority, although it tends to be those that are ahead of the curve. In many ways, it is as valuable as learning a foreign language, except this is a language that is spoken globally, and so knowing it can help take you wherever you may want to go. Like it or not, computer technology and coding is undeniably the language of the future, and those who don’t speak it are more likely to struggle and get left behind. In a few years, it is likely to be mandatory in schools (at least in the most ambitious schools) and demanded for many jobs: those who learn it now will be the best prepared for this future, and not playing catch up. How can it be taught? Because coding is a computer language, we have found that it can be very successfully taught online, as well as tutored in person. The benefits of online include flexible timing, some fantastic online teaching resources, and greater independence for the student. Thomas, one of Minerva’s expert coding tutors, says: ‘I have had students that responded really well to online tuition whereas others needed for me to be next to them in order to help them understand particular concepts.’ Interview with a Minerva Coding tutor. Hi Thomas, what exactly is coding, and what can it be used for? 
Simplified, coding is the instructions given to computers in the form of symbols and letters in order to perform a particular task. All of the electronic systems surrounding us are programmed to function using coding, so it could be said that coding is implicitly involved in our everyday lives! As a coding tutor, do you mainly teach older students, or younger ones too? It’s usually taught to teenagers and older students, when they already have a basic mathematical background. However, in the last few years a lot of parents – and some schools – have started getting their children to learn basic coding skills from ages 7 and above, as far as I am aware. I think that’s particularly useful since children at those ages learn much more easily than adults, since they are at an age where they simply absorb information. Myself, I teach coding to children from ages 7 to 17! Do you think coding is taught enough in schools? If technology is the future (and the present), could it not be as important as maths or English? Yes, I believe that coding should become an essential subject in schools. Whether we like it or not, technology is taking over in all of the sectors of industry and academia. Therefore, if children start learning how to think in terms of coding from a young age, things will be much easier when they have to use coding at university, or even at work. Do you think coding will be mandatory for a lot of professions in a few years’ time? It’s already mandatory in a lot of professions, to be fair, and requests for it are increasing exponentially in many sectors. Do you find it easier to teach coding online, or in person? Why? I teach both, although mostly in person; however, this depends on the student. I have had students that responded really well to online tuition whereas others needed me to be next to them in order to help them understand particular concepts. 5 Programming Languages to Help Kids Learn Coding.
There’s a huge number of different programming languages around that can be used to learn coding. However, many of these are fairly complicated, requiring tricky syntax or appearing to be overly mathematical. Something that intimidating could easily put a child off trying to learn coding, so we’ve collated some of the best programming languages to get kids started with, that can set them on the way to acquiring an invaluable lifelong skill. Some programming languages are so beautifully simple – requiring minimal or no syntax and typing – that kids won’t even realise they’re learning a new skill. These are programming languages that work intuitively and are relatively easy for kids to pick up – whether they want to build games, make music or create an interactive story! Scratch is one of the favourite introductory programming languages, as it is easy to grasp for beginners while also including enough options and complexity for more experienced programmers. It can be used to make art, music, games and interactive stories, and is currently popular in schools, being available free online and usable on any operating system. There’s even an active online community of users who share their work, and help other users develop. Ruby is a dynamic, open-source coding language that has proven self-explanatory for a lot of children, with its focus on simplicity and productivity. It has some of the easiest syntax of any programming language, and is brilliant for learning the fundamentals that will teach children how to write good script. It produces sturdy code, and was originally used for Twitter, making it one of the best examples of how coding can be used to create something fantastically popular and important. Unlike more advanced programming languages, Python is written in plain English, rarely requiring one to add comments. It can help kids to think like a programmer without needing to first learn the intricacies of coding syntax.
Python also has an extensive library of common coding functionalities, making it easy for users to turn a programming idea into something that can be interpreted by a computer. Hello, World! This is an ideal programming language for any children who love gaming, and can be convinced to turn that passion for PlayStation into something more productive. The basic version is free, and is a great way to introduce kids to the fundamentals of video-game design, through an epic adventure-based plot. Users can also publish and share their created games, but in the interest of online safety, the site closely monitors all published content, and there is no live chat feature. And parents can even set a time limit on how much a child can play the game in a day! Ideal. This language will teach you how to talk to robots, and is perfect for designing anything from desktop programmes to video games. From telling a machine what to do, to introducing conditional statements and creating functions, C++ can be taught to even young children and allow them to get to grips with the possibilities of coding. "Clarice is fab" - Tessa L, Coding Parent "It’s going very well with Andrew, and Esme has a lot of fun during her lessons." - Stephanie C, Coding Parent "My daughter Tatiana enjoys her classes with Betty very much. She says she learns more than in any other computer class she has taken. Betty is also a wonderful role model as a Hispanic woman in a STEM profession." - Gloria C, Coding Parent