Are you interested in shaping the future of AI-based energy management? Then come join our great team and make the world greener with us! Etalytics is an award-winning software provider of energy intelligence solutions that deliver energy efficiency, lower CO2 emissions and lower energy costs across multiple industries. With our software products, based on innovative IoT, data analytics and machine learning technologies, we offer solutions that help manufacturing plants, smart cities, energy providers and office buildings structure, analyze and optimize their energy systems. We at etalytics are currently looking for a Senior Backend Software Developer (m/f/d) for our office in Darmstadt, Germany. You will join our backend team, which builds our etaONE® platform for energy intelligence on a modern microservice tech stack. You will implement and maintain features from the REST API down to the persistence layer. We value clean and testable software design and are looking for candidates who share these values.
What you can expect:
- Design and development of backend features using modern frameworks and tools such as Spring Boot (Java/Kotlin), Hibernate, Git, Gradle, PostgreSQL and AMQP
- Involvement in the ongoing development of our cloud-native microservice architecture, based on technologies such as Spring, Docker, Kubernetes, CI/CD pipelines and AWS cloud services
- Technical and functional support in the development of new product and solution ideas, proactively contributing your own creative suggestions, solution approaches and technologies to the development process
- Maintaining high code-quality standards through test automation with JUnit and Mockito, test-driven development and static code analysis
- Close cooperation with other developers and specialist departments throughout the entire software lifecycle – from planning to development and rollout
- An active developer community where you can get involved, exchange ideas with colleagues and keep your knowledge up to date

You have experience building and maintaining large web applications; you know the Spring Boot framework, love talking about architecture and APIs, know your Git, and are a proficient coder in your own right. We work as a team, but we are looking for a coder who knows how to get things done.
- You have 3+ years of experience in software development and a Bachelor's degree or equivalent in Computer Science, Informatics, Physics, Mathematics or Engineering
- You have experience with automated software testing and continuous integration, and you can and will write meaningful tests alongside your implementation code
- You write high-quality code and want to build a maintainable, stable and well-tested system that you can deploy to production multiple times a day without worrying
- You have the ability and willingness to learn quickly and in a self-guided way
- You enjoy working with multiple people on the same codebase, know what that takes, and are open to sharing and improving our work together
- You want to take responsibility for your work and build sustainable solutions that stand the test of time
- Good written and spoken German and English skills round off your profile
- First experience with the deployment and operation of applications on cloud-native infrastructure (e.g. Kubernetes, AWS) would be a plus

Are you looking for variety instead of daily routine? Team spirit instead of rigid hierarchies? Do you want to work on software problems that make a real difference in society by helping to reduce CO2 emissions? Then you've come to the right place. We offer you exciting and responsible tasks, attractive career opportunities and perspectives:
- A permanent full-time contract with perspective at our office in Darmstadt (Germany)
- Flexible working hours and the option to work from home
- An open feedback culture, flat hierarchies and a motivated team
- An interesting field of activity with a modern tech stack
- Excellent development opportunities in a growing team
- A healthy work-life balance, diversity, a hands-on mentality and agile work

Our modern office is located in the HUB31 co-working space in Darmstadt near Frankfurt am Main, easily accessible with good connections to public transport, the highway and the airport.
You will be part of a highly motivated international team of specialists and will work in an environment that offers a wide range of individual development opportunities. Feel the startup mentality, with the chance to play table soccer, meet other entrepreneurs or enjoy free coffee and tea! Interested in joining our team? Contribute your know-how, make our team even stronger and help us make the world a greener place! Send us your CV (.pdf), expected salary, and earliest possible starting date. We're looking forward to hearing from you! Send your application to email@example.com.
Question: I have a figure (a scientific graph) which I scanned. Now I want to edit the text on the graph. Could you tell me how I can do that?

Answer: If you scanned the graph into a graphics program, the entire graph forms a single image, and you can edit it just like any other image. The simplest way to edit the text is to remove it from the image completely and then add it again using the text tool. For the purposes of this demonstration I will assume that you're editing the image in Paint Shop Pro, but other graphics programs work similarly.

The first step is to open the image in your graphics program. Text in an image is no different from any other part of the image as far as your graphics program is concerned, so the only way to edit the text is to remove it completely and then enter new text in its place. First, check that the background colour you have set matches the background behind the text you are going to remove. Then use the selection tool (the button that looks like a dashed square in most graphics programs) to select the area around the text to be removed. Once the area containing the text is selected, pressing the Delete key should replace the selected area with the background colour, and the text will disappear. Repeat this process for each piece of text that you want to replace.

Once all of the unwanted text has been removed from the image, you can use the text tool (the button with a letter on it in most graphics programs) to enter the replacement text. First make sure that the foreground colour is set to the colour you want to use for the text. Once you have selected the appropriate font and size, entered your text, and clicked OK, the text will appear on your image. If you place the mouse over the text and left-click, you will be able to drag the text to the exact location where you want it to appear.
Right-clicking will then fix the text in place. If you position the text wrongly and decide to move it, you can use the selection tool to select the area containing the text and then use the mouse to drag that area to the correct location. To place text at an angle, first place it somewhere on your image where it doesn't interfere with anything else (you may want to use the 'Enlarge Canvas' option in the 'View' menu to give yourself more space around your image first). Select the area around the text to be rotated, then select 'Rotate' from the 'Image' menu to choose the direction and angle of rotation. Now drag the rotated selection to where you want it, then select and delete the original. This article was written by Stephen Chapman, Felgall Pty Ltd.
#!/usr/bin/python
# Author: Peter Prettenhofer <peter.prettenhofer@gmail.com>
#
# License: BSD Style.
"""
The :mod:`bolt.model` module contains classes which represent parametric
models supported by Bolt.

Currently, the following models are supported:

:class:`bolt.model.LinearModel`: a linear model for binary
classification and regression.

:class:`bolt.model.GeneralizedLinearModel`: a linear model for
multi-class classification.
"""
__authors__ = ['"Peter Prettenhofer" <peter.prettenhofer@gmail.com>']

import numpy as np

from io import sparsedtype, densedtype, dense2sparse

try:
    from trainer.sgd import predict
except ImportError:
    def predict(x, w, b):
        return np.dot(x, w) + b


class LinearModel(object):
    r"""A linear model of the form
    :math:`z = \operatorname{sign}(\mathbf{w}^T \mathbf{x} + b)`.
    """

    def __init__(self, m, biasterm=False):
        """Create a linear model with an m-dimensional vector
        :math:`w = [0,..,0]` and `b = 0`.

        :arg m: The dimensionality of the classification problem
            (i.e. the number of features).
        :type m: positive integer
        :arg biasterm: Whether or not a bias term (aka offset or
            intercept) is incorporated.
        :type biasterm: True or False
        """
        if m <= 0:
            raise ValueError("Number of dimensions must be larger than 0.")
        self.m = m
        """The number of features."""
        self.w = np.zeros((m,), dtype=np.float64, order="C")
        """A vector of size `m` which parameterizes the model."""
        self.bias = 0.0
        """The value of the bias term."""
        self.biasterm = biasterm
        """Whether or not the biasterm is used."""

    def __call__(self, x, confidence=False):
        """Predicts the target value for the given example.

        :arg x: An instance in dense or sparse representation.
        :arg confidence: whether to output confidence scores.
        :returns: The class assignment and optionally a confidence score.
        """
        if x.dtype != sparsedtype:
            x = dense2sparse(x)
        p = predict(x, self.w, self.bias)
        if confidence:
            return np.sign(p), 1.0 / (1.0 + np.exp(-p))
        else:
            return np.sign(p)

    def predict(self, instances, confidence=False):
        r"""Evaluates :math:`y = sign(w^T \mathbf{x} + b)` for each
        instance x in `instances`. Optionally, gives a confidence score
        to each prediction if `confidence` is `True`.

        This method yields :meth:`LinearModel.__call__` for each
        instance in `instances`.

        :arg instances: a sequence of instances.
        :arg confidence: whether to output confidence scores.
        :returns: a generator over the class assignments and optionally
            a confidence score.
        """
        for x in instances:
            yield self.__call__(x, confidence)


class GeneralizedLinearModel(object):
    r"""A generalized linear model of the form
    :math:`z = \operatorname*{arg\,max}_y \mathbf{w}^T \Phi(\mathbf{x},y) + b_y`.
    """

    def __init__(self, m, k, biasterm=False):
        """Create a generalized linear model for classification
        problems with `k` classes.

        :arg m: The dimensionality of the input data (i.e., the number
            of features).
        :arg k: The number of classes.
        """
        if m <= 0:
            raise ValueError("Number of dimensions must be larger than 0.")
        if k <= 1:
            raise ValueError("Number of classes must be larger than 1 "
                             "(for 2 classes use `LinearModel`).")
        self.m = m
        """The number of features."""
        self.k = k
        """The number of classes."""
        self.W = np.zeros((k, m), dtype=np.float64, order="C")
        """A matrix which contains an `m`-dimensional weight vector for
        each class. Use `W[i]` to access the `i`-th weight vector."""
        self.biasterm = biasterm
        """Whether or not the bias term is used."""
        self.b = np.zeros((k,), dtype=np.float64, order="C")
        """A vector of bias terms."""

    def __call__(self, x, confidence=False):
        """Predicts the class for the instance `x`.

        Evaluates :math:`z = argmax_y w^T f(x,y) + b_y`.

        :arg confidence: whether to output confidence scores.
        :return: the class index of the predicted class and optionally
            a confidence value.
        """
        return self._predict(x, confidence)

    def predict(self, instances, confidence=False):
        """Predicts the class of each instance in `instances`.
        Optionally, gives a confidence score to each prediction if
        `confidence` is `True`.

        This method yields :meth:`GeneralizedLinearModel.__call__` for
        each instance in `instances`.

        :arg confidence: whether to output confidence scores.
        :arg instances: a sequence of instances.
        :return: a generator over the class assignments and optionally
            a confidence score.
        """
        for x in instances:
            yield self.__call__(x, confidence)

    def _predict(self, x, confidence=False):
        ps = np.array([predict(x, self.W[i], self.b[i])
                       for i in range(self.k)])
        c = np.argmax(ps)
        if confidence:
            return c, ps[c]
        else:
            return c

    def probdist(self, x):
        r"""The probability distribution of class assignment.

        Transforms the confidence scores into a probability via a logit
        function :math:`\exp{\mathbf{w}^T \mathbf{x} + b} / Z`.

        :return: a `k`-dimensional probability vector.
        """
        ps = np.array([np.exp(predict(x, self.W[i], self.b[i]))
                       for i in range(self.k)])
        Z = np.sum(ps)
        return ps / Z
Possibility of current flowing through this circuit

I am having some problems solving this circuit. I will share my ideas here; please correct the steps that are incorrect.

1) We emit current only from battery $V_1$ and let that current be $I$.
2) Now $I$ gets divided into $I_1$ and $I_2$ at point $A$, where $I_1$ flows through $AC$ and $I_2$ flows through $AB$.
3) At point $D$, $I_1$ further gets divided into $I_3$ and $I_4$, with $I_3$ going through $DB$ and $I_4$ going into the third loop.

Now I am quite confused. First of all, from step 3, the current returning to the battery is $I_2 + I_3 < I$, but we know that the same amount of current should return to the battery; here the returning amount is less than $I$. Secondly, the current $I_4$ which went into the third loop meets two junctions at $E$, so if $I_4$ divides further, then at some point we would see two currents of opposite direction colliding with each other, which cannot be possible. Where am I making the mistake? Apparently the calculations are really easy once we assign appropriate currents, but here I am struggling with distributing the currents.

Hint: You have three separate simple circuits!

Hint: No current flows from D to E. If current flowed from D to E, there would be no path for it to return. This violates Kirchhoff's current law...

I actually wanted to know where I am making mistakes due to misconceptions, rather than hints.

@madness: You have not realized that $I_4$ is zero! Why do you say $I_2 + I_3 < I$? That's really where you first got into trouble. Is it because $I_2 + I_3 + I_4 = I$? If so, your original statement is only true if $I_4 > 0$. But you haven't proven that. In fact, you will find it false: $I_4 = 0$.

The question is now closed, so there won't be any more posted answers beyond my correct answer and the other two. Unfortunately, trick questions are sometimes assigned as homework or are even given on exams.
I never did that when I taught my department's entry-level electronics class for first-year chemistry grad students and senior undergrads of various majors. It is basically a bit cruel, but some people do it anyway. Here is another trick question over at the Electrical Engineering Stack Exchange: https://electronics.stackexchange.com/q/84447/223146. Possibly because of a persistent distraction, I neglected to upvote your question. You certainly showed significant effort in dealing with it, so have an upvote. Thanks! Out of curiosity, I searched for "trick question" over at Electrical Engineering Stack Exchange and found this diabolical one: https://electronics.stackexchange.com/q/17644/223146. That is both really mean and, in a strange way, hard not to admire.

Scroll down to see the experimental verification of my answer. As in the hint I gave initially, this is three separate simple circuits. It is a trick question, such as might appear on a homework assignment or exam. The re-drawn diagram is:

There is no current flow in the ideal wire between points D and E: direct current cannot go both ways through that wire. So the wire between D and E does nothing at all and can be clipped, resulting in:

Points A and B are at the same potential due to the ideal wire connecting them. So shrink that wire to a point: Now pull that point apart: This is the same as the two previous diagrams: points A and B are still at the same potential. Now clip the new wire between A and B, since there is no current flow through it: Result: three simple loops with the obvious currents around them.

This is a trivially simple circuit to make and test, so I did. First, I built the OP's circuit, using a standard breadboard, three fresh 9 V batteries, five 1% precision metal film resistors of 10 kΩ each, and a DMM in voltmeter and ammeter modes, as necessary. The circuit is shown here: Sorry about the alligator clips. (I hate alligator clips.)
Here is a closer view with annotation: This is exactly as per the OP’s circuit. The individually measured resistances are shown at upper right. All three batteries measured 9.58 V. The measured currents were 0.46 mA, 0.95 mA and 0.47 mA, left to right. Now the next figure shows the final state as three separate simple loops: I re-did the current measurements and got the same values. These are shown in the figure. This is wrong: "There is no current flow in the ideal wire between points D and E: direct current cannot go both ways through that wire. So the wire between D and E does nothing at all." Sorry, I don't have the lifespan to build every possible configuration of the circuit in order to exhaustively verify a negative. That's why we have theory. This is a theoretical question anyway. Set $V_1 = 0$, so that the left-hand part of the circuit doesn't contribute. Let $V_2$ and $V_3$ have the same potential between the terminals, but with both of $V_2$'s terminals at a higher potential/voltage than both of $V_3$'s terminals. Edit: remove $V_1$ from circuit, rather than setting its potentials to $0$. Just for simplicity. If I am incorrect, then please explain the theoretical basis for that. Experiment is not helpful here. @EdV this is a good answer and you are correct. The downvotes are very unfortunate +1 on my end @Myridium the fact that there is no current in wire DE can be seen as follows: the current going down through R5 must be equal to the current going up through V3, and the current going up through V3 must be equal to the current going left through R4. Since the current going down through R5 is equal to the current left through R4, there is no current remaining to go through the wire DE. This answer is correct @Dale - This is not generally correct. You are assuming that the batteries maintain a constant total charge, in order to, in Ed V's ironic words, 'leap to Kirchoff's laws, etc [without thinking]'. 
I'm not a chemist, but I know that if the terminals of one battery are more electronegative than the other's, the electrons will flow there. That's what I said before about setting the potential of $V_2$ to be higher than that of $V_3$, which Ed V apparently ignored (he likes to experiment rather than stopping and thinking, to use his own words). What happens in practice is that the more electronegative battery terminals (lower voltage) quickly accumulate electrons, increasing their potential (higher voltage), so that the voltages balance out in such a way that no current flows between D and E in the steady state, assuming no oscillatory effects. Ed V would be able to observe this in experiment if he were more precise and thorough. Upon initial connection, batteries may have a very brief spike in current; I'm not sure whether that's visible on a consumer oscilloscope. It's totally fine to assume that we're talking about the steady state, once those transient effects have passed. But then you need to explain that you assume the total charge in the batteries is conserved, don't you? That's the theoretical assumption/explanation, as a basis for the answer, which is what I asked for.

You cannot experimentally prove a negative; this is an absurdity. I should be more precise: you can't experimentally prove the negation of an existence claim. You can show that a phenomenon happens, but not that it happens under all experimental parameters, because you can't exhaustively test everything (e.g. different battery chemistries, etc.). The theory behind these circuits is the culmination of vast experimental verification, and its domain of validity is known. Accepted theories, applied correctly within their domain of validity, are a concise encoding of the work of generations of experimenters. That is a lot more powerful than one home experiment. This is the reason people go to school (I assume the OP is a high school student) and learn such theories.
It's more efficient and productive than trying to reproduce the work of many generations of experimenters. And the OP is asking for help in applying this theory. I don't know what trick you're referring to here. The obvious way to solve this problem is to apply the theoretical assumption that the current into a battery equals the current out of it. There's nothing wrong with that, but you didn't state it in your answer! So you have not conveyed the information the OP needs to know about the theory! Oh well, I can see it is not productive responding to you. Hopefully these clarifying comments help another reader.

@Myridium said "You are assuming that the batteries maintain a constant total charge". Absolutely, I definitely assumed that. It is one of the three foundational assumptions of circuit theory. If you are not making that assumption, then you are not doing circuit theory.

I guess I'd have a couple of questions before trying to respond. When you state "We emit current only from battery V1", are you suggesting that batteries V2 and V3 are essentially dead? If so, they will have internal resistance, which will provide the only current path through that region of the circuit and which will be high enough to limit the current through those branches to a level that is effectively negligible. If all batteries are providing charge, the voltage of each (are they equal?) and the resistance of each resistor would determine the current flow.

Since this is a DC (direct current) circuit: apart from a transient spike in current when the circuit is first assembled, the batteries will more or less retain a constant total charge (i.e. they won't acquire an electrostatic charge). This is because any imbalances in the battery potentials quickly equalise: when electrons accumulate on a battery terminal due to current flow, the electrostatic charge reduces the voltage of that terminal, much like a capacitor.
For this reason, the voltages on the battery terminals quickly equilibrate in such a way that the total electric charge contained in the batteries remains constant (this probably happens within nanoseconds or less; I don't know). Therefore, you may assume that the current entering each battery is equal to the current that leaves it. Using the conservation of current (Kirchhoff's current law) at the connection points between wires, you will quickly find that the $DE$ current must be zero, which means the right-hand loop can be treated as a separate circuit from the rest. Likewise, you can show that the remaining two loops more or less separate into their own circuits. ($BA$ carries the total current from both loops, but otherwise the two loops don't affect each other.)
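Once the circuit is recognized as three independent loops, the currents follow from Ohm's law alone. A minimal sketch using the component values reported in the experimental answer above (9.58 V batteries, 10 kΩ resistors); the assignment of two series resistors to the outer loops and one to the middle loop is my own inference from the measured currents, not something stated explicitly:

```python
# Each independent loop obeys Ohm's law: I = V / R_total.
V = 9.58      # measured battery voltage, volts
R = 10e3      # each resistor, ohms

I_left = V / (2 * R)    # left loop: two 10 k resistors in series
I_mid = V / R           # middle loop: a single 10 k resistor
I_right = V / (2 * R)   # right loop: two 10 k resistors in series

# KCL at node D: the current arriving through the loop's one branch
# equals the current leaving through its other branch, so nothing is
# left over for the wire DE.
I_DE = I_right - I_right

print(I_left * 1e3, I_mid * 1e3, I_right * 1e3, I_DE)  # currents in mA
```

These come out near the measured 0.46 mA, 0.95 mA and 0.47 mA; the small differences are consistent with battery internal resistance, which the ideal-wire analysis ignores.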
Signify works by using exactly the same JSON format for data everywhere, which makes things easier and saves time on reformatting as data passes through each layer. Moreover, JSON's ubiquity throughout the Signify stack makes dealing with external APIs that much simpler: GET, manipulate, present, POST, and store, all with one format.

Java programming is tricky for students who lack experience with object-oriented programming, and thus many college students seek programming assignment help with it.

Groovy also supports the Java variation with colons: for (char c : text), in which the type of the variable is mandatory.

while loop

Access the free samples of various programming assignments drafted by our experts and get a clear idea of the kind of work you can receive by availing our top-notch services. I have received the program and thank you so much for your help; I really appreciate it. I will certainly recommend this to my friends; it is easy to use and economical.

In this module you will set things up so you can write Python programs. Not all activities in this module are required for this class, so please read the "Using Python in this Class" material for details.

Once you have a good understanding of data structures, control flow, and the features of your chosen programming language, you can try to tackle something more complex.

It is quite common in dynamic languages for code such as the above example not to throw any error. How can this be? In Java, this would typically fail at compile time. However, in Groovy, it will not fail at compile time, and when coded correctly, may not even fail at runtime.
Get helpful assistance with using various software applications and cross-platform environments in these Java language courses. Our top-quality service can help you solve Java-oriented problems. Java homework help is a great way to improve your knowledge, understand specific topics and dive into the world of Java without difficulties. Our expert writers will not only help you with the assignment but also provide suggestions.

So, my recommendations for solving this kind of Java assignment: be sure to practice the binary file input-output exercise, then start solving your Java homework. I am sure you will be able to resolve your difficulty.

UnsupportedOperationException if the arguments of the call match one of the overloaded methods of the interface/class.

C++ is a language known for its rich library and its many patterns and tools, which is why a lot of students find it quite challenging to master without additional help.

For solving a Java event-driven assignment, you must think logically. Before solving your event-driven assignment problem, you have to think twice and plan everything, such as where the flow of the program will go on each event.
This is the last option available when it comes to robotizing your voice or performing sound manipulation on other sound sources... There are some things worth saying about this, but first let's take a look at the video made by Mr Nightradio.

In the above picture, a synthesizer module is set up with a sawtooth as carrier/modulator. For those not familiar with how a vocoder works: you also have to program notes for the carrier to get the sound source/voice to be affected melodically. There is room for a lot of experimentation with both carrier and modulator. It works best with voices, but anything can serve as both carrier and modulator; as the carrier, though, it seems better to use a synthesizer with a sawtooth.

There is an earlier vocoder made as a MetaModule for SunVox by Gilzad, which you can download under the "download" sub-header at the top of the website. It is less effective, but in his defense, he did not have access to the fantastic new Sound2Ctl module that Mr Nightradio uses extensively in his vocoder. Still, it can be interesting to download both, see how the MetaModules were built, and compare them. Mr Nightradio has also provided us with the SunVox file from before it was packed into a MetaModule, which gives you another opportunity to go in, modify it to your heart's content and expand on the concept.

If you are really interested in vocoding, I would suggest two more things that can be interesting if you have something to record into. One would be to use Caustic's vocoder module, as it is easier to get good results with, but you would have to go outside the self-contained workflow of one device and record out of one into, for example, the sampler of SunVox on another device, preferably with a cable so the recording doesn't get too noisy, or into a multitrack recorder.
To use Caustic's vocoder you would not have to buy it, as you could use the demo version, which cannot save; but in a case like this you would not need to save a song file anyway. The coolest thing you could do, though, is to make a talkbox! It is not exactly the same as a vocoder, but to my ears it has a much more interesting and organic sound, albeit less robotic. I have posted a video before that shows you how to make one very easily and cheaply, but I will post it again now. Remember that in this video he is using a synthesizer, but you can use any sound source, for example an Android device... There are a lot of videos with different designs and build instructions, but this one shows clearly how it works. Another video I like features a guy using his guitar as a sound source.
Apache Cassandra is an open-source, NoSQL, distributed, peer-to-peer, massively scalable data store. Mumbo-jumbo? Let's take a closer look.

1. NoSQL – A fancy term for "post-relational" database management systems. NoSQL databases are very different from relational databases like MySQL or Oracle. By and large, NoSQL databases are magnificent key/value stores (think HashMaps in Java or dictionaries in Python) designed to store very large quantities of data and scale horizontally. Unlike relational databases, they generally do not offer full ACID guarantees; schemas are flexible, rows are non-homogeneous, and millions of columns are supported.
2. Distributed – Runs on, and stores data across, multiple interconnected nodes.
3. Peer-to-peer – Cassandra forgoes the traditional master-slave architecture. In a Cassandra cluster, all nodes are equal: client applications can write to any node in the cluster and, likewise, read from any node.
4. Scalable – Designed for horizontal scaling; adding additional nodes is a breeze.
5. Fault-tolerant & highly available – What's so great about a peer-to-peer architecture? There is no single point of failure. In fact, Cassandra was designed on the assumption that hardware failures are common and do occur.

Cassandra was designed and developed by Facebook from the ground up for their Inbox search feature. The architecture is based on Google's BigTable and Amazon's Dynamo. Cassandra, in simplest terms, is a distributed key-value data store. It stores data on multiple nodes so it can scale as the load increases. Data replication (storing multiple copies of the same data on different nodes) ensures that data is not lost when a node fails.

Where to obtain it: to obtain a copy of Cassandra, please visit http://cassandra.apache.org/.
If you are looking to deploy Cassandra in an enterprise-grade production environment and need paid support, I would recommend these guys: http://www.datastax.com/

Use Case #1: Using Cassandra for Real-time Counters

FastBook is a global social media company with 500 million active users on any given day. The application servers which serve user requests are spread across 4 geographically separated data centers. Requests are evenly load-balanced amongst the 4 data centers, so an incoming request from a user could get routed to any data center at random. You are asked to create two features:

1. Each user can be "poked" by other users a maximum of 1,000 times a day.
2. Allow users to send a maximum of 3,000 messages per day.

The solution to both requirements is simple: store, associated with each user account, a counter for each daily limit. The application can update these counters as usage occurs, and once the limits are reached, deny any further usage for the day. Then at the end of the day (or at some point), you reset these counters to 0, since they are daily limits. Sounds simple? It is, at least in theory. Here is the catch: you are doing this for a system that has 500+ million users sending thousands of events a second.

One possible solution is to have a centralized MySQL server which stores the two counters in a table. But a single-server solution is not feasible, since the service operates in multiple data centers: taking a trip across data centers for every request that reads or updates counters would be too costly. You can set up replication and sharding/partitioning in MySQL, but that is not good enough for real-time queries and will affect performance. Replication and sharding add an extra layer of complexity on top of MySQL and make things much harder to manage. I have nothing against MySQL or other fine RDBMS solutions, but they are not suited to this problem.

Another solution is to use a real-time key-value data store such as Redis or Membase. Redis is a phenomenal system.
It has the best speed out of all the NoSQL solutions I have evaluated. However, Redis is not suited for multi-data-center replication, where an event can be randomly handled at any data center. This is the type of problem which Cassandra claims to be perfectly suited for. Recall: Cassandra is a real-time, read-anywhere, write-anywhere (p2p, distributed) data store. Cassandra can be set up on nodes in each data center, and then as an event arrives, any Cassandra node can be updated and that update is seen by the other nodes in other data centers automatically. This sounds very simple. The programming paradigm is indeed very simple. However, the architecture of a Cassandra deployment needs to be well thought out. Distributed systems have to ensure that all nodes see the same data at the same time and that the system tolerates partial failures. Imagine the following two scenarios that may hurt these goals: - Links connecting data centers are so slow that updates take a long time to reach other data centers, so some data centers continue to see stale values for a while - Two critical racks go offline This is a well-studied problem in Computer Science. Cassandra is designed to solve these problems, albeit with proper tuning. If you are interested, you can read about the CAP theorem by Eric Brewer, which essentially talks about the inherent difficulties in distributed computing systems like Cassandra. This is it for this post. In this post, we looked at the definition of Cassandra and some of its key features. We also considered a use case and a hypothetical situation where we can apply Cassandra. In the next post, I will talk about the architecture of Cassandra and its inner workings.
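The application-side logic for Use Case #1 is easy to sketch. The class below is an in-memory stand-in of my own devising (the names `DailyCounters` and `try_use` are illustrative, not from any real system): in a real deployment each increment would instead be a write to a Cassandra counter column that any node in any data center can accept.

```python
import datetime

# Daily per-user limits from the FastBook example.
LIMITS = {"pokes": 1000, "messages": 3000}

class DailyCounters:
    """In-memory stand-in for the per-user daily counters.

    In the real system these increments would go to Cassandra so that
    any node in any data center can apply them; this sketch only shows
    the application-side check / increment / daily-reset logic.
    """

    def __init__(self, today=None):
        self._counts = {}                      # (user, action) -> count
        self._day = today or datetime.date.today()

    def _maybe_reset(self, today):
        # Daily limits: wipe all counters when the date rolls over.
        if today != self._day:
            self._counts.clear()
            self._day = today

    def try_use(self, user, action, today=None):
        """Count one event; return False once the daily limit is hit."""
        self._maybe_reset(today or datetime.date.today())
        key = (user, action)
        if self._counts.get(key, 0) >= LIMITS[action]:
            return False                       # deny further usage today
        self._counts[key] = self._counts.get(key, 0) + 1
        return True
```

Note that in a multi-data-center deployment the increment itself must be atomic on the data store; a read-modify-write in the application, as done here for illustration, would let two data centers race past the limit.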
OPCFW_CODE
Achieving a basic proficiency in a new skill requires an investment of conscious cognitive effort, i.e., thinking a lot. Students are constantly in the process of achieving basic proficiency in new skills and conclude that thinking is required for all intellectual activities (an incorrect assumption also held by many teachers). To get past the conscious thinking stage, lots of time has to be spent performing the skill. Repetition provides the opportunity for performance via conscious thought to migrate to subconscious performance (driving being a common example). Real-time performance requires fluency, that is, being able to handle technical details without having to think about them. Thinking (i.e., conscious thought) is slow and requires lots of effort. It is best held in reserve for the important stuff. To paraphrase Alfred Whitehead: "Software development advances by extending the number of important operations which we can perform without thinking about them." Somebody who has spent 100 hours or so (an hour or two a week for a year) learning to code has the same level of fluency as I have in communicating in a foreign language using a phrase book, or Google Translate. After 1,000 hours of programming a person should be a very fluent coder. It is said that becoming an expert requires 10,000 hours of practice. The kind of practice involved is deliberate practice, not unconscious use of what is already known. Becoming an expert requires learning lots of new things, not constantly applying what is already known. Old habits have to be broken and new ones acquired. Programming is not Zen, although it contains elements that are. Why would a developer want to create a program without conscious thought (that is what scripts are for)? I used to run 'advanced' programming courses for professional developers with 2+ years in industry. In many ways the material was a rerun of what they had learned at the start of their programming career. 
The difference was that this time around they could ignore the mechanics of writing code, now an ingrained habit, and concentrate on the higher-level stuff. The course had to have 'advanced' in its title because experienced developers would never sign up for an introductory course. Most of my one-on-one tutoring effort went on talking people out of bad habits they had picked up over time. Perhaps live coding can be done with a Zen mind, which is probably why I don't regard it as real programming (which I think requires some conscious thought). Talking about details and high-level material in the same breath is what beginners do, because they have not yet learned to tell the two apart and be able to ignore one of them. Like life, programs are mostly built from sequences of commonly occurring patterns. Our minds have evolved to subconsciously detect and take advantage of patterns. Programmers don't know what the common source code patterns are, any more than a native speaker can specify the syntax rules of the language they speak.
OPCFW_CODE
Survey: Machine Learning/Data Science Propel Python Past Java - By David Ramel - April 11, 2019 A big new developer survey shows that Python has finally passed Java in the programming language popularity wars, propelled by its heavy use in machine learning and data science projects. "Python has reached 8.2 million active developers and has taken the No. 2 spot, surpassing Java in terms of popularity," says the brand-new "Developer Economics State of the Developer Nation 16th Edition" report in which SlashData Ltd. polled more than 19,000 developers in 165 countries. A previous edition of the survey last fall predicted that Python would overtake Java, stating: "Python has reached 7 million active developers and is closing in on Java in terms of popularity, thanks to 62 percent of machine learning developers and data scientists who now use Python." The new report sees that "closing in" prediction coming true, noting that Python "is the second-fastest growing language community in absolute terms with 2.2 million net new Python developers in 2018. The rise of machine learning is a clear factor in its popularity. A whopping 69 percent of machine learning developers and data scientists now use Python (compared to 24 percent of them using R)." And falling to No. 3 wasn't all of the bad news in the report for the venerable Java language, always grouped near the top of similar popularity rankings, as SlashData also noted its comparatively slower growth. "Java (7.6 million active developers), C# (6.7 million), and C/C++ (6.3 million) are fairly close together in terms of community size and are certainly well established languages. However, all three are now growing at a slower rate than the general developer population. While they are not exactly stagnating, they are no longer the first languages that (new) developers look to." 
Here's a graphic from last fall's report that lists the fastest-growing languages at the time: Besides programming language rankings, the report by SlashData -- an analyst firm focusing on the developer economy -- highlighted five other main themes: ethics in AI; the gender wars; emerging technology; cloud native; and an agile software world. Key highlights of the report as presented by SlashData include: - Developers agree that they should not only ask for user consent to collect data and follow security and data protection laws but that they should also go above and beyond legal requirements - 72 percent of developers told us so. - Blockchain and cryptocurrencies have been hyped as having great potential to be disruptive but for developers, they appear to have reached a plateau. We found that just 3 percent have adopted projects in either of these fields. - More than half (58 percent) of developers say they follow a project management methodology that can be classified as agile. Scrum is the leading agile framework, used by 37 percent of developers. - The once ruling waterfall methodology is currently used by only 15 percent of developers. - Half of developers who teach AI, ML or data science have favorable views towards the ability of AI to behave in a moral and human-friendly way. - Only around 30 percent of ML developers who develop algorithms for search engines or customer support management believe AI should not be used to replace human jobs as opposed to around 50 percent of those who develop stock market predictions or image classification/object recognition algorithms. - Of the developers using orchestration tools or management platforms, 57 percent are working on DevOps. This compares to just 17 percent of the general developer population. - The technology industry is still dominated by men. Women developers responding to our survey were outnumbered by men by a ratio of 1 to 10 (9 percent women and 91 percent men). 
This suggests a global population of 1.7 million women developers and 17 million men. The methodology behind the report -- conducted November 2018 - February 2019 -- is detailed in the PDF and more generally here. David Ramel is an editor and writer for Converge360.
OPCFW_CODE
We have a number of roaming and multi-desktop users who need to be logged into their account from multiple machines, somewhat like how Skype works. The conflict policy seems to suggest that this can be done: XMPP allows multiple logins to the same user account by assigning a unique "resource name" to each connection. If a connection requests a resource name that is already in use, the server must decide how to handle the conflict. The options on this page allow you to determine if the server always kicks off existing connections, never kicks off existing connections, or sets the number of login attempts that should be rejected before kicking off an existing connection. The last option allows users to receive an error when logging in that allows them to request a different resource name. But there is no option to allow for simultaneous logins. Only to kick/ignore the authentication request. Are simultaneous logins supported? The text on that page is wrong/confusing. If you set it to never kick it should allow multiple logins (I believe). I'd say it is a little confusing for sure… - If there is a resource conflict, immediately kick the other resource. - If there is a resource conflict, don't allow the new resource to log in. - If there is a resource conflict, report an error one time but don't kick the existing connection. - Specify the number of login attempts allowed before conflicting resources are kicked. You must specify a number greater than one. The '' sure doesn't seem like the right one, I'll give it a shot. Now on the second login it displays "Unable to login due to account already signed in." In the client you need a unique resource. For instance, the default resource Spark uses is "Spark". This can be changed by clicking the advanced button prior to login. The server does not set the resource. Shouldn't the Spark client be smart enough to suggest changing the resource name? Or should the resource name always be random, or use the desktop's name? 
Either way thank you that did the trick. I’ve set the Conflict Policy to ‘Never Kick’ and people will need to be trained on the resource conflict thing. Changing the resource name definitely works for allowing multiple logins, but it certainly isn’t a user-friendly solution. I like the way MSN Messenger handles this situation: when it detects a duplicate login, it gives you the option to force any other logins out, or to keep all logins active. This would be a useful addition to the new Spark/Openfire versions. I have two desktops and two portables that I use with Spark. This kind of setup should be more intuitive. hmm. Another problem with the handling of multiple logins is that messages do not seem to get delivered to all instances of a user that is logged in … a reply to my message is only sent to the PC from which I sent the original message.
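For anyone confused by what a "resource" actually is: a full JID has the form user@domain/resource, and the part after the slash is all that distinguishes two simultaneous logins to the same account. A small sketch (plain Python, not tied to any XMPP library; the function name is my own):

```python
def split_jid(jid):
    """Split a full JID into (localpart, domain, resource).

    'alice@example.com/Spark' -> ('alice', 'example.com', 'Spark')

    The resource is what must be unique per connection; with the
    conflict policy set to never kick, two clients logging in with
    different resources can stay online at the same time.
    """
    bare, _, resource = jid.partition("/")
    local, _, domain = bare.partition("@")
    return local, domain, resource or None
```

A client could avoid conflicts automatically by deriving the resource from the machine, e.g. `"Spark-" + socket.gethostname()`, which is one answer to the "should it use the desktop's name?" question above.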
OPCFW_CODE
The following article summarizes a multi-part series I'm writing on standing up an open source Security Incident Response Platform. This platform allows for log retention and analysis, alert generation, IoC enrichment, and case management. As the identity and authentication source of most enterprises, Active Directory is the backbone of local and federated authentication. Coupled with the prevalence of cloud computing, organizations are depending more and more on federated authentication and expanding their Active Directory into the cloud. Rclone brands themselves as "rsync for cloud storage", and with its versatility and the number of providers it supports, I'm inclined to believe them. This is a list of all the User Rights Assignments available on a Windows network along with a brief description and default values. The definitions are taken from the Microsoft documentation. In May 2018 the Australian Cyber Security Centre published an updated list of recommended configuration settings for hardening Windows 10 version 1709. This 50-page guide provides easily readable recommendations along with explanations of why the settings should be changed. While investigating the demise of NetBIOS and how to fully remove it from a network, I came across an interesting observation. A client recently called in with an interesting problem. When users would create a new folder on a network share, four folders would appear instead of one. Even more interesting is that this was only happening for those users when connecting to the share from a Windows 10 workstation. The same user accessing the share from Windows 7 would only create a single folder. 
Policy Analyzer is one of the tools included as part of the Microsoft Security Compliance Toolkit, which Microsoft describes as "a set of tools that allows enterprise security administrators to download, analyze, test, edit, and store Microsoft-recommended security configuration baselines for Windows and other Microsoft products." BloodHound uses graph theory to reveal the hidden and often unintended relationships within an Active Directory environment. In short, it analyzes group membership, GPOs, permissions, and currently logged-on sessions to visually display links between objects in order to identify misconfigurations and easy paths to compromise. This tool is not for analyzing the permissions on a single server, but rather for identifying the path of least resistance to gaining elevated Domain permissions. In preparation for an upcoming post, I recently dove into my notes on installing the Prometheus monitoring server. My last time setting up Prometheus was on an Ubuntu server, and the repository version was at least the same major revision as the current release. This time I'm installing on Debian 9, and currently the latest Prometheus version is 2.3.2 while the Debian repository is offering 1.5.2. That's unacceptable. While the sid repository does contain 2.3.2, I decided to take the opportunity to deploy in a cleaner (and less permanent) manner through Docker. Prometheus is well supported in Docker environments, and it gave me an opportunity to brush up on my container-deploying skills.
OPCFW_CODE
Dear all Refer to http://guides.rails.info/association_basics.html Why :physician and :patient some with (s), and some not...? It actually refer to the class name or table name? ** really confuse >_<. Can someone explain it? Is it a convention of rail? Thank you. class Physician < ActiveRecord::Base has_many :appointments has_many :patients, :through => :appointments end class Appointment < ActiveRecord::Base belongs_to :physician belongs_to :patient end class Patient < ActiveRecord::Base has_many :appointments has_many :physicians, :through => :appointments end Many thanks Valentino on 2009-02-11 18:19 on 2009-02-11 18:45 It is the convention in Rails A Physician (singular) has many appointments (plural) An Appointment (singular) belongs to one patient (singular) 2009/2/11 Valentino L. <firstname.lastname@example.org> on 2009-02-11 20:45 Valentino L. wrote: > Dear all > > Refer to http://guides.rails.info/association_basics.html > > Why :physician and :patient some with (s), and some not...? It actually > refer to the class name or table name? ** really confuse >_<. Can > someone explain it? Is it a convention of rail? Thank you. My guess is that English is not your native language. This might be why you are confused by the Rails conventions, which follows English singular/plural conventions. Here's the scoop. I hope you can follow along: First let's look at a Rails model object. The model class is like a prototype (or template) representing a single row in a database table. This is why the model class name is singular. Example: class Physician < ActiveRecord::Base ... end The database table that stores each physician contains a collection of physicians, which is why the table name is plural. Rails associations can represent either one or many model objects. 
Associations described as either has_one or belongs_to represent one object and therefore use the singular form. Examples: class Appointment < ActiveRecord::Base has_one :physician # This is one Physician object (singular) end class Appointment < ActiveRecord::Base belongs_to :physician # This is one Physician object (singular) end Note: Notice that has_one is used for one side of a one-to-one association. The other side would use belongs_to. Example: class Unicycle < ActiveRecord::Base has_one :wheel end class Wheel < ActiveRecord::Base belongs_to :unicycle end Associations described as has_many refer to a collection (array) of objects and therefore use the plural form. Examples: class Physician < ActiveRecord::Base has_many :appointments # This is an array of Appointment objects (plural) end I hope this makes things a little clearer to you. on 2009-02-12 03:22 It depends whether there is more than one or not. A class is the blueprint of an instance object. Thus a Physician class describes one object, of class Physician. has_many means one physician has a collection of some other kind of object. belongs_to means a physician can be part of a collection of physicians for another class of object, and is therefore the other 'side', if you will. Blog: http://random8.zenunit.com/ Learn rails: http://sensei.zenunit.com/
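The singular/plural mapping the thread describes is mechanical enough to sketch. This toy function is my own (in Python rather than Ruby) and is far simpler than Rails' real inflector, which knows about irregular plurals like person/people; it only covers the regular cases from the examples above:

```python
def table_name(model_class_name):
    """Map a singular model class name to its plural table name.

    Physician -> physicians, Appointment -> appointments.
    Rails' actual inflector handles many more cases; this toy
    covers only regular English plurals.
    """
    name = model_class_name.lower()
    if name.endswith("y") and name[-2] not in "aeiou":
        return name[:-1] + "ies"   # e.g. Category -> categories
    return name + "s"
```

The same rule runs in reverse when Rails turns a has_many :physicians declaration back into the Physician class name.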
OPCFW_CODE
....comes to an end. But before I go there - first my apologies for the lack of posts in January. As a Twitter friend has been reminding me (daily, Maria!), I already blew my New Year's resolution to post weekly. Hey, I do have an excuse: I was moving cities again! (This time to San Francisco - and here I'll stay...well, at least for a while.) Now, back to the end of the Rocket Zune. As everyone who reads this little diatribe knows, despite all the jeers and being the butt of all jokes on this topic, I'm a Zune fan. What's more, I enjoy the Zune Marketplace a great deal - it's a pleasure to use, gives me access to millions of DRM-free MP3 files, and has a great interface for dealing with podcasts. Also, for as much maligning as "Welcome to the Social" has taken, the Zune is - well - extremely social. The built-in social networking aspect of the Zune actually works well. No, I've never "squirted" (ew), but I have taken advantage of the LastFM-esque aspects of the Zune Marketplace. Discovered a lot of good tunes that way. So, if it's working for me, why stop now? My current Zune is a Zune 80GB model - works great, updated to version 3.0 of ZM without a hitch, and all the cool new features came along for the ride. However, my music collection has grown, as has my appetite for video-on-the-go, all of which has pushed me to upgrade to a higher-capacity model: the Zune 120. So, when it came down to another $250 outlay, I had to think carefully... First, there was the bad news from Zuneland this past quarter: Zune revenue declined by a frightening 54%. You might be tempted to blame that on the ailing world economy, until you realize that Apple's iPod sales increased 3% during the same time frame. (I haven't sat down to work out the math, but I bet the numbers come close to balancing out.) People have jumped ship - or, rather, not gotten on board the ship - in record numbers. As a WSJ editorial states, the Zune's market share is now flirting with 0%. 
I have a theory about the decline, BTW: it corresponded with the release of the Zune Marketplace 3.0, and the corresponding firmware upgrade, at the end of Q3 '08. Unlike Apple or any other media players on the market, Microsoft did not force you to buy a new Zune. All Zunes could be upgraded with the new software, and worked perfectly within the range of their older hardware limitations. (The equalizer software didn't work on the first-gen Zunes, for instance, because they had no hardware to support it.) Everything worked: wireless syncing, OTA buys from the Zune Marketplace, clicking on FM songs to purchase... all of it. And that may have been the problem... By respecting their current user base and applying the backward-compatibility ethos which, like it or not, worked as a strategy for PCs, Microsoft may have shot itself in the foot. Who would spend another $250 on a new Zune if you didn't need increased storage capacity and you could get all the cool new features for free? Turns out: no one. At any rate, even without the sales figure decline, I probably would have made the same call: the weight of the overwhelming market share of the iPod was taking its toll: my cars have iPod ports, not Zune ports, for instance...and getting something as simple as an armband for the gym was problematic. (As it turns out, the armbands for the iPhones work perfectly with the Zunes...who says we all can't get along?) So, with a $250 upgrade to make, I set the Zune aside (I won't sell it; I will keep it in a nice little shrine) and headed over to the Apple store to pick up a 120GB 6th-generation iPod. (The iPod touch stalled out at 32GB? I crap bigger than 32GB!) I sat down at my laptop, cleaned up my music collection, transferred my podcast subscriptions over to iTunes 8.x, synced it and fired it up. There it was: my shiny new iPod looking all... well, iPod-ish. After a year of absence, it's depressingly the same. 
Sure, there's Cover Flow and the sync icon is now orange (ooooo!), but other than that the system is basically exactly the same. No WiFi, no stereo Bluetooth, no FM radio... no real changes of any kind. (The damn font still looks like it came from the first-generation 64K Macintoshes from the '80s.) Moving from the Zune interface and feature set back to the iPod is, well, a step backwards in look-and-feel and features. ...and then there is iTunes. The "music management" system, and front end to the iTunes store, still looks like it was written by a first-year college engineering student as a final project. Same old interface. Oh, sorry, it has "Cover Flow" too...right. (Do you really use Cover Flow to find albums, people? Really? I doubt it.) It also has "Genius" now, which doesn't seem to be using the information from the Music Genome Project, like Pandora does, to get its relationships between songs. As best as I can tell, it does a simple stochastic match between what you've got in your library and what other people have in their libraries to determine what songs you have that possibly sound like other songs you have. (What's a good playlist that sounds like "Dani California"? Well, here's the intersection of the songs you have in your collection with the songs other people have in playlists containing "Dani California." Genius.) The final affront to my logic centers? iTunes is on version 8, and it still can't tell that you've put new music into a watched directory. Moving from the Zune Marketplace to iTunes is like trading in the Porsche for a Volkswagen - sure, they are both German cars, but...come on. Seriously? I'm not the only one who thinks so - there have been a lot of articles about ZM lately, such as David Chartier's excellent piece in Ars Technica last week. (David: you almost had me reversing my decision.) So, market forces win (remember when market competition was a good thing?) and I turn my back on the Zune to move back in with my old girlfriend, Apple. 
She has a new dress on, and pretty shoes - but I suspect she still can't dance. Everyone seems to think she's just awesome, though, and she's kinda the only one at the party, so I'll give her one more chance. ...hmmm...wait, who's the iRiver girl over there by the bar...?
OPCFW_CODE
Why traditional scheduling never works the way it should. There are plenty of different ways to approach scheduling: some project managers just let chaos reign and hope that everyone gets their work done on time, whilst others like to carefully plan out everyone's day in a scarily complex document which even takes toilet breaks into account. Although these are two extremes, you'll probably know from personal experience that PMs tend to favour a softened version of the latter method, and in theory it makes sense. You have a fixed amount of time, and a fixed amount of space. Make the time fit the space and 'Hey Presto!' the work is done, and we all get to go home on time to watch box sets. Right…? Why doesn't project management work this way? It turns out that project managers still don't have the ability to see into the future (what a shocker). Does someone need more time because their computer unexpectedly broke? OK, we need more time. Hmmm, project B looks like it's ahead of schedule, so I can pull some time from project B and put it into project A. Great, project A is back on track! But wait a sec, why hasn't project B finished? What do you mean there's a better way of doing it? OK, well maybe if we get some time back from project A… And so on and so forth. What you end up with is project managers having nervous breakdowns in the corner of the office and employees feeling under pressure to deliver results in response to estimates that were given 2 months ago. Basically, everyone spends more time planning when they're going to do the work rather than actually doing the work, which inevitably suffers as a result. The first problem? In the real world, the work that the teams were doing bore little if any resemblance to the schedule. So the project managers would continue to tweak the schedule to bring it back into line. In its worst excesses, people started to change the schedule retrospectively so that it reflected what had happened rather than what was planned. 
Utter madness. The second problem? We were using the schedule as a communications tool. The ability to add notes to a block of time is a brilliant feature. As a PM I could add little notes to other PMs for when they were doing a bit of schedule fiddling. Hey, we even used acronyms to show what could and couldn't be fiddled with. A whole new scheduling dialect. Unfortunately we started to rely on it, and before we knew it, our digital dialect had replaced proper briefing. We assumed that everyone had read our notes, and expected them to understand them. Why we made the switch. Our project managers controlled our schedule, and it became obvious that they were playing the schedule shuffle a little too frequently, horse-trading people's time like it was potatoes. To schedule like this in a digital production environment is insane. There is no way that you can take account of all of the things that can affect a system. It's like predicting the weather. If you want to work on the edge of technology you are (or should be) constantly dealing with the unknown. So if we can accurately predict exactly how many hours a task will take, something isn't right. We wanted to make it harder to move things around. Changes to a plan should be an exception rather than the norm. Our thinking was that by making the schedule physical rather than digital, it should help to change behaviour. So we went cold turkey. We took it off the computer, out of the screen, and put it onto the wall. Our (rather large) Lego schedule. We decided to stick our very own massive plastic schedule on the central wall of our office, for everyone to see. That way, if someone is making changes, they're doing it in full view. This gives teams the ability to decide who needs to do what, and by when it needs to be done. This shift to a bottom-up scheduling method, as opposed to the traditional top-down approach, means that we are able to focus on the work itself, allowing us to get the job done better and faster.
OPCFW_CODE
Searching Excel for specific patterns. Pivots don't work. I have a comically large Excel spreadsheet and I need to find specific data sets based on a pattern. I have no idea where to start with this. To simplify, each line is a transaction that lists a Person (P), a date (T) and an Item that can be X, Y, A, B or C. What I would like to do is search that data for the following: If person P, on date T (+/- 7 days), received items X and Y and at least one of items A, B and/or C, display that person with each date and item underneath them. A pointer to a good starting place would be appreciated because I feel like I am in over my head here. The easiest method would be to use conditional formatting: assign your criteria a fill of any color, then use "Filter" to filter only by that color. Assuming that each item has its own column that can be individually checked (sorry, you did not specify so I am not sure if this is your case), then conditional formatting would be pretty simple. Let's say your columns are as follows: A. =P(erson) B. =T [Date] C. =X [Mandatory item #1] D. =Y [Mandatory item #2] E. =A [Optional item #1] F. =B [Optional item #2] G. =C [Optional item #3] And let's assume that you type an "X" under each of your item columns for all that the person has for that date... Your conditional formatting would be: =AND( $A1=$H$1, ABS($B1 - IF($I$1<>"", $I$1, TODAY())) <= 7, $C1="X", $D1="X", OR($E1="X", $F1="X", $G1="X") ) So you can leave a blank cell that would be used to type in a person's name, let's just say H1, and a cell for the date you'd like, I1 (if I1 is left blank, the formula falls back to today's date). You can then type anyone's name into H1 and it will automatically highlight all of the rows that meet the criteria you specified. Once conditional formatting has highlighted your criteria, you can then go to Ribbon > Data > Filter, then Filter By Color, and there is your list. But again, you were not 100% specific so I am not exactly sure if this is what you were looking for. 
So just remember to make H1 your cell with your name, and I1 your cell with the date you want to check +/- 7 days against. Sorry I wasn't more specific. The columns are Date, Customer and Item. The Item column can contain only one item. If a customer makes multiple purchases in one day, they will each appear on a separate line with only the Item column being different between them. It's quite alright. Let me know if this will work or how I can adjust it. Basically something like 01/01/2001 Smith, Bob Socket Wrench Upon further research, it appears that I may have to convert this to Access and write some SQL queries to get what I am looking for.... :-/ Also, I need to learn how to use the formatting in the forums better.
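Since the data turned out to be one item per row (Date, Customer, Item), the Access/SQL idea can also be prototyped in pandas before committing to a conversion. This is a sketch under my own assumptions about the column names; `matching_customers` is a hypothetical helper, not anything built in:

```python
import pandas as pd

def matching_customers(df, window_days=7):
    """Return customers who, within a +/- window_days span around one
    of their purchase dates, bought both mandatory items X and Y plus
    at least one of the optional items A, B, or C.

    Expects columns 'Date', 'Customer', 'Item', one item per row,
    as described in the thread.
    """
    df = df.copy()
    df["Date"] = pd.to_datetime(df["Date"])
    window = pd.Timedelta(days=window_days)
    hits = []
    for customer, grp in df.groupby("Customer"):
        for anchor in grp["Date"]:
            # All of this customer's rows within +/- window of the anchor date.
            in_window = grp[(grp["Date"] >= anchor - window) &
                            (grp["Date"] <= anchor + window)]
            items = set(in_window["Item"])
            if {"X", "Y"} <= items and items & {"A", "B", "C"}:
                hits.append(customer)
                break                 # one matching window per customer is enough
    return hits
```

The O(rows²) scan per customer is fine for a one-off analysis; a real SQL version would do the same with a self-join on the date window.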
STACK_EXCHANGE
Typescript https://github.com/aurelia/validation/pull/122 changes the way aurelia-validation is compiled by concatenating all files, and refactoring is needed in order to get this working. This PR also concatenates all validation files, but does so only to generate the d.ts file. The concatenated file is never outputted. The generated d.ts file is copied to all dist directories (commonjs, amd etc). A few notes: index.js has to be excluded from the concatenation because it was pulling a few imports into the generated d.ts which caused it to be invalid, so the configure method in index.js can't be typed using this approach. I also don't know what .pipe(tools.sortFiles()) does, but it throws an error when enabled. Everything still works when this is disabled though. https://github.com/aurelia/validation/commit/7579adf14bd0c65e721013fcf44b0749c236cf35#diff-40e62be8220b9aeab02f5a664980bd73R21 Error is here: https://gist.github.com/JeroenVinke/58a108b43eb110e154bd It's not possible to change the outdir of babel-dts-generator, so the gulp task is a bit more complex because of this. Currently the d.ts file is outputted to dist/aurelia-validation/aurelia-validation.d.ts. This directory is removed at the end of the build task, after the definition file has been copied to all dist directories. This is the output if you apply the changes in this PR: output. And this is the generated definition: https://gist.github.com/JeroenVinke/729d9c12f8de7b8396f2 @janvanderhaegen this PR is basically a quick fix to get typescript support in, so the typescript developers can start using them, and so that the community is able to add the types in the source. This way we won't have to wait for the refactoring of the resources as suggested in https://github.com/aurelia/validation/issues/129. @JeroenVinke @nomack84 We really need you all to have a unified front on the .d.ts files - can you get together to decide which PR is best to merge? 
If we need an arbitrator let me know and I'm sure @cmichaelgraham can help decide which approach is best, but let's get the type definitions in place ASAP. Also, there were a ton of issues with linting and such that had to be resolved; is it feasible to re-base these changes on top of those? If not, and you all can come up with an agreed-upon method, I will rebase those changes on these. @PWKad Thanks for dedicating some time to this. The problem with https://github.com/aurelia/validation/pull/122 is that it needs changes to the way translation files are loaded and used (discussed in https://github.com/aurelia/validation/issues/125). This PR is not perfect (e.g. stuff in index.js can't be typed), but it doesn't need more changes than you see in this PR. If we can get the necessary changes in to allow for a single-file export during build quickly, I vote for https://github.com/aurelia/validation/pull/122, and if not we can use the approach in this PR in the meantime. @JeroenVinke Understood - I'm in the middle of cleaning up all of the lint issues in the repo that needed to be addressed to allow the TS build tools to run anyway. Unfortunately this means it's breaking this PR's ability to merge. In the interest of time, the two best solutions I can think of are to wait until those changes are applied and then have this PR re-based on top (which is going to be a pain unfortunately due to formatting/spacing issues requiring lots of changes), or I can try to merge it afterwards. I'm not entirely familiar with TypeScript, so if you think you would have the time to do so after that (I hope to finish the linting issues today), we would greatly appreciate it :) Let me know when I can create a new PR. @JeroenVinke the linting work is finished, should be stable now.
GITHUB_ARCHIVE
Integrate a dataset over specified direction(s). $ pgkyl integrate --help Usage: pgkyl integr [OPTIONS] AXIS Integrate data over a specified axis or axes Options: -u, --use TEXT Specify the tag to integrate -t, --tag TEXT Optional tag for the resulting array -h, --help Show this message and exit. Consider the gyrokinetic simulation of an ion acoustic wave as an example. It outputs the integrated particle density over time, which we can plot as follows: pgkyl gk-ionSound-1x2v-p1_ion_intM0.bp pl We can see from the values on the y-axis that the total number of particles is 12.566. The number of particles should be conserved, to machine precision. We can check this another way by integrating the particle density along \(x\) (the 0th dimension) at the end of the simulation with pgkyl gk-ionSound-1x2v-p1_ion_GkM0_1.bp interp integr 0 print where we have abbreviated integr, and we use the print command to print the result of the integral to screen. The output of this command is simply 12.566. The integrate command can also be used to integrate higher-dimensional datasets in one or more directions. We could take the ion distribution function and integrate it along the \(v_\parallel\) and \(\mu\) directions (1st and 2nd dimensions, respectively) with pgkyl gk-ionSound-1x2v-p1_ion_0.bp gk-ionSound-1x2v-p1_ion_GkM0_0.bp interp \ activate -i 0 integr 1,2 ev -l 'integrate 1,2' 'f 6.283185 *' \ activate -i1,2 pl -f0 -x 'x' -y 'Number density, $n$' In this command we: - First load the ion distribution function (*_ion_0.bp) and its number density (*_ion_GkM0_0.bp) at \(t=0\). - Integrate the distribution function over velocity space with activate -i 0 integr 1,2. - Multiply this integral by \(2\pi B_0/m_i\) (\(B_0=m_i=1\) here) with ev -l 'integrate 1,2' 'f 6.283185 *'. - Activate the number density and integrated distribution function data sets and plot them with activate -i1,2 pl -f0. 
and this should give approximately the same number density as the GkM0 diagnostic outputted by the simulation, as shown below. Another useful application of the integrate command is to integrate, or average, over time (although note that the ev command has an avg operation that may make this easier). Usually this requires collecting multiple frames into a single dataset with the collect command, and then integrating over the 0th dimension (time). So if we increase the tEnd of the gyrokinetic ion sound wave simulation to 10 and the number of frames to 50, we could plot the electrostatic potential as a function of time and position with pgkyl "gk-ionSound-1x2v-p1_phi_[0-9]*.bp" interp collect pl -x 'time' -y 'x' --clabel '$\phi$' We can integrate this potential in time and plot it on top of the initial potential with pgkyl gk-ionSound-1x2v-p1_phi_0.bp -l '$t=0$' -t phi0 \ "gk-ionSound-1x2v-p1_phi_[0-9]*.bp" -t phis interp collect -u phis -t phiC \ integrate -u phiC -t phiInt 0 ev -l 'Time average' -t phiAvg 'phiInt 10. /' \ activate -t phi0,phiAvg pl -f0 -x '$x$' -y '$\phi$' This command uses tags to select which dataset to perform an operation on. The end result is the plot below, showing that the time-averaged potential has a lower amplitude due to the collisionless Landau damping of the wave.
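Conceptually, what integr does for cell-average data is a cell-size-weighted sum over the chosen axes. The following Python sketch illustrates the idea only; the grid, spacings, and data here are made up and have nothing to do with pgkyl's actual file format or internals:

```python
# Hypothetical cell-average data f on a small (x, v) grid with uniform spacing.
nx, nv = 4, 3
dx, dv = 0.5, 0.25
f = [[1.0 for _ in range(nv)] for _ in range(nx)]  # constant f = 1 everywhere

# "integr 1": integrate over the velocity axis, leaving a function of x alone.
n = [sum(row) * dv for row in f]

# Integrating the result over x as well ("integr 0") yields a scalar total,
# which for constant f = 1 is just the phase-space volume: (nx*dx) * (nv*dv).
total = sum(n) * dx
print(n, total)
```

Chaining integr 1,2 on a 1x2v dataset works the same way, with one cell-size weight per integrated axis.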
OPCFW_CODE
RenderReturn: a controller Exception Easily end a controller action without risking the double render error Skip to tl;dr Render and/or redirect were called multiple times in this action. Please note that you may only call render OR redirect, and at most once per action. Also note that neither redirect nor render terminate execution of the action, so if you want to exit an action after redirecting, you need to do something like "redirect_to(...) and return". If you’ve been around the block with Rails, you’ve probably seen this error. The error message is sufficiently explanatory: don’t call render or redirect_to multiple times. Often this happens because you tried to extract a redirect_to call into some private method and failed to completely exit out of the controller action. Here, in this message, Rails itself suggests violating the Ruby Style Guide with redirect_to(...) and return. Now, I violate this aspect of the style guide myself: I will never give up constructs such as x = find_the_thing or raise(...). But here it’s a little more sinful: it implicitly relies on the return value of redirect_to being truthy. The docs don’t even specify what the return value of redirect_to should be. This is a bad pattern to follow. Moreover, this doesn’t even solve the nested-method problem I alluded to above: class MyController < ActionController::Base def show check_for_bad_stuff # ... render json: the_data end private def check_for_bad_stuff redirect_to :error_page if bad_condition? log "Checked the thing" end end redirect_to and return won’t work here for obvious reasons: the return simply returns from the private method back to the action. So you could bubble the return up by checking the result of check_for_bad_stuff and returning from the action. But this requires check_for_bad_stuff to return different values. Ok, let’s bite: def check_for_bad_stuff if bad_condition? log "Nope." redirect_to :error_page return false else log "It checks out." return true end end Except now it’s redirect_to(...) or return. Fine. 
It all makes sense, and is easy to follow when it’s the only thing you’re looking at, but this simple concept of “redirect and get out of here” has already taken up far more of our attention than it deserves. Other similarly unconvincing blogs give a short list of working but frankly similarly bad (or, worse, a touch cryptic) solutions. So here’s mine. This, to me, screams out as a use case for raise, coupled with rescue_from. That is, “get out of here completely, no matter how buried down the stack you are.” Ultimately, that’s what you want after some of these redirect_tos. I’ll briefly mention that throw :halt works too, as I learned from a comment in the above linked blog, but that’s not as of yet well-documented Rails behavior (read: not guaranteed to work), and frankly I dislike the potential for naming conflicts with throw. Exceptions work perfectly well in this case. class RenderReturnException < StandardError; end class ApplicationController < ActionController::Base rescue_from RenderReturnException, with: :render_return def render_return # Do nothing. end end def check_for_bad_stuff if bad_condition? log "Nope." redirect_to :error_page raise RenderReturnException end log "It checks out." end RenderReturnException will immediately halt the action, and the rescue_from catches (to use Java and distinctly non-Ruby terminology) that exception in a method that does nothing, so you don’t get any other error handling such as Rails’ standard 500 error. There are no implicit return values to keep track of, no boolean flipping, no hidden bugs just because you extracted code into a different method. You have only a single convention added to your code toolbelt to learn: RenderReturnException is a safe exception to raise to stop a controller action. Which is great, because now that’s a tool you can use throughout your controllers. 
If you wanted to, you could make it even more explicitly named using a method called, say, halt_controller_action, which you can call by name anywhere in your controllers: class ApplicationController < ActionController::Base rescue_from RenderReturnException, with: :render_return def render_return # Do nothing. end def halt_controller_action raise RenderReturnException end end Wasn’t that simple?
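Stripped of Rails, the whole pattern is just an exception unwinding the stack to a handler that swallows it. Here is a minimal plain-Ruby sketch of that control flow (the method bodies are illustrative stand-ins, not the article's controller code):

```ruby
class RenderReturnException < StandardError; end

def check_for_bad_stuff(bad)
  # deep in the call stack: bail out of the whole "action", not just this method
  raise RenderReturnException if bad
end

def show(bad)
  check_for_bad_stuff(bad)
  :rendered               # only reached when nothing raised
rescue RenderReturnException
  :halted                 # the do-nothing handler, like rescue_from
end

raise "expected :rendered" unless show(false) == :rendered
raise "expected :halted" unless show(true) == :halted
puts "ok"
```

No boolean bookkeeping is needed: however deeply nested the raise is, control lands directly in the rescue.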
OPCFW_CODE
#!/usr/bin/env ruby
require 'highline'
require 'inifile'
require 'rbconfig'
require 'rest-client'

def config
  @config ||= IniFile.load(config_file)
end

def main_config
  config['twtxt'] || {}
end

def tweetfile
  main_config['twtfile']
end

def timeline_limit
  main_config['limit_timeline'] || 20
end

def timeline_sort
  main_config['sorting'] || 'descending'
end

def my_info
  { main_config['nick'] => main_config['twturl'] }
end

def config_file
  File.expand_path(config_dir + "config")
end

def config_dir
  # macosx: ~/Library/Application Support/twtxt
  # linux: ~/.config/twtxt
  # windows: who cares?
  macos? ? '~/Library/Application Support/twtxt/' : '~/.config/twtxt/'
end

def macos?
  RbConfig::CONFIG['host_os'] =~ /darwin/
end

# timelines to follow as a hash: nick = twtxt_url
def following
  config['following']
end

# add nick = url to [following] config
def follow(nick, url)
  config['following'][nick] = url
  config.save
end

# remove nick from [following] config
def unfollow(nick)
  config['following'].delete(nick)
  config.save
end

# tweet: post a tweet
def tweet(text, at = current_timestamp)
  return unless check_length(text.length)
  File.open(tweetfile, 'a') do |f|
    f.puts "#{at}\t#{text}"
  end
  post_tweet_hook
end

# require confirmation if text is longer than 140 characters
def check_length(length)
  return true if length <= 140
  msg = "tweet is longer than 140 characters (#{length}). are you sure? (y/N)"
  ans = HighLine.new.ask(msg)
  ans.downcase == 'y'
end

def current_timestamp
  Time.new.strftime('%FT%T%z')
end

# command to run after posting a tweet
def post_tweet_hook
  exec main_config['post_tweet_hook'] if main_config['post_tweet_hook']
end

# timeline: show list of tweets
def timeline
  tweets = []
  following.merge(my_info).each do |nick, url|
    tweets.concat timeline_for_user(nick, url)
  end
  # sort first, then keep the most recent tweets; last(n) is nil-safe,
  # unlike tweets[-n, n] when there are fewer than n tweets
  tweets = tweets.sort_by { |h| h[:date] }.last(timeline_limit)
  (timeline_sort == 'descending') ? tweets.reverse : tweets
end

def timeline_for_user(nick, url)
  RestClient.get(url).split("\n").map do |line|
    parts = line.split("\t")
    { from: nick, date: parts[0], text: parts[1] }
  end
end

# quickstart: wizard to create initial config
def quickstart
  if File.exist?(config_file)
    puts "config file already exists: #{config_file}"
    return
  end
  cli = HighLine.new
  nick = cli.ask("Username:")
  file = cli.ask("Full local path to twtxt file:")
  url = cli.ask("URL where twtxt will be published:")
  File.open(config_file, 'w') do |f|
    f.puts "[twtxt]"
    f.puts "nick = #{nick}"
    f.puts "twturl = #{url}"
    f.puts "twtfile = #{file}"
    f.puts "#check_following = True"
    f.puts "#use_pager = False"
    f.puts "#limit_timeline = 20"
    f.puts "#sorting = descending"
    f.puts "#post_tweet_hook = \"scp tw.txt bob@example.com:~/public_html/twtxt.txt\""
  end
  puts "example config written to #{config_file}"
end

def usage
  puts <<"USAGE"
usage: tweetext command [args]

a ruby reimplementation of twtxt

commands:
  follow [nick] [twturl]  follow a new user
  following               list users you are following
  quickstart              generate a basic configuration
  timeline                show your timeline
  tweet [text] (date)     post a tweet (optional: date to use instead of now)
  unfollow [nick]         unfollow a user
USAGE
end

# command line options
if ARGV[0] == 'follow' && ARGV[1] && ARGV[2]
  follow ARGV[1], ARGV[2]
elsif ARGV[0] == 'following'
  following.sort.each do |nick, url|
    puts "#{nick} @ #{url}"
  end
elsif ARGV[0] == 'quickstart'
  quickstart
elsif ARGV[0] == 'timeline'
  timeline.each do |tweet|
    puts "#{tweet[:from]} (#{tweet[:date]}):"
    puts tweet[:text]
    puts
  end
elsif ARGV[0] == 'tweet' && ARGV[1]
  # only pass the date through when one was given, so the default applies
  ARGV[2] ? tweet(ARGV[1], ARGV[2]) : tweet(ARGV[1])
elsif ARGV[0] == 'unfollow' && ARGV[1]
  unfollow ARGV[1]
else
  usage
end
STACK_EDU
We cut off some squares from the bottom row (rank) of a 2k x 2k board. We should prove that we can cover the board with 2 x 1 dominoes if and only if the numbers of cut black and white squares are equal. As far as I know, proving the 'only if' part is easy, because a domino always covers one white square and one black square. The full board has an even number (2k*2k) of squares, half black and half white, so it is obvious that we must have cut off as many black squares as white ones. (If we had cut off more black squares, there would be more white squares left, so we couldn't cover the board.) So I proved that it is a necessity. Am I right? But how can we prove the 'if' part? I don't see why it is always possible. I would really appreciate any help. I think I would try induction on k. Going from k to k+1, take the 2k+2 by 2k+2 board and trim off a border 2 squares wide along the top and right side. In so doing you have trimmed off the rightmost two squares along the bottom edge. These squares have opposite colors, so either both were included in the original board or both were cut off in the original board. See if you can work this around to showing that the 2k+2 by 2k+2 board can be covered by dominoes. [Edit] I guess it's clear from the above that I disagree with Tonio; I think you have to demonstrate that a covering with dominoes exists. [/edit] In fact, the main problem here is, I believe, that people have to know what a chess-like board is and what domino pieces are. I think this algorithm successfully covers the board. I am not providing a detailed analysis of each step, assuming that readers will be able to work those out themselves: We will represent the white and black squares by W and B respectively. The squares which we cut off will be represented by E (empty). Assume the chessboard's last row is of the form: WBWBWB... 1) We traverse the last row once from the left, and cover the sequences WB or BW of unremoved squares trivially by horizontal dominoes. 
2) We again traverse the last row from the left, until we encounter the first uncovered square. Until then, for every pair of removed/covered squares WB, we cover the whole column above them with horizontal dominoes stacked upon each other. Clearly the only time at which this step will stop is when we have EB at the last row. 3) We create a partial red pyramid of height two as in figure 1. 4) Until we encounter any more uncovered squares, we proceed by extending the frontier of the covered block and pyramid by the horizontal green and blue dominoes respectively. 5) If we encounter an uncovered black square next, we cover as shown in figure 5, to increase the height of the pyramid by two. 6) If we encounter an uncovered white square next, first we cover as in figure 3, then cover the portion of the penultimate row stretching from the black to the white square with horizontal dominoes (this is possible because the length concerned is even), and then we cover the bottom two uncovered squares of each column to the right with vertical dominoes (coloured yellow in figure 6). This will bring up the whole floor of the board and decrease the effective height of the pyramid by two. 7) If the number of uncovered white squares encountered becomes equal to the number of uncovered black squares encountered, then we use the very next column to the right to create a block of even breadth as in figure 4. This can easily be covered with horizontal dominoes. 8) If we have not encountered the last uncovered square yet, we go back to step 2. P.S. If this is written too badly to understand, let me know and I will write it afresh.
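For small boards, the "if and only if" claim can be sanity-checked by brute force. This Python sketch is my own addition, not part of the thread: it enumerates every way of cutting squares from the bottom row of a 4x4 board and checks that a domino tiling exists exactly when the cut black and white counts agree:

```python
from itertools import combinations

def can_tile(rows, cols, removed):
    """Backtracking search: can the board minus `removed` be tiled by 2x1 dominoes?"""
    cells = frozenset((r, c) for r in range(rows) for c in range(cols)) - removed
    def solve(cells):
        if not cells:
            return True
        r, c = min(cells)  # the first uncovered cell must be covered somehow
        for other in ((r, c + 1), (r + 1, c)):
            if other in cells and solve(cells - {(r, c), other}):
                return True
        return False
    return solve(cells)

n = 4                                   # a 2k x 2k board with k = 2
bottom = [(n - 1, c) for c in range(n)]
for size in range(n + 1):
    for cut in combinations(bottom, size):
        blacks = sum((r + c) % 2 for r, c in cut)
        balanced = (2 * blacks == len(cut))     # equal black and white cuts
        assert can_tile(n, n, set(cut)) == balanced
print("iff verified for the 4x4 board")
```

The backtracking always tries to cover the lexicographically first free cell, so each branch point is forced, which keeps the search small at this board size.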
OPCFW_CODE
Collations and Case Sensitivity Text processing in databases can be complex, and requires more user attention than one would suspect. For one thing, databases vary considerably in how they handle text; for example, while some databases are case-sensitive by default (e.g. Sqlite, PostgreSQL), others are case-insensitive (SQL Server, MySQL). In addition, because of index usage, case-sensitivity and similar aspects can have a far-reaching impact on query performance: while it may be tempting to use string.ToLower to force a case-insensitive comparison in a case-sensitive database, doing so may prevent your application from using indexes. This page details how to configure case sensitivity, or more generally, collations, and how to do so in an efficient way without compromising query performance. Introduction to collations A fundamental concept in text processing is the collation, which is a set of rules determining how text values are ordered and compared for equality. For example, while a case-insensitive collation disregards differences between upper- and lower-case letters for the purposes of equality comparison, a case-sensitive collation does not. However, since case-sensitivity is culture-sensitive (e.g. i and I represent different letters in Turkish), there exist multiple case-insensitive collations, each with its own set of rules. The scope of collations also extends beyond case-sensitivity, to other aspects of character data; in German, for example, it is sometimes (but not always) desirable to treat ä and ae as identical. Finally, collations also define how text values are ordered: while German places ä alongside a, Swedish places it at the end of the alphabet. All text operations in a database use a collation - whether explicitly or implicitly - to determine how the operation compares and orders strings. The actual list of available collations and their naming schemes is database-specific; consult the section below for links to relevant documentation pages of various databases. 
Fortunately, databases do generally allow a default collation to be defined at the database or column level, and allow explicitly specifying which collation should be used for specific operations in a query. In most database systems, a default collation is defined at the database level; unless overridden, that collation implicitly applies to all text operations occurring within that database. The database collation is typically set at database creation time (via the CREATE DATABASE DDL statement), and if not specified, defaults to some server-level value determined at setup time. For example, the default server-level collation in SQL Server for the "English (United States)" machine locale is SQL_Latin1_General_CP1_CI_AS, which is a case-insensitive, accent-sensitive collation. Although database systems usually do permit altering the collation of an existing database, doing so can lead to complications; it is recommended to pick a collation before database creation. When using EF Core migrations to manage your database schema, the following in your model's OnModelCreating method configures a SQL Server database to use a case-sensitive collation: modelBuilder.UseCollation("SQL_Latin1_General_CP1_CS_AS"); Collations can also be defined on text columns, overriding the database default. This can be useful if certain columns need to be case-insensitive, while the rest of the database needs to be case-sensitive. When using EF Core migrations to manage your database schema, the following configures the column for the Name property to be case-insensitive in a database that is otherwise configured to be case-sensitive: modelBuilder.Entity<Customer>().Property(c => c.Name) .UseCollation("SQL_Latin1_General_CP1_CI_AS"); Explicit collation in a query In some cases, the same column needs to be queried using different collations by different queries. For example, one query may need to perform a case-sensitive comparison on a column, while another may need to perform a case-insensitive comparison on the same column. 
This can be accomplished by explicitly specifying a collation within the query itself: var customers = context.Customers .Where(c => EF.Functions.Collate(c.Name, "SQL_Latin1_General_CP1_CS_AS") == "John") .ToList(); This generates a COLLATE clause in the SQL query, which applies a case-sensitive collation regardless of the collation defined at the column or database level: SELECT [c].[Id], [c].[Name] FROM [Customers] AS [c] WHERE [c].[Name] COLLATE SQL_Latin1_General_CP1_CS_AS = N'John' Explicit collations and indexes Indexes are one of the most important factors in database performance - a query that runs efficiently with an index can grind to a halt without that index. Indexes implicitly inherit the collation of their column; this means that all queries on the column are automatically eligible to use indexes defined on that column - provided that the query doesn't specify a different collation. Specifying an explicit collation in a query will generally prevent that query from using an index defined on that column, since the collations would no longer match; it is therefore recommended to exercise caution when using this feature. It is always preferable to define the collation at the column (or database) level, allowing all queries to implicitly use that collation and benefit from any index. Note that some databases allow the collation to be defined when creating an index (e.g. PostgreSQL, Sqlite). This allows multiple indexes to be defined on the same column, speeding up operations with different collations (e.g. both case-sensitive and case-insensitive comparisons). Consult your database provider's documentation for more details. Always inspect the query plans of your queries, and make sure the proper indexes are being used in performance-critical queries executing over large amounts of data. Overriding case-sensitivity in a query via EF.Functions.Collate (or by calling string.ToLower) can have a very significant impact on your application's performance. 
Translation of built-in .NET string operations In .NET, string equality is case-sensitive by default: s1 == s2 performs an ordinal comparison that requires the strings to be identical. Because the default collation of databases varies, and because it is desirable for simple equality to use indexes, EF Core makes no attempt to translate simple equality to a database case-sensitive operation: C# equality is translated directly to SQL equality, which may or may not be case-sensitive, depending on the specific database in use and its collation configuration. In addition, .NET provides overloads of string.Equals accepting a StringComparison enum, which allows specifying case-sensitivity and a culture for the comparison. By design, EF Core refrains from translating these overloads to SQL, and attempting to use them will result in an exception. For one thing, EF Core does not know which case-sensitive or case-insensitive collation should be used. More importantly, applying a collation would in most cases prevent index usage, significantly impacting performance for a very basic and commonly-used .NET construct. To force a query to use case-sensitive or case-insensitive comparison, specify a collation explicitly via EF.Functions.Collate as detailed above. - SQL Server documentation on collations. - Microsoft.Data.Sqlite documentation on collations. - PostgreSQL documentation on collations. - MySQL documentation on collations. - .NET Data Community Standup session, introducing collations and exploring perf and indexing aspects.
OPCFW_CODE
Get your own custom made, responsive, visually appealing website with a full user account system built in! I've been working with HTML, CSS, JS, and PHP for a very long time, and I feel fairly familiar with all of them. I've created everything you see here on this website. If you'd like to see my web development style, look no further than this site itself! Something that I think sets me apart is my knowledge of PHP. Instead of relying on third parties, I build all services and back-ends for V0LT myself. This includes the account system I've made, the privacy-respecting analytics system, the user preferences system, the instant messaging system, and much more! Again, if you'd like to learn more about these, don't hesitate to explore the site! For $25, I'll build you an expandable site similar to this one. I'll implement an account system that can be added on to later by either another developer, or myself. This system allows you and your users to sign up for and use their own personal accounts. You as the website owner can give certain users access to certain pages, or allow users to customize their experience with a settings system. Below is a list of what I'll include in a website for $25. I'll implement a secure account system that uses a PHP-managed database to securely store user information. I leave usernames in plain text, but passwords are hashed, so your users' information won't be leaked in the event of a data breach. I'll develop both a light and dark mode for your website. You can choose which theme you'd like to show your users, or even implement a system to allow users to choose their preferred theme for themselves. I'll fully comment all the code I write so any developer, including yourself, can make changes to the site without confusion. This should make it easier to add your own features that piggy-back off of the account system. 
I'll make sure your website will adapt to any screen size, so your site will always look good, regardless of whether your users are viewing it on a desktop or mobile device. This webpage itself is an example! If you're on a desktop, try resizing the window to see how the elements move around and adapt in real time. Whenever I design a site, I try to make sure everything looks clean and modern. I use simplistic fonts, smooth gradients, and only place images where they make sense. I do my best to implement slideshow systems where possible so the user isn't overwhelmed by images, but can still easily view them all. You can see an example of this system on the Cruze 6 page. On my personal website, I almost never implement third party services by default. This ensures the user's privacy is respected, and that third parties can't use tracking software. This also ensures that aspects of the website won't be hindered if a 3rd party service goes down. I'll make sure the same is true for any website I build for another person. However, if you'd prefer to have a third party service installed, I'd be happy to do so. While I personally can't develop an e-commerce system, websites I build will work with services like Paddle and Itch. If you want me to implement one of these third party services, just let me know! The account system is deliberately simple and easy to understand. It only takes one line of code to check if a user is signed in, and only one more to get their username. This makes it easy for even inexperienced web developers to work with the account system. If you're interested in getting a custom website, or have any questions, don't hesitate to contact me at email@example.com.
OPCFW_CODE
Recently the OpenAI team made news again by releasing a 335-million parameter pre-trained natural language model. This model, using Python and TensorFlow, can generate text based on preceding text with such impressive capabilities that it can be used to translate and answer questions. This team actually has models several times larger, but has not yet released them due to risks of abuse. Today I have released my small contribution to this awesome project - a deployable TensorFlow model and Java-based reference implementation which uses only the core (i. One artificial intelligence tool that I’ve been playing with lately is an algorithm called word2vec. The basic idea is that words are given positions in high-dimensional space, and the positions are optimized such that word distance indicates how often words are seen together. These numbers can then be used in a variety of ways, from a simple word-similarity search to recurrent neural networks. In this article I will outline some uses of this amazing approach, along with links to sample code and results. Text classification is a common machine learning task which is known in various contexts as sentiment analysis, language detection, and category tagging. Many standard AI tools can be used on text given an appropriate feature selection function, which essentially transforms text down into a high-dimensional vector. However there are also certain techniques that work directly on the text, and this article is about a couple of those techniques that are enabled and demonstrated by the new release of the CharTrie component of the SimiaCryptus utilities library. The recent wave of publishing and releases included a particularly interesting text analysis component that I’d like to talk about today. There are many possible uses, including text classification, clustering, compression, and creation. Most people would most likely recognize this as the data structure behind Markov strings or full text indexes. 
This new component is logically a Trie Map that counts n-grams. The idea is that we can break text down into a number of overlapping n-grams, i.e. N-character strings like the 4-grams “frog” or “n th”. Happy Friday! I’ve just finished reviewing and updating the next project in my backlog of old research to publish. It is an experiment in how to efficiently serialize a Markov tree. I got interested in the idea when exploring some of the curious properties of a Markov tree, specifically one based off a fixed population of N-grams derived from a continuous string. It turns out that most of the data in a piece of text, if not all, can be absorbed into the Markov tree structure and then encoded in the tree’s serialized form in a more efficient manner than is obvious for the string itself! This can be translated into a grammar very simply: Grammar grammar = GrammarBean.get(XmlTree.class); This translation happens according to a number of rules to translate various java types into grammar structures: * __Terminal Classes__ – Java classes are converted into sequence elements, where each field in the class is an element in the sequence. * __Super Classes__ – Java classes with the @Subclasses annotation become choice elements.
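The overlapping n-gram decomposition described above is easy to sketch. This Python fragment illustrates the idea only; it is not the CharTrie API:

```python
from collections import Counter

def ngrams(text, n):
    # overlapping character n-grams, e.g. the 4-grams of "frogs" are "frog", "rogs"
    return [text[i:i + n] for i in range(len(text) - n + 1)]

# counting the n-grams of a string gives the raw data behind a Markov model
counts = Counter(ngrams("a frog on a log", 4))
print(counts.most_common(3))
```

A trie keyed on these strings stores the same counts with shared prefixes, which is what makes the serialized tree form compact relative to the raw string.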
OPCFW_CODE
What is endurance testing?16 54478 what is verification and validation?34 79463 what is the difference between test case and test scenario.Explain with example?19 43198 How to write Negative test cases?12 36609 what is statergy?3 7970 hi all friends, i want to know about certification of testing.this is online exam.tell me the fee of this exam also. any boday tell me the institute who conduct this in delhi ncr.& who give some saurity also. praveen test engineer1 2321 when will u make update and modify the test object properties in the repository?3 4955 Explain the STLC?32 108716 Who will prepare FRS(functional requirement documents)? What is the importent of FRS?2 7541 Explain three tier architechture of the java project? what is the web server and what is the database server?1 7061 Can you tell me some thing about source code testing tools?2 4781 when the tester actually involves in Testing?(at which stage of SDLC)10 11188 what are general aspects of security testing?3 6487 Can we make activex dll also ti execute in some process as that of client ? How can we do? What is higher IMP or BHR How to change replication factor of files already stored in HDFS? Can you refer me to other entrepreneurs you have worked with? - Venture Capitalists please send jindal steel and power model question papers If company invest Rs. 3,00,000 in Chit & Fund and later received with Profit Rs. 3,50,0000 = (Rs. 3,00,000 + 50,000 profit amount). Pls advise how to pass the profit amount...and in which head the profit amount goes i got a backdoor offer in process global,Bangalore..Can i work with it? HOW to click on elements under moving banner, in selenium webdriver Star Delta Diagram with full description How do you culture N. gonorrhea on what type of media? I want technical questions of previous reqruitment examination Hi All, Is there any free automation tool for windows application and it's easy to use? 
I usually take a lot of time to regression-test my application when there is a new build on the live environments (about 10 environments). I wish I knew an automation tool for regression testing that is free and easy to use (maybe using C#). Could anybody advise me of a tool like that? I'd very much appreciate it ^^
Why do we have an earthing cable in the branch cable and a shield in the same cable?
Please also send me details about CRM 5 and CRM 7 security issues and scenarios.
Write a short note on ISDN.
Hint 2: A normal vector to the plane ax + by + cz = d is (a, b, c).

Hint 3: The projection of m onto the direction of N is (m, N) / |N| (here (m, N) is the scalar, or dot, product).

Let P be the projection of three-dimensional space onto the given plane. Find the matrix of this linear transformation in the standard basis i, j, k. [Hint: Find the coordinates of the normal vector N to the plane. Then use the fact that the projection of a basis vector m (m = i, j, or k) onto the plane is m - a, where a is the projection of m onto the direction of N.] You may wish to review the material about projections and the scalar (dot) product.

I am sorry, hint 3 from post #2 should be as follows.

Hint 3: The projection of m onto the direction of N is ((m, N) / |N|²) N, where (m, N) is the scalar, or dot, product. Previously I said that the projection was (m, N) / |N|, but this is the scalar projection, not a vector. To get the vector projection onto the direction of N, the scalar projection must be multiplied by the unit vector in the direction of N, which is N / |N|. This article in Wikipedia describes projections well.

((m, N) / |N|²) N = ((m, N) / 3) N, since here |N|² = 3. So for m = i = (1, 0, 0), for m = j = (0, 1, 0), and for m = k = (0, 0, 1), compute a = ((m, N) / 3) N. From here we should do m - a. Do I do (i, j, k) - a to give me a 3×3 matrix?

I'll drop the bold font. ((i, N) / |N|²) N is the projection of i onto N (the result is a vector), and similarly for j and k. You subtract that projection from the corresponding original vector (i, j or k), and you get the projection of that vector onto the plane.

If {u_i} is a basis for U and {v_j} is a basis for V, and A is a linear transformation from U to V, then the "matrix representation" of A in those two bases requires calculating A(u_i) and writing it as a linear combination of the basis vectors of V. The coefficients of A(u_i) form the i-th column of the matrix. So what are (1, 0, 0), (0, 1, 0), and (0, 0, 1) projected to?
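For anyone wanting to check their hand computation, here is a small NumPy sketch. The thread never states the plane explicitly; N = (1, 1, 1) is an assumption chosen only to be consistent with the |N|² = 3 that appears above:

```python
import numpy as np

# Projection of R^3 onto the plane with normal N, as a matrix:
#   P = I - N N^T / |N|^2
# i.e. each basis vector m maps to m - ((m, N) / |N|^2) N.
N = np.array([1.0, 1.0, 1.0])  # assumed normal; the thread only tells us |N|^2 = 3
P = np.eye(3) - np.outer(N, N) / N.dot(N)
print(P)

# Columns of P are the images of i, j, k under the projection.
# Sanity checks: P kills the normal, and projecting twice changes nothing.
assert np.allclose(P @ N, 0)
assert np.allclose(P @ P, P)
```

With this N, the matrix works out to 2/3 on the diagonal and -1/3 off the diagonal, matching the m - a computation done by hand.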
I am going to completely disagree with any of the answers here that claim "QA is where incompetent programmers go". Testing in the industry is a rapidly evolving beast. There was, perhaps, a time when quality was solely the realm of folks who clicked on things or tested manually (though I think the attitude that QA is 'failed' devs was inaccurate even then). Today the industry is moving towards test automation, but job titles and expectations haven't really caught up. Automated testing, in its variety of forms, is in some ways more challenging than software development. Depending on the type of testing being done, QA (or QE, as it is often called now) typically must have a really good understanding not only of the product(s) they test, but of the products those targeted products integrate with AND the infrastructure and use cases for all of those products. When I dev, I have a product and a feature to implement (in a very simplistic way). When a QE is working, they have a feature to be tested, but this means they need to consider all of the ways that feature might be used, edge cases, integration points and potential future directions in which that feature might be used. Additionally they need to keep, in some way, an understanding of their current regression suite(s) and know where this new feature and its changes should go, and what changes need to be made in the regression suite(s) in order to facilitate that. Additionally they need to know the code and environment (infrastructure/platform/etc.) well enough to provide useful, meaningful feedback. TL;DR: QE (and much of QA) is no longer pushing a button and saying "Herf Derf this didn't work". Rather, QE is writing the frameworks and expectations and doing the groundwork that lets the devs and product owners test stuff themselves. So, QA vs. QE. QA is Quality Assurance, which is where any remnants of manual testing would live.
But it's blending with Quality Engineering, where the focus is on developers who can automate complex system interactions in order to provide a health check for any changes in code, any environmental changes, or for systems in general. I think there is definitely a stereotype against QA, as evidenced in answers about QA on The Workplace, and that may be something you don't want to fight against. In which case, especially if you are implementing automated testing, I would recommend talking to your supervisors about the title 'Quality Engineer', as it tends to convey the technical aspects of the job a bit more. That being said: yeah, there are some folks, again as demonstrated any time QA comes up in the various Stack Exchange communities, who will think less of you for having anything to do with QA. These often, but not always, are the same folks who think having a separate quality team test or inspect their code is useless ("Because devs can write unit tests!"). To be honest, good companies and good people to work for should be interested in what you did rather than in a title. If you were training others, writing test plans and strategies, and creating a quality org from the ground up in your company (or played a part in doing that), and you were working to automate test coverage, triaging, reporting and the tools to support all that, then you've got some serious chops to add to your resume. Consider, for a moment, the Big 4. They all have pretty serious quality organizations within their companies. At least two push to have a 2:1 or 1:1 ratio of dev to QE for all of their products. The other two, from what I know, go with the rough rule of thumb of 4:1 (dev:QE). This shows in what they do. They aren't going to look down on you for having 'QA' on your resume, especially when it's paired with the work you are doing (which isn't easy by any stretch of the imagination). To leave aside my obvious gripes with how folks like to talk about QA:
QA is just another role that brings with it challenges that companies are very interested in solving. Having experience solving them makes you more valuable. Having a background in development (as many QA and all QE have) means that you are better poised to bring automated solutions to these challenges. QA expands your pool of opportunities; it will only limit it with companies that are uninterested in quality as a metric for their products and who thus rank QA experience below developer positions.
Wouldn’t it be wonderful if every piece of software also downloaded and installed its prerequisites? After installation, a setup wizard walks you through additional configuration options. These can be set later under the options menu, but it’s easier here. The first time you launch Shavlik Netchk Protect 7’s management console, the home page greets you with quick links to common tasks. After clicking a link on the quick-start page, help opens up to guide you through the task. Once set up, tasks can be run manually or scheduled to run automatically. Within minutes of installing Netchk Protect 7, I had detailed patch-related information for my test network: which patches needed to be downloaded and which machines needed to be patched first. Here are summary results of a security patch scan on an environment comprising physical and virtual machines. The machines with “VM” in their names are virtual. Note how a distinction is now made between a VM that is powered on and one that is off. Shown is a very useful chart that lists the top malware threats found on my test network.

9. Top Threats by OS: A similarly useful chart shows the top malware threats by OS. This could be used to prioritize patches. Before deploying agents to manage patches and threats, it is necessary to configure an agent policy. From the Threat Tasks tab, I added “scan archived files” by clicking on the check box. It is important to classify the risk presented by each category of threat and then decide how that threat should be treated if discovered. I found it useful to click the Default Action for all Threats button and select Quarantine. After a scan detects a threat and reports back to the management console, you can allow the threat and then push the agent policy out to the clients. The Sunbelt VIPRE engine identified and quarantined threats found on an infected machine. The threats were removed without compromising system stability.

I created a restricted security policy in which the user had to approve each executable as it ran. This screenshot is from the client agent; this list is not reported on the management console. With patches, I simply clicked the machine, then the patches tab below, then deploy. In contrast, with threats, information is not actionable in the lower pane. I found the Machine View to be the easiest to use during my testing. This is how a GUI should be: intuitive and easy to use. The Operations Monitor tests agent connections to determine whether a patch would be deployed successfully. Patch installation is so non-disruptive that users won’t even know it is happening. Note the processes update.exe and silent.exe.

19. Virtual Machine Scan: Here are the results of a security patch scan of virtual machines. Netchk Protect 7 simplifies patch management by treating virtual machines the same as physical machines. Service packs can be deployed to virtual machines either immediately or on a schedule.

21. VM Patch Deployment: You can deploy patches to virtual machines on disk when they are not running; the patches are applied when the VMs boot. Here, an unpatched Windows XP Pro VMware Workstation 6.5 virtual machine is being patched.

22. Getting the Message: I saw this message two to three times a day during my test period. At best, it was just a GUI crash. At worst, background tasks (downloading and deploying patches) crashed as well.
DESCRIPTION

dd exits with an error code when something goes wrong, and the messages can seem cryptic; believe me, I have frequently felt stumped after reading one. They look almost alien compared with other errors, almost dated, as if they belong to a lesser machine. The manual pages are the place to start:

- http://www.freebsd.org/cgi/man.cgi?dd(1) : DD(1) FreeBSD General Commands Manual. NAME: dd - convert and copy a file. SYNOPSIS: dd [operands ...]. DESCRIPTION: The dd utility copies the standard input to the standard output.
- man7.org > Linux > man-pages (Linux/UNIX system programming training): DD(1) User Commands. NAME: dd - convert and copy a file. SYNOPSIS: dd [OPERAND]...

The operands that cause the most confusion are conv=fsync, conv=fdatasync, conv=sync,noerror, conv=notrunc, oflag=dsync and the progress indicator.

Example:

dd if=/dev/sdd of=/home/dave/deadhd.bin conv=noerror,sync

This command reads the contents of the device /dev/sdd and outputs it to /home/dave/deadhd.bin, continuing past read errors. Many people seem to think that dd will "fill up read errors with zeroes" if you use the noerror,sync options, but this is not what happens; see http://linux.die.net/man/8/sg_dd for the details.

Note also that read errors are not always the disk's fault: missing, corrupted or incompatible drivers can produce invalid I/O too, so installing the latest drivers is worth trying before condemning the hardware.
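To make the point about noerror,sync concrete, here is a rough Python model of block-wise rescue copying. It is an idealized sketch of what the options are intended to do, not a description of any real dd implementation (which, as the sg_dd page explains, behaves more subtly), and `rescue_copy` is a name of my own invention:

```python
import os

def rescue_copy(src, dst, bs=512):
    """Copy src to dst block by block. When a block cannot be read, skip it
    and pad the output with NUL bytes so later data stays at the right
    offset (roughly the intent of dd's conv=noerror,sync)."""
    with open(src, "rb", buffering=0) as fin, open(dst, "wb") as fout:
        size = os.fstat(fin.fileno()).st_size
        offset = 0
        while offset < size:
            try:
                fin.seek(offset)
                block = fin.read(bs)
            except OSError:
                block = b""  # unreadable block: treat it as empty
            # Pad short or failed reads to a full block, like conv=sync.
            fout.write(block.ljust(bs, b"\x00"))
            offset += bs
```

On a healthy file this just copies the data, padding the final partial block to a full block size; the interesting behavior only shows up when a read fails mid-device.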
Resources for speakers interested in presenting at WIQCA events ✨🔮

What kind of talks are accepted for WIQCA events?
Currently, we have two regular talk series:
- Quantum 101: talks aimed at a general audience (generally technical folks, but no quantum-specific expertise).
- Research Seminars: more technical, research-focused talks aimed at late-undergrad-level expertise, but flexible given the topic.
Talk slots for both categories are usually 45 minutes long, but can be adjusted as needed. We generally try to record all of our talks so that folks in other timezones can enjoy them, but at the speaker's request we can skip that, no problem! You can see examples of both kinds of talks on our YouTube channel. We are building up the video catalog now; most of our initial events were not recorded as they were in person 😅 As a reminder, all WIQCA talks are governed by our Code of Conduct. Help us make this an inclusive event by familiarizing yourself with the Code of Conduct, and by ensuring that your talk is appropriate for this venue before submitting.

How can I submit or request a talk?
Fill out this Google form! Submissions will be reviewed ASAP, and if accepted the submitter will be contacted to confirm the date and time for the talk. If you have any questions about your submission, please send us an email at email@example.com.

Sweet, I got accepted, what happens now?
As soon as accepted: The WIQCA team will follow up with you to find a good date and time for the talk, at least 2 weeks out from the date of acceptance so we can market the talk.
One week before: For Quantum 101 talks only, someone from the team will reach out to check in on the talk prep, and is happy to help workshop or provide feedback on the content. We want to make sure that the 101-style talks are true to the spirit and will really engage the target audience in an accessible way.
About one week out from the talk, the WIQCA team will check in with the speaker to see if there are any questions or issues that have arisen. As part of that check-in, we will do a tech check to make sure that all of the streaming and screen sharing will work properly for the big day! We currently use Teams to host our talks, which you should be able to use from either a free desktop app or directly in your browser. We will test the Teams connection, audio, video, and any demos you might want to do, to make sure everyone will be able to see what awesome stuff you are showing!
The day of: One hour before the talk, the speaker and WIQCA team will do one last tech check to make sure all the devices are working properly. We can help with any last questions and run through our speaker checklist:
- Water/drink handy
- Any doors/windows closed for the best audio environment
- Slides and/or demos up and running
- Pet has enough treats 🐕
When the start time comes, we will get the call started and make sure you are invited/have the join link. We start most events with about 10 minutes of chatting while everyone arrives, so that people who have meetings right up until the event can get a brief break. During the talk, your WIQCA moderator will introduce you as well as handle questions from chat (if you like). We can either collect questions for the end, or help the chat by interrupting with questions when appropriate. After talks, we usually have a few minutes for any additional questions, as well as time for you, the speaker, to promote whatever you are excited about!
After the talk: Unless the speaker opts out, all WIQCA talks will be recorded and uploaded to our YouTube channel, primarily for folks in different timezones. If you want to preview the video before we upload it, let us know!
Tweaks to json_to_html.py, per @taylor13's email. Indicate the Controlled Vocabulary (CV) of interest: experiment_id (rendering as an html table). When we get to this, three changes are the priority:
1. For column headings, remove all "_" so that text can "wrap" and columns will be better spaced.
2. Order of columns should be:
- experiment id
- activity id
- experiment
- tier
- sub experiment id
- sub experiment
- parent experiment id
- required model components
- additional allowed model components
- start year
- end year
- min number yrs per sim
- parent activity id
- description
3. Can we make it possible to scroll through all rows of the table? Now, you have to keep pressing "next" to get to the next group. (At most you can scroll through 100 rows before pressing "next".) If we can scroll through all rows, then a user could also, I think, print out the table with a single "print" click. thanks, Karl
@taylor13 @dnadeau4 take a look at this updated format - the 100 limit is determined by an external library. @taylor13 take a look at the attached file (remove the *.txt so you can view this in a browser) and if this suits, we can close this standing issue CMIP6_experiment_id.html.txt @taylor13 @dnadeau4 the attached file should solve all the requirements for this issue - so have closed CMIP6_experiment_id.html.txt @taylor13 take a peek above - out of inboxes.. @durack1 -- looks pretty good, but it seems that the column widths are set by the width of the column labels at the top. We could squeeze the information together if we removed the "_" from the header labels (as I suggested before). Would that not be a good idea? Also, if I want to give access to someone on the outside, do I have to send them the CMIP6_experiment_id.html.txt file, or can I point them to a link that will just display the table? thanks, Karl @taylor13 this is what needs to be sent as part of the email (either the attachment or the link) that is drafted in the google doc (link in your inbox from yesterday morning)..
Regarding the formatting, we could iterate for the next couple of months on colours, shapes, languages etc. Or we could just send it :smiley: - to me the most important thing here is the content, not the presentation; the presentation is just a way of conveying the information, not a persistent format.. @taylor13 having said the above, I could also just make the tweak that you suggest.. I have some other stuff to finalize this morning so this could be added to the list.. Reopening.. as @taylor13 is not very keen on "_" characters.. @taylor13 ok so this should appease the html gods.. We can now use this file to interact with MIP co-chairs.. CMIP6_experiment_id.html.txt @taylor13 point your browser to: http://rawgit.com/WCRP-CMIP/CMIP6_CVs/master/src/CMIP6_experiment_id.html and voila! @durack1 what happened to all of the jquery tables niceness? column sorting etc...? @doutriaux1 did you click the link above? Or for your direct pleasure: http://rawgit.com/WCRP-CMIP/CMIP6_CVs/master/src/CMIP6_experiment_id.html @doutriaux1 make sure to use "http://" rather than the default "https://" when viewing @durack1 I clicked on your link. @doutriaux1 try again.. the updated link above.. nope, no luck. I'm on chrome inside the lab firewall. Robustness issue: Refused to execute script from 'https://raw.githubusercontent.com/WCRP-CMIP/CMIP6_CVs/master/src/jquery.dataTables.js' because its MIME type ('text/plain') is not executable, and strict MIME type checking is enabled.
CMIP6_experiment_id.html:9 Uncaught TypeError: $(...).DataTable is not a function @doutriaux1 "robustness" should be returned, I now get: <html> <head> <link rel="stylesheet" type="text/css" href="http://cdn.datatables.net/1.10.12/css/jquery.dataTables.css"> <script type="text/javascript" src="http://code.jquery.com/jquery-1.12.4.js"></script> <script type="text/javascript" charset="utf8" src="http://rawgit.com/WCRP-CMIP/CMIP6_CVs/master/src/jquery.dataTables.js"></script> <script> $(document).ready( function () { $('#table_id').DataTable(); } ); </script> </head> Wanna try on your problematic browser?
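For context, the underscore-stripping requested earlier in the thread amounts to something like the following sketch. The real json_to_html.py is not shown here, and `json_to_html_table` is a hypothetical name of my own:

```python
def json_to_html_table(records, columns):
    """Render a list of dicts as an HTML table, replacing underscores in the
    column headings with spaces so the header text can wrap (as requested)."""
    head = "".join(f"<th>{c.replace('_', ' ')}</th>" for c in columns)
    rows = "".join(
        "<tr>" + "".join(f"<td>{r.get(c, '')}</td>" for c in columns) + "</tr>"
        for r in records
    )
    # The id "table_id" matches the DataTable() initialization in the snippet above.
    return (f'<table id="table_id"><thead><tr>{head}</tr></thead>'
            f"<tbody>{rows}</tbody></table>")


records = [{"experiment_id": "historical", "activity_id": "CMIP"}]
html = json_to_html_table(records, ["experiment_id", "activity_id"])
```

The underlying JSON keys keep their underscores; only the visible `<th>` text changes, so sorting and scripting against the data are unaffected.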
I really enjoyed reading this. Quite concise, well organised and, I thought, quite comprehensive (nothing is ever exhaustive, so no need to apologise on that front). I will find this a very useful resource, and while nothing in it was completely "new" to me, I found the structure really helped me to think more clearly about this. So thanks. A suggestion: it might be useful to turn your attention to more specific process steps using the attention-directing classification tools outlined here. For example: Step 1: Identify the type of risk (transparent, opaque, Knightian). Step 2: List mitigation strategies for that risk type, considering the pros and cons of each strategy. Step 3: Weight strategy effectiveness according to the pros and cons, your ability to undertake it, etc. That's just off the cuff; I'm sure you can do better :) One minor point on AGI: how can you "get a bunch of forecasting experts together" on something that doesn't exist, and on which there is not even clear agreement about what it actually is? I'm sure you are familiar with the astonishingly poor record of forecasts about AGI arrival (a bit like nuclear fusion, and at least that's reasonably well defined). For someone to be a "forecasting expert" on anything, they have to have a track record of reliably forecasting something, WITH FEEDBACK about their accuracy (which they use to improve). By definition such experts do not exist for something that has not yet come into being and for which there isn't a specific and clear definition or description. You might start by first gaining a real consensus on a very specific description of what it is you're forecasting, and then maybe search for forecasting expertise in a similar area that already exists. But I think that would be difficult.
AGI "forecasting" is replete with confirmation bias and wishful thinking (and if you challenge that, you get the same sort of response you get from challenging religious people over the existence of their deity ;->). Thanks again, loved it.

I have not been to one of these before. I think I should be able to get there, depending on my daughter's work schedule. Is it okay just to turn up? :)

Whilst I appreciate the validity of the criticism offered here of the use of the word "emergence" (by itself) as if it were an explanation sufficient unto itself, I think it a little harsh. To call it "futile" almost acts as a semantic stop sign for the term itself. We need to take a little time to properly understand what is meant by emergence when used properly. First, it is an observation rather than an explanation. But it is an observation with useful descriptive power, since it observes that the phenomenon under consideration is a process with properties whereby larger entities, patterns, and regularities arise through interactions among smaller or simpler entities that themselves do not exhibit such properties. Not all properties that arise from interactions or combinations of smaller components are emergent (e.g. putting a whole bunch of magnets together just gives a larger magnetic field); not all things that arise are emergent. So, while "emergence" is hardly an explanation, and one is obliged to look for the mechanisms that lead to the emergent behaviour (such as how the polar hydrogen bonds in H2O give water surface tension, a property that a single H2O molecule does not exhibit), nevertheless its use as an observation has power, since it points us to look for (and ask questions about) how properties which do not exist in the subcomponents come to be via the interactions of the components (often multi-factor), and also to see if there are simple factors or descriptive rules that have predictive power (e.g. the flocking phenomena of birds).

Hi Capla, no, that is not what Gödel's theorem says (actually there are two incompleteness theorems).
1) Gödel's theorems don't talk about what is knowable, only about what is (formally) provable in a mathematical or logical sense.
2) The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by any sort of algorithm is capable of proving all truths about the relations of the natural numbers. In other words, for any such system there will always be statements about the natural numbers that are true, but that are unprovable within the system. The second incompleteness theorem, an extension of the first, shows that such a system cannot demonstrate its own consistency.
3) This doesn't mean that some things can never be proven, although it provides some challenges; it does mean that we cannot create a consistent system that can demonstrate or prove (algorithmically) all things that are true for that system.
This creates some significant challenges for AI and consciousness, but perhaps not insurmountable ones. For example, as far as I know, Gödel's theorems rest on classical logic. Quantum logic, where something can be both "true" and "not true" at the same time, may provide some different outcomes.
Regarding consciousness: I think I would agree with the thrust of this post. That we cannot yet fully explain or reproduce consciousness (hell, we have trouble defining it) does not mean that it will forever be beyond reach. Consciousness is only mysterious because of our lack of knowledge of it, and we are learning more all the time; we are starting to unravel some of the mechanisms by which consciousness emerges from the brain, since consciousness appears to be a process phenomenon rather than a physical property.
Choice: How do we make right decisions? (PSE)

If we have the right question and the right factual answer, will we make the right decision? Will we do the right thing?

[We looked at 'the right question' (and the right answer) quickly in "Q&A: the right question, anyone? (PSE)" < http://www.gotoknow.org/blogs/posts/472613 >. We reviewed the Buddha's principle for verifying truth, in a model described in the Kesamutti Sutta, in "Belief: Is Learning (a kind of) Believing? (PSE)" < http://www.gotoknow.org/blogs/posts/473288 >. (Note: the principle in scientific methods is to trust nothing and no one, but to verify all 'factual evidence'.)]

We recall that our learning of Plain and Simple English is iterative (looping). At the end of an iteration we have output which may or may not fit our purpose. How can we tell whether we have a good result, or whether we have done well or not? We can usually tell if we have set 'a standard measure' or we use a common standard measure (like a 'TASn.m' national standard). We make an assessment by following the measuring method or test prescribed by the standard. We compare our (test) result with the 'pass mark' set by the standard. But in learning, we expect to learn more with each iteration. So our later tests change to reflect the more we learn. We can set an 'adaptive test (standard)' by including tests for all changes from previous iterations.

loop 1: learn A ; at end, test output for A
loop 2: learn B ; at end, test output for A+B
loop 3: learn C ; at end, test output for A+B+C
loop 4: learn D ; at end, test output for A+B+C+D

(Some readers may note the increasing rate of change in our tests. As our simple learning becomes more and more complex, would we need more and more time to perform the tests?)

How often should we measure or test our fitness? In PSE, a test is required at the end of each iteration. In school, a test may be carried out at the end of each term, or once in so many iterations.
In nature, in some cases, a test can be life-or-death for the pass-or-fail output of each 'action' within an iteration. This is quite close to continuous assessment (fitness testing). In practice, only each individual learner 'may' have enough data and resources to perform continuous testing. Teachers would have problems with the amount of students' data and with their resources/time to test students' fitness in real time. What happens if we pass a test? Congratulations! Very well done. We can go and play the next level now. What happens if we fail a test? Please revisit the last session, repeat the learning process, then take the test again; repeat until we pass. [This repeat-a-class strategy is now considered more harm than good to learners. The argument is that repeating 'a class' destroys confidence and social respect, and wastes time and a place in that class for another learner. In iterative learning, repeating an iteration or a session may be done in minutes or hours, after school or at the weekend, not over a whole year.] What should be tested? For PSE, we would test 'recognition' (memory), 'applications' (patterns of use) and 'adaptation' (evolution by small change or mutation). What are tests for fitness practically about? Tests usually come as 'questions for answers' or 'challenges for responses' or 'filling holes or gaps' or 'connecting the dots by certain rules'... Learners use certain 'assumptions or beliefs or views' and 'facts or knowledge' (in memory) to make decisions that result in certain answers or responses or actions. There are many factors and many styles involved in making any decision. We may be influenced by internal preferences or biases, capacity to retain or recall memory, personal circumstances and so on. We may be influenced by cultural and situational conditions, currencies or popular trends and many other external values. To make tests on learning isolated from these factors is in itself a matter of making the right decisions.
Notes on decisions: (I read long ago): People prefer copying others' or previous decisions (>80%) over reasoning (<10%) from (valid) facts and (logical) rules. Sometimes people make decisions by intuition or personal preference, and other times they ask experts or gods for advice. [People copy a lot of decisions made by asking experts or gods or trees or seers or ....] (I learned): People don't like to make decisions 'on complex issues' or 'when they don't know enough' or 'where there are "noises"' [ideologies, religions, beliefs, traditions, statistics, comments, the Internet and so on]. So they go along with 'the default', thinking that the default is what the majority would decide. [Opinion polls and governments manipulate public responses this way.] Statistics is included as a valid 'profiling' tool in science. There are good arguments for statistics for frequently occurring and 'normal' things. But statistics is useless when it comes to big floods, earthquakes, or once-in-a-blue-moon events that may impact a large number of people. Thus, statistics should be used appropriately. (I noted): Different decisions can be made on the same issue by people of different age, sex, income, status, ethnic or cultural background and education. People don't just use facts publicly available to them, but also hidden private factors they consider important, in priority order. (I also noted): There are more decisions in favour of self-interest than altruistic or cooperative decisions, even when the decisions are made in a cooperative context. [This means politicians vote for a law (legislation) more because they would benefit from the law than because people would be better off by the law.] More recent research [search for and read 'Dan Ariely'] says: people's decisions are also influenced by the 'appearance of honesty', 'sexual appeal', 'peer pressure', 'decisional illusions' (in the same way as visual illusions) and 'proximity to money'.
To put these in PSE terms: people cheat, but only to the point that they (think they) still appear honest; people like decidedly more sexy outcomes; people choose to belong to a group; people don't (know how to) scientifically verify facts; and people prefer money when they feel (they are) closer to (getting some) money. A (70-year-old) theory in cybernetics says: a way to control complexity is to match the source of variety. Learning, in a way, is increasing both variety and complexity. So, learning should allow us to control or master complexity. [In learning PSE, we try to keep changes small in each iteration, in the hope of varying (learning) the control a little. But we should note the recent theory of the 'tipping point', and the old saying about the straw that breaks the camel's back.] What do all these mean? What we do depends on what we choose to decide. What we decide (to choose) depends on so many other things -- not just our (publicly known) 'goal'. We learn more from what we do. We do more if we have (small) successes. Each success may depend on many decisions we need to make. Once we succeed, we may be rich and famous for life. What we fail at may be remembered and used against us later.
Formerly First Looks Patterns. Part of the SwitchIt! Cause and Effect Series.

Build patterns on-screen and see them animate. SwitchIt! Patterns caters to young children and to users who require visual stimulation and cause-and-effect software.

How to Use: After launching SwitchIt! Patterns, you are greeted with an introductory screen with up-tempo music. You click the mouse once, and then can immediately begin "work" with the default settings. If you move the mouse to the top of the screen display, a "menu bar" appears. From this menu bar, you choose from 10 different pattern types. You scroll the menu bar horizontally from left to right and click on the desired pattern. The optional sound can be used to encourage activity, but can be turned off for determining whether vision is being used.

Uses with other learners:
- children who experience difficulty with attention may find the images interesting and motivating
- printed images can be used for coloring and cutting to develop manipulative skills and hand-eye co-ordination
- part-completed images can be used to work on symmetry, closure and prediction
- descriptive language can be developed via association with the images and their development
- movements created on the screen can be replicated in dance and drama sessions

- Complexity - a sliding scale (from 1-4) indicates how "busy" the pattern will appear onscreen.
- Colors - choose a pattern color (from nine choices) and background color (from nine choices).
- Debounce - how quickly the pattern will respond to the input, then appear and "move".
- Pre-acceptance Delay - the time between hitting the switch or clicking the mouse, and the software's reaction to your input.
- Serial Switches - choose this option if you have switches connected via an interface box in your switch port (usually connected into COM2).
- IntelliKeys - when you choose this option, an automated process sends an overlay to the IntelliKeys with the two switch ports on the board programmed, ready to accept switch input.
- Patterns - choose 1 of 10 patterns; select either "random" or "sequential" order of appearance.
- Reward - the reward can be an animated sequence or a color cycle, with visual appearance options including vivid, metallic, pastel or one color.
- Speed - there are 9 speed settings. The fastest is quite mesmerizing!
- Activities - you can set up the program so that the user receives an instant response with "1 step", or a response with a "pause", or promote more interaction with three or five steps to encourage greater effort. The "1 step" is ideal for switch training where you want the child to achieve every time.
- Sound - you can elect to have the sound on or off.
- OK - you have set all of the conditions and are ready to play again.
- Exit - this is where you quit and leave the program.
- Open/Save - you can save a setup by name (for individual users) and then open it at any time.

Targets the following age ranges:
- Early Primary
- Mid Primary
- Upper Primary

Fosters development in:
- Switch Use - Mouse, Trackball, Joystick - 1 or 2 switches - Switch Adapted Mouse
- Use your IntelliKeys to display flash cards, complete picture builds, and tell simple stories by touching the board or pressing an attached switch. The board will work as 1 or 2 switches. 
Minimum System Requirements: Windows 95, 98, ME, NT4, 2000, XP, Pentium 90MHz, RAM - 16MB (Win 95) or 32MB (Win 98/ME), SVGA suggested, sound card, 4x CD ROM Macintosh OS 7.5.5/OSX, Power Mac 7200/90, 11MB RAM, 256 colors suggested, 4x CD ROM
How do you find the prime factors of a number in C? Logic to find prime factors of a number, using a function: We ask the user to enter a positive integer and store it in the variable num. We pass this value to a function primefactors(). Inside the primefactors() function we write a for loop, initializing the loop counter to 2 - the smallest prime number.

How do you find prime factors of a number? The steps for calculating the prime factors of a number are similar to the process of finding the factors of any number.
- Start dividing the number by the smallest prime number, i.e. 2, followed by 3, 5, and so on, to find the smallest prime factor of the number.
- Again, divide the quotient by the smallest prime number.

How do you print all prime factors? Following are the steps to find all prime factors.
1) While n is divisible by 2, print 2 and divide n by 2.
2) After step 1, n must be odd. Now start a loop from i = 3 to the square root of n; while i divides n, print i and divide n by i.
3) If n is a prime number greater than 2, then n will not become 1 by the above two steps, so print the remaining n.

What are the prime factors of 42? The prime factors of 42 are 2 × 3 × 7, where 2, 3 and 7 are prime numbers.

What is the prime factorization of 120? The prime factors of 120 are 2 × 2 × 2 × 3 × 5, i.e. 2³ × 3 × 5, where 2, 3 and 5 are prime numbers.

What is the prime factorization of 48? The prime factorization of 48 is 2 × 2 × 2 × 2 × 3, i.e. 2⁴ × 3.

What is the prime factorization of 72? When a composite number is written as a product of all of its prime factors, we have the prime factorization of the number. For example, we can write the number 72 as a product of prime factors: 72 = 2³ · 3². The expression 2³ · 3² is said to be the prime factorization of 72.

What is the prime factorization of 54? The prime factorization of 54 is 2 × 3 × 3 × 3. Therefore, the highest prime factor of 54 is 3.

What are the prime factors of 200? Factors of 200: 1, 2, 4, 5, 8, 10, 20, 25, 40, 50, 100 and 200. Prime factorization of 200: 2 × 2 × 2 × 5 × 5, i.e. 2³ × 5².

What are the prime factors of 72? The prime factors of 72 are written as 72 = 2 × 2 × 2 × 3 × 3.

How to find prime factors of a number in C? 1) Write a C program to print all prime factors of a number. 2) Write a program in C to find all prime factors of a given number.

What is a prime factor? Factors of a number that are prime numbers are called the prime factors of that number. For example, 2 and 5 are the prime factors of 10.

How do you check prime factors of a number using logic? Step-by-step descriptive logic to find prime factors:
1) Input a number from the user and store it in a variable, say num.
2) Run a loop from 2 to num/2, incrementing by 1 each iteration. The loop structure should look like for (i = 2; i <= num/2; i++).
3) Inside the loop, first check whether i is a factor of num. If it is a factor, then check whether i is prime. Print the value of i if it is both prime and a factor of num.

Why loop only up to num/2? Because the smallest prime is 2, and any proper factor of a number n is at most n/2.
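The trial-division procedure described above can be sketched compactly. The article's examples are in C, but here is the same algorithm as a Python sketch, with the usual optimization of dividing out 2 first and then testing only odd candidates up to √n:

```python
def prime_factors(n):
    """Return the prime factorization of n as a list of primes (with multiplicity)."""
    factors = []
    # Step 1: divide out all factors of 2, so the remaining n is odd
    while n % 2 == 0:
        factors.append(2)
        n //= 2
    # Step 2: trial-divide by odd numbers up to the square root of n
    f = 3
    while f * f <= n:
        while n % f == 0:
            factors.append(f)
            n //= f
        f += 2
    # Step 3: whatever remains (> 2) is itself a prime factor
    if n > 1:
        factors.append(n)
    return factors

print(prime_factors(42))   # [2, 3, 7]
print(prime_factors(120))  # [2, 2, 2, 3, 5]
```

Multiplying the returned list back together recovers the original number, which is a handy sanity check.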
PowerPoint and Multimedia - Explore more on PowerPoint and Multimedia. By: Austin Myers Codecs Are a Must - Find Them Here What the heck is a codec and why do I need them? Codec stands for COmpressor/DECompressor, and it does pretty much what the name implies. Codecs are used to compress multimedia files for transfer and storage, and then to reverse the process for playback. If you have ever used "Zip" to compress a file, you have the general idea. Why are there so many different codecs? Different forms of multimedia compress very differently depending upon their contents. Consider the difference in the sound of a car engine running and a full orchestra playing music. The engine's sound is of a very low frequency and repetitive, while the orchestra produces a full frequency range of sound with little repetition. Obviously the engine sound would compress much differently than the orchestra music, so we use a different tool, or codec, to get the job done. The same analogy holds true for video. Codecs are constantly being upgraded and the technology envelope pushed in order to compress files smaller without losing quality during playback. The good news is that Microsoft foresaw the need for future codecs and built the MCI so we can simply install them as needed. In plain terms, a codec is just another module of the MCI, and the MCI makes it available to other software, in our case, PowerPoint. There are literally hundreds (thousands?) of codecs in use today, and no one would have all of them installed on their machine. However, there are the "common" ones that should be on every machine. In order to determine which codecs are installed on your machine, go into Control Panel and double-click Multimedia. Click on the Devices tab and look for "Audio Compression Codecs" and "Video Compression Codecs". Click on either of these to see a list of the codecs installed. 
Here is a list of some of the most common codecs:
- TrueSpeech Software Audio Codec
- Indeo R3.1 Video Codec
- Indeo R3.2 Video Codec
- Indeo 5.04 Video Codec
- Microsoft Audio Codecs: ADPCM Audio Codec, CCITT G.711 A-Law and u-Law Audio Codec, GSM 6.10 Audio Codec, IMA ADPCM Audio Codec
- Microsoft Video Codecs: RLE Video Codec, Video 1 Video Codec, Cinepak Video Codec
- Fraunhofer IIS MPEG Layer-3 Codec

In putting this information together I have tried to track down sites where codecs could be downloaded. I found two things: First, folks that create codecs tend to play it very close to the chest; you won't find many web sites that offer lots of them for download. And second, the companies change the URLs to their sites on a regular basis, so placing them in this document is a waste of time. So the best advice I can give is to use your favorite search engine and go hunting on the web. Late breaking news! I have found the Nimo All in One Codec Pack that will install many of the possible codecs you might need. Click here to get the download...

Bringing It All Together

OK, we have the WIN.INI and SYSTEM.INI files straightened out, and we have the standard codecs installed; now what? Next come driver issues. The most common problem I've seen with multimedia and PowerPoint is in the video drivers. If you are experiencing a situation where PowerPoint allows you to insert the multimedia file but it doesn't play as expected, the chances are it's a video problem. What can be done about it? Before "fixing" the problem, let's try to determine that it is in fact a video problem. To do this, restart your computer in Safe Mode and run the presentation. It won't be pretty, but the question to be answered is: did it play properly? If the answer is yes, then it's almost certain you have a video driver issue. There are three basic "fixes" for this situation. First, go to the web site of your video card manufacturer and see if there is an updated driver for it. 
If you aren't certain which driver to use, most manufacturers provide a small utility to examine your system and give you this information. You might also be able to get this information from Control Panel - System - Device Manager; look for "Display Adapter". Second, change your display color depth. I wish I had a magic formula to tell you which setting to use, but it depends upon your particular system, so simply try different settings to see if one works properly when you play your presentation. Third, lower your video hardware acceleration. Again go into Control Panel - System - Performance. You will see a button labeled "Graphics". Click it and you are presented with a slider, which may be used to set the acceleration level. Move it down one "notch" at a time and try the presentation. One or a combination of these things should fix the problem. However, I will note that I have run into problems with certain new video cards that I wasn't able to resolve; the answer at that point was to replace the video card. As a side note, many of the video cards that have video capture or "video in" tend to install their own proprietary codecs. These will work fine on your machine, but if you move the file to another machine it may not work at all. Just a word to the wise. These same issues are applicable to sound playback. First make certain the WIN.INI and SYSTEM.INI files are correct, then make certain the required codecs are in place, and then play the file in "mplayer.exe" (mplayer32.exe for Windows NT). If you are unable to play the sound in this manner, there are problems with your sound subsystem. The fixes are the same as for video issues. Make certain you have the latest driver for your sound card from the manufacturer. Next make certain your playback settings match or exceed the quality level of your file. 
As an example, if you have your system set to produce only 8-bit mono playback and the file is 16-bit stereo, the quality of the sound will obviously suffer, or it may not play at all. The last area to look at is the audio hardware acceleration. Again, try adjusting it gradually and play the presentation after each adjustment. All the audio adjustments are made in Control Panel - Multimedia - Audio. Up to this point we have been dealing with how to play multimedia "natively" in PowerPoint. By that I mean using the standard method of Insert - Sound/Movie - From File. There are a number of other ways to do this; the following are some examples. If you use "drag and drop" to place a multimedia file on a slide, an instance of Windows Media Player is created. At that point Media Player is in control of the playback. To play a non-supported file format (QuickTime, Real Media), you may use Insert - Object - Create From File, navigate to the file and insert it. This will call the player that is associated with that file type. This of course assumes you have the correct player installed on your system. You may also hyperlink to the file. Select the object or text you want to assign the hyperlink to, and choose Insert - Hyperlink. In the "Link To" window select Existing File or Web Page, and then navigate to the desired file. Again the player associated with the file type will be called. I happen to like using this method when giving a presentation because it allows me to control when the movie is played. There are a number of additional methods to play multimedia in PowerPoint using Visual Basic for Applications, ActiveX controls, or Visual Basic controls already existing in PowerPoint. However, they are well beyond the scope of this document and are best left to the programmer types. There is one last method that I should mention: if you have an OLE-compliant application, it too may be inserted as an Object. 
One place this might come in handy is the playback of DVD movies; neither PowerPoint nor Windows Media Player is equipped to handle this format.

On the Portability of Presentations

I've included this section because many users ask why their presentation works on one machine and not another. As you can see from all the above information, it isn't so much a PowerPoint problem as it is an environment (Windows setup) problem. We simply have no way of knowing in advance how the receiving user has his/her machine set up. What can be done to maximize success in transporting presentations? Don't create a presentation with critical timing on a fast machine and expect it to work the same way on lesser machines. If you have a video playing, don't add to the computer's workload by having other animations happening at the same time. It's also a good practice to place a couple of seconds between slide transitions and the start of a video. Use multimedia file formats that are likely to be found on most machines: for video this is the AVI format using the Cinepak codec or the (preferred) MPEG format; for audio use the Microsoft WAV format. I can hear the grumbling already about quality and file size. Folks, if you want to distribute the presentation to others, you have to use the lowest common denominator. Remember, a big file that plays properly is a lot better than a small file that doesn't play at all. Be certain that you include any multimedia files along with the presentation. Because Microsoft uses the word "Insert" we tend to think the file has been embedded in the presentation. Unfortunately this isn't true: the multimedia file has been "linked" to the presentation and is called when needed. PowerPoint expects to find the file in the same place it was originally linked from. That about covers the "generic" information on using PowerPoint and multimedia. 
I'm certain there are a number of issues that are specific to your machine and presentation, but I couldn't possibly cover all of them in this document. If you have read through this information, made the suggested changes, and still encounter problems or issues I urge you to visit the PowerPoint newsgroup - microsoft.public.powerpoint where I and a bunch of wonderful folks hang out working together to get the most out of PowerPoint. Heck, don't wait until you have problems, just stop in and say hello. You never know what you might learn or teach others. Microsoft PowerPoint MVP
MBA Program: University of Michigan Ross School of Business MBA Concentration: Marketing Hometown: Wellesley, MA Undergraduate School and Major: Middlebury College Current Title at Microsoft: Product Marketing Manager, Cloud Marketing OnRamp Program How would you describe your role to your mother? Currently, I am a Product Marketing Manager in the Cloud Marketing OnRamp program. The Cloud Marketing OnRamp program is specifically designed for MBAs who interned at Microsoft during business school and who have returned as full-time employees. The OnRamp program lasts one year and consists of three rotations, four months each, in key areas of our cloud marketing business such as Product Marketing, Integrated Marketing, and Business Planning. In my first rotation on the Apps and Infrastructure product marketing team, I helped develop new messaging for microservices, an architectural style of building applications. In my second rotation on the Customer Success product marketing team, I worked on the Azure portal and mobile app by helping to drive awareness of the platform's features and make improvements to the user experience. In my final rotation in the Global Demand Center, I am developing an Azure migration global engagement program to help reach potential customers who are interested in moving from on-premises computing to the cloud. A fun fact about me people would be surprised to know is… I am an avid shark fisherman. I grew up fishing from the shores of Cape Cod and Martha's Vineyard, catching bluefish and striped bass. That passion has grown quite a bit! Today, I venture 50 miles off the coast of Cape Cod to catch mako sharks and blue sharks that can weigh over 250 pounds. What was your greatest personal or professional accomplishment? My greatest personal achievement was being awarded a Volunteer Recognition Award by Big Brothers Big Sisters of New York. 
During my four years in New York City prior to business school, I was highly involved in helping BBBS of NYC to grow the number of African American mentors, both as the co-president of Bigs United, an affinity group of BBBS, and as a mentor of a 16-year-old teenager from the Bronx. Why did you choose to work at MSFT? I chose to work at Microsoft because it is a company whose values and mission align closely with those of my own. I have never had the opportunity to work for a company whose leadership is so highly committed to culture as well as diversity and inclusion. Satya Nadella and Chris Capossela have been huge change agents for diversity and inclusion and it has been amazing to watch the improvements they have implemented across the company. For example, they recently implemented a company-wide policy that requires employees to commit to helping to create a more diverse and inclusive community as a part of our core priorities. This means that this is a factor that will be evaluated as a part of your annual bonus compensation. No other company I have worked for has tied its values around diversity and inclusion to compensation which is why I am extremely proud to be working at Microsoft. What did you love about the business school you attended? The thing I loved the most about the Ross School of Business was by far the people. At Ross, there is a deep sense of school pride, collaboration, and passion for helping others. Prior to attending Ross, I remember connecting with students and alumni from all over the world about their experiences. All of them were quick to respond and extremely insightful as I made my final decision about where to attend business school. During my time at Ross, I developed strong friendships with my fellow classmates who pushed me to new heights. I owe my current role at Microsoft to my peers who helped me prepare for case competitions and interviews. Post-MBA, I have found the Ross alumni network to be outstanding. 
Even at Microsoft, I've found the network to be quite strong. What's the most valuable thing you've learned so far at MSFT? The most valuable thing that I have learned so far at Microsoft has been how to effectively drive change within such a large organization and with limited resources. During my time on the Azure Customer Success team, I was responsible for making improvements to the Azure portal. I took recent customer insights and helped to create a strategy to turn those insights into product improvements. To do this successfully, I needed to get input from a variety of stakeholders and drive alignment across teams. Which manager or peer has had the biggest impact on you at MSFT and how has he or she made you better in your role? The manager who has had the biggest impact on me at Microsoft is my current manager, Jerry Lee. In addition to his responsibilities as a Cloud Marketing Business Manager, Jerry manages all OnRampers at Microsoft. Jerry has been in many different roles at Microsoft and, as a result, is able to provide excellent constructive feedback. By taking his feedback and putting it into action, I've been able to improve both my hard and soft skills in this role. I also appreciate how great Jerry is at helping others think through difficult problems. Whenever I am faced with a new and ambiguous project, I always enjoy getting Jerry's input as he has usually experienced some aspect of it in the past. I feel extremely lucky to have Jerry as my manager as he is always looking to help others improve in their role. What advice would you give to someone who wants to work for MSFT? The advice I would give to someone who wants to work at Microsoft is to think about our mission and the culture we are building and how they align with your values. Microsoft is an amazing company, but the employees are what make it such a great place to go to work every day. 
If you are someone who loves helping other people and organizations succeed through technology, then Microsoft might just be the place for you.
The Raspberry Pi-powered mobile robot 21st October, 2013 It's been a long time since I started my University final year project and came up with Amoeba-1. Now I'm pleased to report on what I have been working on in my spare time for the past couple of weeks - the next generation of that project - and the new robot, AmoebaTwo. This device is an evolution of the previous one, in that it certainly takes account of the learnings from that project. However, neither the hardware nor the software is as complex as it was before. This is in part due to time constraints (I only have my spare time to work on this, whereas before it was like a full-time job) but also because the last machine was far too complicated, which resulted in a number of issues best left in the report. On to the details… The robot once again makes use of a BigTrak chassis. I haven't yet found anything better - you can often get it for less than £25, it's got a great gearbox, and it's really easy to take to bits. This time it is considerably less modified - I didn't need to chop half the top off at least - and it looks pretty cool whizzing around the flat. On-board processing is provided by a Raspberry Pi, Model B. This is powered by a 5V 1000mA mobile phone battery charger - one of those mini bricks you can drop in your bag for when your phone runs out of juice. It turns out the Raspberry Pi isn't all that hungry for power, because I have had it running for hours and the unit reports less than 20% charge used. This unit powers the Pi and the LEDs when they're on - the motors are powered by the 3 D-cell batteries in the BigTrak base, as always. The motor interface is provided by a PiFace board. The advantage of this is that it is gloriously simple to work with, and I don't accidentally blow anything up (something of a habit with me and electronics). Additionally, it was designed for the Raspberry Pi, so it slots very nicely on top. This gives a total of 8 outputs and 8 inputs available. 
The first two outputs are hooked up to relays - this is what drives each motor. Unfortunately the by-product of this is that the robot can't go backwards, but it's not the end of the world for now (the alternative was for it never to be stationary, which didn't seem like a great idea!) The third and fourth outputs are currently used for lighting up the front (blue) and top (green, which I swapped out for the IR one that used to be there) LEDs. The main red switch on the top is a break switch for the motor power (for when it inevitably all goes wrong). Finally, the speaker is hooked up to the Pi's headphone socket. This is all held together by a breadboard glued onto the inside of the battery pack. Of course there is also some Lego in there! Whoever designed the Raspberry Pi clearly knew what they were doing - it fits inside the blocks perfectly. On the front are two bump switches. These are ultimately going to be used to protect the motors, and to an extent the furniture - however, the robot goes at quite a pace, so getting stabbed by one of those points isn't going to do a lot of good either, I think… The Pi has a WiPi device for wireless networking. Initially this connected to my home wireless network and I had to SSH over that to get in. Now I have modified it so the Pi acts as a router itself. This means any of my devices (including tablets, phones etc.) can connect to a wireless network the Pi creates, which provides better bandwidth and latency when doing device-to-device networking (useful when driving robots!) Additionally, the Pi can use NAT to forward on internet requests over the LAN, if you connect a cable to it. All very useful. I've written a number of pieces of software for the robot so far. The first is a core library, which wraps all the PiFace methods in an interface designed specifically for this robot platform. This makes coding for it pretty quick, even though it wasn't all that sluggish to begin with. 
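As an illustration of the core-library idea, a wrapper like this could expose robot-level methods over raw digital outputs. This is a minimal sketch only: the pin numbers and the injected write_output callable are hypothetical stand-ins, not the actual PiFace API.

```python
class Robot:
    # Hypothetical pin assignments loosely matching the post: outputs 0-1
    # drive the motor relays, 2 lights the front LED, 3 the top LED.
    LEFT_MOTOR, RIGHT_MOTOR, FRONT_LED, TOP_LED = 0, 1, 2, 3

    def __init__(self, write_output):
        # write_output(pin, value) is injected so the class can be exercised
        # without PiFace hardware attached (a fake logger works for testing)
        self._write = write_output

    def forwards(self):
        # Both relays on; the hardware cannot reverse, so this is the
        # only drive direction
        self._write(self.LEFT_MOTOR, 1)
        self._write(self.RIGHT_MOTOR, 1)

    def stop(self):
        self._write(self.LEFT_MOTOR, 0)
        self._write(self.RIGHT_MOTOR, 0)

    def light(self, led, on=True):
        self._write(led, 1 if on else 0)

# Usage with a recording fake instead of real hardware:
log = []
bot = Robot(lambda pin, value: log.append((pin, value)))
bot.forwards()
bot.stop()
print(log)  # [(0, 1), (1, 1), (0, 0), (1, 0)]
```

Injecting the output function keeps the robot logic testable on a desktop machine, with the real PiFace call dropped in only on the Pi itself.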
The rest are mostly experiments (CLI interface, bump sensor tests) but the one I have put the most work into is Glove, which allows you to control the robot over HTTP. Glove uses Facebook’s Tornado HTTP server for Python to serve HTTP API endpoints for controlling the robot. So a GET /drive/forwards, GET /drive/stop, GET /light/top/on and so on all work. I also built a basic UI using Angular and Bootstrap, which you can see a screenshot of below. The beauty of this approach is all you have to do is hook the device you want to use onto the robot’s wireless network, then hit the URL and you’re good to go. It has also been specifically designed to work with touch events, so it’s really smooth on tablets. There’s a lot more software and hardware work to do on the robot. I really want to integrate the BigTrak keypad (it shouldn’t be hard, mine is just a bit too destroyed to get working so I will need to pick up a replacement) and there’s a lot of cool stuff to do in code and around vision. Bonus - soldering in the kitchen…
RSS 2020 Workshop
July 12, 2020, Oregon State University at Corvallis, Oregon, USA

Task and Motion Planning (TAMP) frameworks show remarkable capabilities in scaling to long action sequences, many objects and a variety of tasks. However, TAMP usually assumes perfect knowledge, relies on simplified (kinematic) models of the world, requires long computation times and most of the time yields open-loop motion plans, all of which limit the robust and practical applicability of TAMP in the real world. On the other end of the spectrum, reinforcement learning (RL) techniques have demonstrated, also in real-world experiments, the ability to solve manipulation problems with complex contact interactions in a robust and closed-loop fashion. The disadvantage of most of these approaches is that they work for a single goal only, require huge numbers of trials and have trouble showing the same long-term sequential planning behaviors of classical TAMP frameworks. The goal of this workshop is to investigate if and how learning can address the challenges imposed by TAMP problems, to develop (novel) methods that achieve both the generality of TAMP approaches and the complex interaction capabilities of RL policies. To discuss this, we are trying to bring together experts from the fields of

Schedule:
09:00 - 09:10  Introduction
09:10 - 09:50  Dieter Fox
09:50 - 10:30  Russ Tedrake
10:30 - 10:40  Discussion 1
10:40 - 11:00  Coffee break
11:00 - 11:40  Lydia Tapia
11:40 - 12:20  Tomas Lozano-Perez
12:20 - 12:30  Discussion 2
12:30 - 02:00  Workshop lunch
02:00 - 02:40  Sergey Levine
02:40 - 03:20  Jeannette Bohg
03:20 - 03:30  Discussion 3
03:30 - 04:00  Poster spotlight presentations
04:00 - 04:30  Poster session & coffee
04:30 - 05:10  Georg von Wichert
05:10 - 05:50  Weiwei Wan
05:50 - 06:00  Discussion 4 and summary

We solicit 2-3 page extended abstracts using the standard RSS template. References do not count toward the page limit. 
Topics of interest (but not limited to) are

Submission deadline: April 9, 2020, anywhere on Earth
Acceptance notification: April 16, 2020

Note that due to possible visa processing times, this submission deadline is pretty early. Submissions should be sent directly to Danny Driess as a PDF file. Additional video attachments are welcome. We seek original research, late-breaking results that still need discussion, or work that discusses the workshop topic (with empirical data or theoretical foundation). An overlap with submitted/accepted papers is acceptable if they have not been presented before. Accepted contributions (and optional video attachments) will be published on this website.
While there is a generally accepted precise definition for the term "first order differential equation", this is not the case for the term "bifurcation". View "bifurcation" as a description of certain phenomena instead. In a very crude way, we will say that a system undergoes a bifurcation if and only if the global behavior of the system, which depends on a parameter, changes when the parameter varies. Let us illustrate this through the population dynamics example. Indeed, consider the logistic equation dP/dt = P(1 - P) describing a certain fish population, where P(t) is the population of the fish at time t. If we assume that the fish are harvested at a constant rate (for example), then we have to modify the differential equation to dP/dt = P(1 - P) - H, where H > 0 is the constant harvesting rate. Here is a simple example of a real-world problem modeled by a differential equation involving a parameter (the constant rate H). Clearly, the fishermen will be happy if H is big, while ecologists will argue for a smaller H (in order to protect the fish population). What then is the "optimal" constant H (if such a constant exists) which allows maximal harvesting without endangering the survival of the fish population? First, let us look at the equilibria (or constant solutions) of this model. We must have P(1 - P) - H = 0. This quadratic equation has two solutions when H < 1/4, exactly one (P = 1/2) when H = 1/4, and none when H > 1/4. This is an example of what is meant by "bifurcation". As you see, the number of equilibria (or constant solutions) changes (from two to zero) as the parameter H changes (from below 1/4 to above 1/4). Note that this is just one form of bifurcation; there are other forms or changes, which are also called bifurcations. The Bifurcation Diagram A very helpful way to illustrate bifurcations is through a bifurcation diagram. Again we will illustrate this tool via the harvesting example. Indeed, consider the fish population modeled by the equation dP/dt = P(1 - P) - H, where H > 0 is the constant rate at which the fish are harvested. As we saw before, depending on the number H we may have two, one, or no constant solutions. 
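The equilibrium condition can be solved explicitly. Assuming the standard normalized logistic model dP/dt = P(1 - P) - H (consistent with the carrying capacity P = 1 and the critical value 1/4 quoted in this section), the algebra runs:

```latex
P(1-P) - H = 0
\;\Longleftrightarrow\;
P^{2} - P + H = 0
\;\Longleftrightarrow\;
P_{\pm} = \frac{1 \pm \sqrt{1 - 4H}}{2}.
```

Real equilibria exist exactly when 1 - 4H ≥ 0: two equilibria for H < 1/4, a single one (P = 1/2) at H = 1/4, and none for H > 1/4, which is the bifurcation described in the text.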
Let us draw this on a diagram with two axes: H on the horizontal axis and P on the vertical axis. Let us add some vertical lines describing the phase lines. Indeed, for every number H, the vertical line given by H is the phase line associated with the differential equation dP/dt = P(1 - P) - H. Recall that the phase line carries information on the nature of the constant solutions (or equilibria) with respect to their classification as sources, sinks, or nodes. This classification is given by the sign of the right-hand side P(1 - P) - H. The graph of this function for different values of H is given below. Putting everything together we get the following diagram (which is called the bifurcation diagram). Instead of just drawing some phase lines, we will usually color the regions. The next picture illustrates this very nicely. Let us use this diagram to discuss the fate of the fish population as the parameter H increases. When H = 0 (no fishing), the fish population tends to the carrying capacity P = 1, which is a sink. If H increases but stays smaller than 0.25, then the fish population still tends to a new and smaller number, which is also a sink. When H is increased more and exceeds 0.25, then the differential equation has no equilibrium points (constant solutions). The fish population is decreasing and crosses the t-axis at finite time. This means that the fish population will vanish completely in finite time. Hence, in order to avoid such a catastrophic outcome, H needs to be slightly lower than 0.25, which is called the optimal harvesting rate. You should also keep in mind that a slightly smaller number will be a better choice than H = 0.25 itself, since for H = 0.25 the only equilibrium point P = 0.5 is not a sink (in fact, it is a node), and as soon as the population P falls below 0.5, we will again witness extinction in finite time. The next animation illustrates the behavior of the solutions as H changes. The animation is based on the differential equation dP/dt = P(1 - P) - H. Click here for Exercises on bifurcation. Do you need more help? Please post your question on our S.O.S. Mathematics CyberBoard.
Author: Mohamed Amine Khamsi
Use dart:html library in AngularDart. I want to know how to use (if it's possible) dart:html in AngularDart. I tried it in the default example created by WebStorm for an AngularDart project (the todo list). I tried inserting some Dart code, but it doesn't work. It works only if I insert it in the auto-implemented ngOnInit of OnInit. Is that the only way? Is using Dart to manage the DOM in AngularDart a correct practice? I need AngularDart for its full-featured routing system. — You can get at underlying dart:html elements in AngularDart in a number of ways. For instance, with a template reference: <div #myEl></div> {{foo(myEl)}} and, in the component: import 'dart:html'; @Component(...) class ... { foo(DivElement div) { ... } } You can also get that div via ViewChild: ... @ViewChild('myEl') DivElement div; ... And you can ask for it in your component's constructor: @Component(...) class MyClass { DivElement div; MyClass(Element e) : div = e; ... } And you can implement "functional directives," which don't have a class at all, but rather just call dart:html APIs on the component when it's created: @Directive(...) void myFunctionalDirective(Element e) { ... } Hopefully one or more of these use cases satisfies your needs. Do remember that any time you use the dart:html library, you may do things that AngularDart can't track, and you may get confusing behavior. It's best to let Angular do as much as it can for you, and only use dart:html as sort of the "back end" of some small components and let Angular wire them together. But that is a very large topic that could fill a small book :) — I'm new to Dart and AngularDart and I need it for university. I don't understand how to use the last @Directive: are DOM variables initialized in that function? Could you kindly write me an example, for each approach, of a text input and a button that on click prints that input into a DOM text element? For example, this is the HTML: Read input. Really, thanks for your answer. — This is not something you want to do with the native DOM API, almost certainly.
For this, you want class MyComponent { String written; String shown; } with template <input [(ngModel)]="written" /><button (click)="shown = written">Read Input</button> {{shown}}. To risk giving you correct but harmful advice, the functional directive approach would be @Directive(selector: "test-input") void neverWriteThisCode(Element e) { e.onClick.listen((_) { document.getElementById('read-text').text = (document.getElementById('test-input') as InputElement).value; }); } Which is awful, unless your case is very, very special. — Ok, so this is not the best way... either AngularDart or plain Dart. I need to find a way to manage/create routes in plain Dart; otherwise I'll switch to AngularDart. Thank you so much.
Life sometimes gets a little more complicated than one would expect. Database design is no exception, and therefore you need to think outside the box from time to time and come up with new ideas. As the title suggests, I am going to show you a more complex way to create your table identifiers, i.e. Primary Keys. Usually, the data rows stored in tables are identified by a unique identifier called a Primary Key. A Primary Key is a column by which you can distinctly identify the data you were looking for. As you would expect, the column should be an integer, bigint, long or really any other numerical data type, depending on the size of your data and the database vendor you are using. In fact, a Primary Key can also span multiple columns, in which case it is called a Composite Primary Key. A Composite Primary Key is a combination of two or more columns in a table that can be used to uniquely identify each row in the table. Uniqueness is only guaranteed when the columns are combined; taken individually, the columns do not guarantee uniqueness. An example is worth a thousand words. Ok, so what are the cases for using a Composite Primary Key instead of a simple single-column Primary Key, probably set as an auto-increment? Let's imagine that you are designing a table for a financial institution where transactions are being stored. In this case, you need the possibility to store a so-called "pool transaction" or "batch payment", where multiple transactions are stored under the same "main identifier". In the example below, you can see that the first two columns, id_transaction and id_transaction_2, together form a Composite Primary Key. Then, when you look at the transaction where id_transaction = 6, you will notice that this is the case of a pool transaction with a total amount of 4 000 (2000 + 1200 + 800), probably made from different client accounts.
Now you can see how uniqueness is guaranteed only when both columns id_transaction and id_transaction_2 are considered. It's nothing new. Sure, this concept is not a breakthrough, and lots of you would design the table in another manner, which is completely ok. For instance, you could set id_transaction as an auto-increment Primary Key, hence making it a unique identifier for each transaction, and then store the "main identifier" (of the superior transaction) under id_transaction_2. This approach would be sufficient in case you wouldn't need to store a complete history of changes made to the transaction, because once you made a change to the transaction, you would either lose the former data or create a new record with a new id_transaction, which is not a good practice in my opinion. I will show you the way I prefer for storing a complete history of record changes in some future post. Always think forward. There are lots of other use cases where a Composite Primary Key is the best way to go when designing your new table, but be aware that there are also lots of cases where it is not appropriate. It complicates things a bit, and hence you should always think forward about what data you are storing: do you want to store the data in one table with a multi-column identifier, or store the data in multiple tables? It depends.
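A quick sketch of the transactions table using SQLite shows the behavior described above; the column names follow the post, and the amounts reproduce its 4 000 pool example (this is illustrative only, not the schema of any particular vendor):

```python
import sqlite3

# In-memory SQLite database with the post's transactions table.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE transactions (
        id_transaction   INTEGER NOT NULL,
        id_transaction_2 INTEGER NOT NULL,
        amount           INTEGER NOT NULL,
        PRIMARY KEY (id_transaction, id_transaction_2)  -- composite primary key
    )
""")

# Pool transaction 6: three sub-transactions share the main identifier.
rows = [(6, 1, 2000), (6, 2, 1200), (6, 3, 800)]
conn.executemany("INSERT INTO transactions VALUES (?, ?, ?)", rows)

# Uniqueness holds only for the combined key: reusing (6, 1) is rejected ...
try:
    conn.execute("INSERT INTO transactions VALUES (6, 1, 999)")
except sqlite3.IntegrityError as e:
    print("rejected duplicate composite key:", e)

# ... while the pool's total can be summed over the shared main identifier.
total = conn.execute(
    "SELECT SUM(amount) FROM transactions WHERE id_transaction = 6"
).fetchone()[0]
print(total)  # 4000
```

Inserting (6, 4, ...) would still succeed, since only the full (id_transaction, id_transaction_2) pair must be unique.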
(Forgive the research paper-wannabe structure of this post. Skip to the proposal if you want to cut straight to the point.) Though I would love it if this was added to the actual game, I know this is more in the realm of a mod. There was a nice thread that was lost in that forum shuffle awhile back. We discussed allowing players to play the game without needing to be tied to cities. Players would act as mercenaries by working for other factions, using diplomacy and bartering for items/units/tech, and using exploration more to their advantage to find items/gold/quests. So instead of building cities as the only means to progress in the game, players can also use diplomacy and exploration as their main drive. Right after reading the OP I went to the bathroom. You know what happens when you have a nerdish idea stuck in your head while taking a big one. I realized not only how kickass this could be, but how it could "easily" be implemented with a few "simple" additions. Currently, playing without cities means there is no way to gain research points (unless you have meditation). It is virtually impossible to win with any victory condition. The fix isn't a complicated one: allow players to gain research points without the need of cities. Mind you, I think playing without cities should be a lot harder than with cities, and should act almost as a higher difficulty setting to give players a challenge... and a fun and different gameplay option as well. Once there is a way to gain research points outside of cities, the rest falls into place. Players can advance in diplomacy and adventure (and maybe dabble in some magic). This allows the player to diplomatically interact with other factions in order to trade for items/gold/tech in a mercenary fashion. The player is completely free to start exploring the map with an elite group of adventurers/mercenaries that the player has assembled via recruiting heroes/champions and acquiring units from factions with diplomacy.
How exactly this is done can take many creative forms. The simplest would be to allow players to randomly find "lost scrolls" (or via quests) that give them a small amount of research points to use on techs. Something even more concrete (and perhaps essential in order for this to work): allow players to use essence at an ancient ruin in order to 'absorb' the 'mental imprints' of the past inhabitants. This gives players a reliable way to gain research. Also, perhaps allow players to choose a single low-tier tech at the sovereign creation screen. Then city-less players can start the game with their feet running. Eventually city-less players will need to start interacting with city-based factions so they can attain new techs and gold from them as mercenaries. The way I see it playing out: at the start of the game, the player goes straight into exploring and leveling up, instead of building cities. Hopefully the player bought a tech at the creation screen to start off with (best would be something in adventure or diplomacy). The player then starts doing newbie quests, finding lost tech, killing newbie mobs and going to other factions to barter their loot (and maps) for additional units, treaties and tech. NPC faction AI should recognize that the player has no cities and treat them slightly differently (a bit condescending, but more relaxed and less cautious). In this state, the city-less player is at a big disadvantage since they are beholden to other factions. They produce nothing on their own... and I think it should be fun :) Eventually the player would start doing larger quests, amassing more elite units, gathering small bands of champions, and conducting larger-scale operations (relatively). A 'conquest' victory is out of the question, and probably 'spell of making' as well, but I see a 'diplomacy' or 'master quest' victory as a good possibility.
Maybe add in a new victory mode called 'annihilation' that is triggered when all other sovereigns/factions (or a certain percentage or type) are destroyed. This is something a rampaging elite mercenary army (supported by larger paymasters) could accomplish. In a multiplayer 1v1, the player without cities should get steamrolled if they try to go up against the other player directly. But it will create an interesting 'cat and mouse' game. With many NPC factions, one player can hide behind NPC sponsor(s), while the other player tries to destroy him and his paymasters. Or the city-less player forgoes trying to do any sort of military operation and dedicates himself to winning via master quest... constantly running away from the other player. So now there is a situation where one player is desperately trying to complete quests, while the other player hunts him/her down across the map in order to stop him from finishing the quest (can you say epic). This might be too much to balance perfectly, but I say avoid trying to balance it altogether and simply make playing without cities much harder (but still interesting). Again, this might just be mod material, but I can see it "easily" being added to the game if it becomes popular enough.
Caching YouTube videos across loads. Is there a way to force the browser to cache loaded YouTube videos across page loads? In other words, if you refresh some video page in which the video already started loading (or is fully loaded), the player/browser starts redownloading the video all over, even when using the same resolution, instead of using the version already in cache. Is there a way to change that behavior? The above is observed when using both the Flash player and the experimental HTML5 player. — Odd you ask this, as I know in IE, or at least the version I have on my other computer (which is having networking problems at the moment, so it's out of use), YouTube already does this. I guess it's dependent on cache size. Is your cache size set to a value large enough to accommodate videos? — Edit: the things I've written below hold for the HTML5 version (in which the browser controls everything); in the Flash version, Flash settings may be important as well. Unfortunately, regarding YouTube in particular, the case is very complicated (perhaps it's simpler on other video sites). First, regarding Firefox caching audio/video files, these about:config settings may affect things: browser.cache.disk.enable browser.cache.memory.enable browser.cache.disk.capacity browser.cache.disk.max_entry_size browser.cache.memory.max_entry_size You need to have at least one of the two kinds of caches enabled, an appropriate cache size set, plus additionally a high enough max_entry_size. Initially, max_entry_size is not very high; this makes sense, as you generally don't want to wipe half of your cache in order to store an HD VEVO video. Ok, so the browser side is fine. The next step is server-side cache restrictions. I've opened a random YouTube video (user-uploaded, not copyrighted stuff; it may differ, but I haven't checked), and here are the response headers of the FLV file (taken with Fiddler): Cache-Control: private means the file can be cached in your browser, but not by any intermediary caches (e.g.
ISP cache). If Expires and max-age are both specified, max-age wins: 14.9.3 Modifications of the Basic Expiration Mechanism: "If a response includes both an Expires header and a max-age directive, the max-age directive overrides the Expires header, even if the Expires header is more restrictive." So far so good; max-age means we are allowed to cache it locally in the browser for ~6 hours. But let's refresh the page or load it in a new tab and compare the list of HTTP requests: http://o-o---preferred---sn-vg5obx-hgnl---v12---lscache5.c.youtube.com/videoplayback?upn=mY2b-T1WqcI&... http://o-o---preferred---sn-vg5obx-hgnl---v12---lscache5.c.youtube.com/videoplayback?upn=U175csZ9oyw&... It seems that YouTube adds params either to track the number of plays, to fight abuse (to have non-predictable video URLs), or whatever. When opening in a new tab, I've sometimes even seen different servers targeted (load balancer at work): http://o-o---preferred---sn-vg5obx-hgnl---v12---lscache5.c.youtube.com/videoplayback?.. http://o-o---preferred---sn-25g7rn7s---v12---nonxt5.c.youtube.com/videoplayback?.. It's the same with Flash-based and HTML5-based video watching. Because the URLs are different (even if only by one character), the browser needs to redownload the whole video. Why are different video URLs being targeted each time? This is because the URLs that the videos are requested from, like these: http://www.youtube.com/watch?v=[[videoid]] http://youtube.googleapis.com/v/[[videoid]] have a response header Cache-Control: no-cache, which means the browser is not allowed to cache this page at all; every time, a request to the server is needed, and the server responds with a new 200 OK response and different params to be used to query for the video. Other video sites may not be that restrictive, and hence you can have the video cached between loads. -- I've noticed an even more interesting thing when opening a video (it was ~5 MB) in IE8. In Firefox, the whole video is loaded as one stream.
In IE8, it's sent as three ~1.7 MB chunks. Perhaps some internal IE thing, in that it can't handle big files nicely. -- How to enable caching? One could write an addon for Firefox, or a Fiddler script, which will strip or replace the appropriate cache-related headers in YouTube HTTP responses, to cheat the browser about what it's allowed to do. Then the browser will cache more aggressively and keep the video across loads, given all the other requirements are satisfied.
from iconservice import *


class Poll(object):
    _VOTERS = "VOTERS_"
    _VOTERS_CHOICE = "VOTERS_CHOICE_"

    def __init__(self, obj: dict, db: IconScoreDatabase) -> None:
        self.__id = obj['id']
        self.__name = obj['name']
        self.__question = obj['question']
        self.__answers = self.addAnswers(obj['answers'])
        self.__time_frame = obj['time_frame']
        self.__initiator = obj['initiator']
        # Per-poll on-chain arrays: voter addresses and, index-aligned, their choices.
        self.__voters = ArrayDB(f"{self._VOTERS}{self.__id}", db, value_type=str)
        self.__voters_choice = ArrayDB(f"{self._VOTERS_CHOICE}{self.__id}", db, value_type=int)

    def addAnswers(self, answers: list) -> list:
        # Normalizes answers given either as plain values or as dicts
        # with explicit 'id'/'name' keys.
        temp_list = []
        for ans in answers:
            new_answer = dict()
            new_answer["id"] = len(temp_list) if 'id' not in ans else ans['id']
            new_answer["name"] = ans if 'name' not in ans else ans['name']
            temp_list.append(new_answer)
        return temp_list

    def vote(self, answer_id: int, sender_address: str) -> None:
        self.__voters.put(sender_address)
        self.__voters_choice.put(answer_id)

    def getId(self) -> int:
        return self.__id

    def getAnswers(self) -> list:
        return self.__answers

    def getAnswerById(self, id: int):
        for answer in self.__answers:
            if answer["id"] == int(id):
                return answer

    def getName(self) -> str:
        return self.__name

    def getPollStartBlock(self) -> int:
        return int(self.__time_frame['start'])

    def getPollEndBlock(self) -> int:
        return int(self.__time_frame['end'])

    def serialize(self) -> dict:
        return {
            "id": self.__id,
            "name": self.__name,
            "question": self.__question,
            "answers": self.__answers,
            "time_frame": self.__time_frame,
            "initiator": self.__initiator,
        }

    @staticmethod
    def removeVotes(poll_id: int, db: IconScoreDatabase) -> None:
        voters = ArrayDB(f"{Poll._VOTERS}{poll_id}", db, value_type=str)
        voters_choice = ArrayDB(f"{Poll._VOTERS_CHOICE}{poll_id}", db, value_type=int)
        # Pop both arrays in lockstep so they stay index-aligned.
        while voters:
            voters.pop()
            voters_choice.pop()

    @staticmethod
    def exportVotes(poll_id: int, iconService: IconScoreBase) -> dict:
        voters = ArrayDB(f"{Poll._VOTERS}{poll_id}", iconService.db, value_type=str)
        voters_choice = ArrayDB(f"{Poll._VOTERS_CHOICE}{poll_id}", iconService.db, value_type=int)
        # Map each voter address to {choice: current ICX balance of that voter}.
        votes = dict()
        for it in range(len(voters)):
            votes[voters[it]] = {voters_choice[it]: iconService.icx.get_balance(Address.from_string(voters[it]))}
        return votes
//
//  DateCell.swift
//  TravelingSalesman
//
//  Created by Dennis Broekhuizen on 16-01-18.
//  Copyright © 2018 Dennis Broekhuizen. All rights reserved.
//
//  Custom cell used in PlanRouteViewController to choose a date for a route.

import UIKit

class DateCell: UITableViewCell {
    var viewController: PlanRouteViewController?

    @IBOutlet weak var dateLabel: UILabel!
    @IBOutlet weak var datePickerView: UIDatePicker!

    // Update the label and notify the owning view controller when the picker changes.
    @IBAction func datePickerChanged(_ sender: UIDatePicker) {
        dateLabel.text = Route.dateFormatter.string(for: datePickerView.date)
        viewController?.date = dateLabel.text
    }
}
Metrics Collector for Apache Cassandra (MCAC) is the key to providing useful metrics for K8ssandra users. MCAC is deployed to your Kubernetes environment by K8ssandra. If you haven’t already installed K8ssandra, see the install topics. MCAC aggregates OS and Cassandra metrics along with diagnostic events to facilitate problem resolution and remediation. K8ssandra provides preconfigured Grafana dashboards to visualize the collected metrics. About Metric Collector Built on collectd, a popular, well-supported, open source metric collection agent. With over 90 plugins, you can tailor the solution to collect metrics most important to you and ship them to wherever you need. Cassandra sends metrics and other structured events to collectd over a local Unix socket. Fast and efficient. MCAC can track over 100k unique metric series per node. That is, metrics for hundreds of Cassandra tables. Comes with extensive dashboards out of the box. The Cassandra dashboards let you aggregate latency accurately across all nodes, dc or rack, down to an individual table. - Little or no performance impact to Cassandra - Simple to deploy via the K8ssandra install, and self managed - Collect all OS and Cassandra metrics by default - Keep historical metrics on node for analysis - Provide useful integration with Prometheus and Grafana Supported versions of Apache Cassandra: 2.2+ (2.2.X, 3.0.X, 3.11.X, 4.0) Sample overview metrics in Grafana Cassandra node-level metrics are reported in the Prometheus format, covering everything from operations per second and latency, to compaction throughput and heap usage. Example: Sample OS metrics in Grafana Sample cluster metrics in Grafana Let’s walk through this architecture from left to right. We’ll provide links to the Kubernetes documentation so you can dig into those concepts more if you’d like to. The Cassandra nodes in a K8ssandra-managed cluster are organized in one or more datacenters, each of which is composed of one or more racks. 
Each rack represents a failure domain, with replicas being placed across multiple racks (if present). In Kubernetes, racks are represented as StatefulSets. (We'll focus here on the details of the Cassandra node related to monitoring.) Each Cassandra node is deployed as its own pod. The pod runs the Cassandra daemon in a Java VM. Each Apache Cassandra pod is configured with the DataStax Metrics Collector for Apache Cassandra, which is implemented as a Java agent running in that same VM. The Metrics Collector is configured to expose metrics on the standard Prometheus port (9103). One or more Prometheus instances are deployed in another StatefulSet, with the default configuration starting with a single instance. Using a StatefulSet allows each Prometheus node to connect to a Persistent Volume (PV) for longer-term storage. The default K8ssandra chart configuration does not use PVs. By default, metric data collected in the cluster is retained within Prometheus for 24 hours. An instance of the Prometheus Operator is deployed using a Replica Set. The kube-prometheus-stack also defines several useful Kubernetes custom resources (CRDs) that the Prometheus Operator uses to manage Prometheus. One of these is the ServiceMonitor. K8ssandra uses ServiceMonitor resources, specifying label selectors to indicate the Cassandra pods to connect to in each datacenter, and how to relabel each metric as it is stored in Prometheus. K8ssandra provides a ServiceMonitor for Stargate when it is enabled. Users may also configure ServiceMonitors to pull metrics from the various operators, but pre-configured instances are not provided at this time. AlertManager is an additional resource provided by kube-prometheus-stack that can be configured to specify thresholds for specific metrics that will trigger alerts. Users may enable and configure AlertManager through the values.yaml file. See the kube-prometheus-stack example for more information. An instance of Grafana is deployed in a Replica Set.
The GrafanaDataSource is yet another resource defined by kube-prometheus-stack, which is used to describe how to connect to the Prometheus service. Kubernetes config maps are used to populate GrafanaDashboard resources. These dashboards can be combined or customized. Ingress or port forwarding can be used to expose access to the Prometheus and Grafana services external to the Kubernetes cluster. Where is the list of all Cassandra metrics? The full list is located on the Apache Cassandra docs site. The names are automatically changed from CamelCase to snake_case. How can I filter out metrics I don't care about? Please read the metric-collector.yaml section in the MCAC GitHub repo on how to add filtering rules. What is the datalog? And what is it for? The datalog is a space-limited, JSON-based structured log of metrics and events which is optionally kept on each node. It can be useful for diagnosing issues that come up with your cluster. If you wish to use the logs yourself, there's a script included in the MCAC GitHub repo to parse these logs, which can then be analyzed or piped into jq. Alternatively, we offer free support for issues, and these logs can help our support engineers diagnose your problem. - For details about viewing the metrics in Grafana dashboards provided by K8ssandra, see Monitor Cassandra. - See the topics covering other components deployed by K8ssandra. - For information on using other deployed components, see the Tasks topics.
Calendar of meetings and events relevant to the U.S. ice drilling science and technology communities. Greenland Ice Sheet Stability: Lessons from the Past Tackling the topic of Greenland Ice Sheet stability requires input from a range of disciplines that encompass both paleodata generation (ice and climate history) and numerical ice sheet modeling. We wish to gather a community of diverse experts, including early career scientists, to bring different datasets and approaches together to see if consensus can be reached on the current state of knowledge of Greenland Ice Sheet history and sensitivity to climate forcing. The goal of this workshop is to (a) synthesize the current state of knowledge and (b) develop key research priorities that will help guide future efforts to make significant traction on the problem of Greenland Ice Sheet stability. The aim of the workshop organizers is to work with the community on a manuscript to be submitted following the workshop. Camilla Andresen, Andreas Born, Jason Briner, Heiko Goelzer, Kelly Hogan, Robert Law, Kerim Nisancioglu, Therese Rieckh EGU General Assembly 2023 The EGU General Assembly 2023 brings together geoscientists from all over the world to one meeting covering all disciplines of the Earth, planetary, and space sciences. The EGU aims to provide a forum where scientists, especially early career researchers, can present their work and discuss their ideas with experts in all fields of geoscience. Ice Core Early Career Researchers Workshop (ICECReW) 2023 The Ice Core Early Career Researchers Workshop (ICECReW) is a professional development workshop for early-career researchers. It will be held in-person at the University of Washington on May 7-8, 2023, prior to the start of the 2nd US Ice Core Open Science Meeting (being held May 8-10). ICECReW participants will meet with established researchers to better understand the processes involved in envisioning, planning, and funding ice core projects. 
This year's workshop focuses on encouraging collaboration and generating proposal ideas. ICECReW is intended for ECRs whose work contributes to the drilling, processing, or interpretation of ice core data. Application Deadline: February 10, 2023. 2nd Annual US Ice Core Open Science Meeting The second annual US Ice Core Open Science Meeting will be held May 8-10, 2023, at the beautiful Center for Urban Horticulture at the University of Washington in Seattle, WA. This meeting is intended for anyone interested in ice core science or related fields, including ice-core analysis, ice or subglacial drilling, glacier geophysics that supports or depends on ice core records, paleoclimate, and contemporary climate and ice sheet change. Goals of the meeting include 1) sharing the latest science, 2) discussing future ice core science projects in both polar regions and in alpine environments, 3) providing career development opportunities, and 4) improving communication about ice-core and related science both within and beyond the scientific community. We hope to attract a diverse group of participants, including those who may not have extensive experience working with ice cores. While this meeting is primarily oriented at researchers in the US, international colleagues are welcome to attend. The meeting will begin midday on Monday, May 8, and end in the late afternoon of Wednesday, May 10. The meeting will be preceded by an ICECReW pre-meeting workshop for early career researchers focused on proposal development and intentional collaboration (more details soon). Details on hotel rooms, travel support, and other aspects of the meeting will be publicized in February. To ensure you do not miss announcements, we recommend joining the Hercules Dome mailing list. See you in Seattle next Spring, Organizing Committee: Cate Bruns, Seth Campbell, T.J. 
Fudge, Kaitlin Keegan, Bess Koffman, Heidi Roop 2nd US Antarctic Science Meeting June 20-23, 2023 On line (Zoom) conference hosted by the US Scientific Committee on Antarctic Research (US-SCAR) Meeting details with links for meeting registration and abstract submission at the US-SCAR website. No registration fee. US-SCAR is supported by funding from the NSF/Office of Polar Programs/Antarctic Sciences In 2021 the US Scientific Committee on Antarctic Research (US-SCAR) hosted the first US Antarctic Science Meeting. We are now announcing the call for abstracts and registration for the 2nd US Antarctic Science Meeting (June 20-23, 2023). The US Antarctic Science Meetings are for US scientists who are conducting research in, from or about Antarctica and the Southern Ocean. Scientists interested in getting involved in Antarctic research through US programs are also welcome and encouraged to attend. This conference is open to all US scientists and anyone interested in US Antarctic research. The meeting will provide opportunities for US Antarctic scientists to get together and present their work, and for early career researchers and others new to Antarctic science to learn about SCAR and the various resources available to US scientists for Antarctic-related research. There will be a mix of Lightning Talks, panels and social activities for the US Antarctic community to meet and interact. The meeting and associated events will be on Zoom. The schedule is set for two hours each day with additional time added for socializing. The panels will have brief presentations by panelists, and ample time will be devoted to questions and discussion. It will be a Zoom meeting, not a webinar. If you have any questions, please contact Deneb Karentz email@example.com, US Delegate to SCAR. Hope to “see” you in June! 
Sincerely, Deneb and the US-SCAR Team US Delegate, Scientific Committee for Antarctic Research (SCAR) SCAR Vice President for Science Professor, Department of Biology and Department of Environmental Science University of San Francisco 2130 Fulton Street San Francisco, CA 94117-1080 International Symposium on the Edges of Glaciology The conference will focus on edges in glaciology, which may include: beds of ice sheets or glaciers, supraglacial processes, crevasses, calving, grain boundaries (of ice and snow crystals), and perhaps philosophical edges. If you are hoping to attend the symposium please indicate your ‘Expression of Interest’. This will ensure you will receive all information relevant to the symposium. The First Circular is now available as a PDF. 15th International Conference on the Physics and Chemistry of Ice (PCI-2023) This conference will cover a wide range of topics related to physical, chemical, biological, geological, and environmental aspects of ice. The topics will range from fundamental to applied research, and will include laboratory, field, modeling, and computational work. We expect to have interdisciplinary discussions of ice. Session topics include: - Surfaces and interfaces of ice - Mechanical, dielectric, and optical properties of ice - Ice phases, amorphous ice, and glass transition - Ice and life - Reactions on/in ice - Ice and snow in the cryosphere - Ice in space - Clathrate hydrates 2023 WAIS Workshop The West Antarctic Ice Sheet (WAIS) Workshop Organizing Committee is pleased to announce that the 2023 WAIS Workshop will take place September 25-28, 2023, at the University of Minnesota's Cloquet Forestry Center just outside Duluth, MN, USA. Sponsored by the National Science Foundation and NASA, the workshop focuses on marine ice-sheet and adjacent earth systems, with particular emphasis on the West Antarctic Ice Sheet. As a reminder, recordings from the 2022 WAIS Workshop are available on YouTube. 
Previous years' talks are also online and past agendas are available at waisworkshop.org. The website will be updated with additional information about WAIS 2023 as it becomes available. To stay up-to-date on all things WAIS Workshop, sign up for the email distribution list here.
OPCFW_CODE
I played with the timer registers to change the clock speed and verified that the output pin was toggling at the rate I'd expect. So then it was time to use this to time the display update. When I previously mentioned that I was using Timer0, this was incorrect. I was using Timer1, the 16-bit timer. For timing, I grabbed the timer count before the display loop and then grabbed it again at the end. Subtracting these, I came up with the elapsed counts that the display loop took. I sent this out the serial port. I also inserted a 100 ms delay between updates so that I wouldn't overrun things. Here's a screenshot of the data I received: The first number is the elapsed count, the second number is the starting count and the third is the ending count. I was at first having some difficulty getting the AVR to do math right. When it subtracted the start from the end the result was always the same as the end value. ??? I looked into that a bit, and then it started working. I'm not sure what I did differently. Anyway, the clock speed is 16 MHz. The above timings were with the clock select bits set to 1/64. So 16,000,000/64 = 250,000 Hz, which is 4 microseconds per cycle. 0.000004 * 35698 = 142.8 milliseconds. That's kind of a long time. I verified that the counter wasn't wrapping around in 2 ways. First, I logged the data into RealTerm, appending a timestamp. The values came in about every 1/4 second. Subtracting the 100 ms delay, that works out to close to 142 ms. Second, I changed the divider to 1/256. At this speed it would take a little over a second to wrap the 16 bit counter, and I was definitely getting my data at faster than once a second. So 143 ms it is. That's pretty slow. About 7 times per second if that's all I'm doing. But I also need to be receiving data from the GPS, reading the control inputs, and perhaps managing an acceleration or deceleration.
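The count-to-time arithmetic above can be sketched in plain C. This is just the math, not AVR register code, and the function name is mine; note that doing the subtraction in the 16-bit type gives correct results even when the counter wraps between the two reads:

```c
#include <stdint.h>

/* Convert an elapsed Timer1 count into microseconds, given the CPU clock
 * and prescaler from the post (16 MHz with clock select = /64, i.e. 4 us
 * per tick). Plain C sketch of the arithmetic -- not AVR register code. */
static uint32_t ticks_to_us(uint16_t start, uint16_t end,
                            uint32_t f_cpu, uint32_t prescaler)
{
    uint16_t elapsed = (uint16_t)(end - start); /* 16-bit wraparound is free */
    return (uint32_t)elapsed * prescaler / (f_cpu / 1000000UL);
}
```

For the measured loop, `ticks_to_us(0, 35698, 16000000UL, 64)` gives 142792 microseconds, matching the ~142.8 ms (about 7 updates per second) worked out above.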
Since my cruise control needs 1-2 seconds between button presses, that shouldn't need much processing time or a quick response rate. I'm not sure how long parsing the GPS data will take. I'll need to set up an ISR to read the serial port and place the data in a buffer, but that's probably a good thing to do anyway. And if I can set up interrupts for the control pins, I should be okay. But this does suggest that I could go another route. I could have one AVR control the display, and another to parse the GPS data and handle the control inputs. I do have a little concern about the length of wire I'll need if I use just one. I want the display to be up in my rearview mirror, but the controls will be down between the two front seats. There's probably at least a 10 ft run between those two. Will electrical noise interfere with the signals from the rotary encoder? I'm pretty sure I couldn't get away with putting the OLED display at the end of a 10 ft long wire and get IIC to work. While I think through those things, I'm going to move on to parsing the GPS data.
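The ISR-fills-a-buffer plan mentioned above usually comes down to a small ring buffer. Here's a minimal sketch in plain C; on the AVR, `uart_rx_put()` would be called from the USART receive ISR and `uart_rx_get()` from the main loop, and all names here are illustrative:

```c
#include <stdint.h>

/* Minimal byte ring buffer for buffering incoming GPS serial data.
 * Power-of-two size makes the wraparound a cheap bitwise AND. */
#define RX_BUF_SIZE 64

static volatile uint8_t rx_buf[RX_BUF_SIZE];
static volatile uint8_t rx_head, rx_tail;

/* Called from the receive ISR: store one byte, or drop it if full. */
static int uart_rx_put(uint8_t b)
{
    uint8_t next = (uint8_t)((rx_head + 1) & (RX_BUF_SIZE - 1));
    if (next == rx_tail) return 0;   /* buffer full: drop the byte */
    rx_buf[rx_head] = b;
    rx_head = next;
    return 1;
}

/* Called from the main loop: fetch one byte if any is waiting. */
static int uart_rx_get(uint8_t *b)
{
    if (rx_head == rx_tail) return 0;          /* empty */
    *b = rx_buf[rx_tail];
    rx_tail = (uint8_t)((rx_tail + 1) & (RX_BUF_SIZE - 1));
    return 1;
}
```

The main loop can then pull bytes out at its leisure (between display updates) and feed them to the NMEA parser without ever missing a character.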
OPCFW_CODE
How time flies! It has been quite a while since I last updated my blog! Sorry about that, especially to everyone who sent me emails or MSN messages (there were so many, and I was too busy to reply one by one, so I just want to apologize to anyone I didn't reply to)! These days I have been busy with work, and I have also been thinking about changing my career and deciding whether to stay in Singapore or go back to China. So I had some interviews that I found very interesting, and I want to share them with anyone who wants to learn from the experience; this is also a record for myself. Here is one of them, details below: The First Interview Before I start, I would specifically like to say a massive thanks to the company's HR and technical staff, without whom this article would never have been possible. The company I applied to is a world leader in creating interactive applications for connected devices. They work with leading consumer electronics companies such as Apple, Samsung, LG, Panasonic, and Philips. The online service providers and media/sports companies they work with include Google, Facebook, ESPN, the NBA, CNBC, and Twitter. They wanted Silverlight/.NET architects and developers for Xbox projects (a good fit for me, since I was mainly looking for Senior Software Developer/Team Lead/Architect roles focused on WPF/Silverlight/cloud computing). The ability to travel is not a requirement, but staff who are interested have the opportunity to work overseas at Microsoft Redmond or at the media companies' locations: California, Stockholm, Hong Kong, Madrid, and Mountain View (this is one of the reasons I wanted to join).
The next day, HR sent me an email asking for some basic information, and we scheduled the interview for the following Monday (that day was a public holiday: Hari Raya Puasa, which Muslims in Singapore celebrate to mark the end of their month of fasting). Before the interview, I installed Skype, set up all my equipment, and waited for the interviewer. Around 11 AM the interviewer started a video call with me and asked about basic details such as my career overview, technical skills, experience, academic profile, and a summary of my project experience (judging from her accent, she may have been from Europe). The Second Interview About half an hour later, as discussed over Skype, I was required to take a written test to evaluate my understanding of Silverlight. It was not an open-book test, and I had only 10 minutes to complete it and return the paper to the interviewer (so there was no time to deliberate; I had to finish as fast as possible). 1. The following code prints out integers stored in memory. Please modify it so that the integers are printed in ascending order. Feel free to use any class, method, etc. provided by .NET Framework 3.5. My answer is below: 2. Referring to the following piece of code: it gets the content of a web page and prints it to the screen. Please suggest why the output is only: "Got the following data:" My answer is below: because Silverlight's web requests are asynchronous, the line runs before the download has completed. 3. What is the purpose of DataContext in MVVM? Like MVP, the Model-View-ViewModel (MVVM) pattern splits the user interface code into three conceptual parts, Model, View, and ViewModel, of which the ViewModel is the new and most exciting concept. The Model is a set of classes representing the data coming from services or the database. The View is the code corresponding to the visual representation of the data, the way it is seen and interacted with by the user. The ViewModel serves as the glue between the View and the Model.
It wraps the data from the Model and makes it friendly for presentation and modification by the View. The ViewModel also controls the View's interactions with the rest of the application (including any other Views). The DataContext is the bridge between View and ViewModel: the View binds to whatever object is set as its DataContext, and we usually use MEF or some IoC container to wire it up. We have finished three projects with MVVM and one project with MVP. 4. Describe the concept of Inversion of Control (IoC). We have done many projects with IoC. As I understand it, Inversion of Control is an object-oriented programming practice where object coupling is bound at run time by an assembler object and is typically not knowable at compile time using static analysis. We used Unity and Spring.NET in our projects. 5. How would you trigger a storyboard from a ViewModel? In our first project we used messengers (the View and ViewModel use a simple publish/subscribe model to allow loosely coupled messaging), and later we found the approach below: The Third Interview After completing the written test, and moving into the next phase of the recruitment process, I was required to take a simple programming test to verify my technical ability. The task was to build a memory game and a server for players' high scores. They did not expect me to know all the coding techniques needed to build the game, and noted: "Please feel free to do research, find references, and learn from the internet." The judging criteria, in descending order of importance, were quality, delivery time, and creativity. I was also required to provide detailed instructions on how to set up the game and server, and to develop the application using Silverlight. The details of the task were in the enclosed package, and the finished work was to be packaged up and returned to the interviewer. The task details are below: The goal of this work sample is to construct a game called "Colour Memory". The game board consists of a 4x4 grid, 16 slots in all. All slots contain cards face-down.
The player flips two of these face-up each round, trying to find a matching pair. If the two cards are equal, the player receives one point and the cards are removed from the game board. Otherwise, the player loses one point and the cards are turned face-down again. This continues until all pairs have been found. After the game is finished, the user is required to input his/her name and email. The user's details and score are then submitted to the database, and the user is notified of the high scores and his/her position in the rankings.
Requirements:
1. The application must be developed with Silverlight.
2. The application must have at least one customized control in Silverlight.
3. The application must have a high-score data provider.
4. The developer must provide instructions on how to set up the game and server.
5. The application should be blendable.
6. The application should have sample data and a provider which can be easily switched with the high-score data provider.
7. The application should implement unit testing wherever applicable.
8. The application should be controllable by only the arrow keys (to navigate) and Enter (to select), except for operations inside the input field.
9. The application should follow the design template illustrated on the last page.
10. The high-scores server is suggested to be developed in PHP and MySQL.
Design notes:
1. The game should fit entirely inside a 720x576 area.
2. The game info area should contain information about the current game session, the current score for instance. Be creative.
3. The graphics for the cards and the logotype have been supplied.
4. All other graphical elements are up to you to decide upon.
5. The restart button should start a new round when selected.
6. There has to be a way to signal to the player that the game is over when all pairs have been found; ideally, this includes an option to play a new round.
The implementation of Colour Memory should be delivered in a compressed archive containing all necessary files and resources.
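The flip-two-cards scoring rule in the task is language-agnostic. Here is a minimal sketch of just the game model, written in Python for brevity (the task itself calls for Silverlight/C#, and all names here are illustrative):

```python
import random

class ColourMemory:
    """Minimal model of the 4x4 Colour Memory board: flip two cards per
    round; a match scores +1 and removes the pair, a miss scores -1."""

    def __init__(self, colours, rng=random.Random(0)):
        cards = list(colours) * 2              # 8 colours -> 16 cards
        rng.shuffle(cards)
        self.board = dict(enumerate(cards))    # slot index -> colour
        self.score = 0

    def flip_pair(self, a, b):
        """Flip slots a and b; return True on a match."""
        if a == b:
            raise ValueError("must flip two different slots")
        if self.board[a] == self.board[b]:
            del self.board[a], self.board[b]   # remove the matched pair
            self.score += 1
            return True
        self.score -= 1                        # cards turn face-down again
        return False

    @property
    def finished(self):
        return not self.board                  # game over: all pairs found
```

In the real application this model would sit behind the ViewModel, with the View binding to `score` and to the remaining slots; keeping the rules out of the UI layer is also what makes requirement 7 (unit testing) straightforward.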
Also, instructions on how to install and start the game are expected. After I confirmed that I had received the test, I started on it using WPF, and my WPF version was finished three hours later. Details below: The XAML code: Core code (non-MVVM version): After finishing the WPF version, I found it was still early, so I went on to finish a WPF MVVM version and a Silverlight version. If any of you are interested in this game's source code, I will post it to CodePlex later. These interviews gave me some experience. HR and managers always asked these questions besides my basic information (career overview, technical skills, experience, academic profile, summary of project experience, etc.):
1. What two or three things are most important to you in your job?
2. What responsibilities did you like most and least in your last job? Why?
3. Give me a specific example of when you faced a particularly difficult problem. What steps did you take to resolve it?
4. What characteristics do you feel distinguish a great software engineer? Give me an example from your past which you feel demonstrates those qualities in you.
5. Describe one example where your actions led to improved quality of your organisation's product or service.
6. What do you feel is the most effective way to keep up to date with the latest technologies? Why?
7. What do you see yourself doing five years from now?
8. What makes you stand out / why should we hire you?
For the technical skills, you should prepare some questions in advance, because interviewers' accents differ a lot (US, Singapore, India, the UK); if you cannot follow them, the interview will be a nightmare. From the interviews I have done, they like to ask the questions below:
1. What is WPF/Silverlight? Why is it used?
2. What is XAML?
3. What is a DispatcherObject?
4. What is a DependencyObject?
5. What is the architecture of WPF/Silverlight?
6. What are the types of events, and what is event routing or a routed event?
7.
What is the difference between event bubbling and event tunneling? When do you apply which?
8. What are the differences between WinForms, WPF, ASP.NET, ASP.NET MVC, and Silverlight? When do you choose which?
9. What are the different types of panels in WPF/Silverlight? Explain them.
10. What is the difference between StackPanel, DockPanel, WrapPanel, and Grid?
11. What are primitive controls and lookless controls?
12. What are the important components to know in WPF/Silverlight? Explain them.
13. Why do you think WPF/Silverlight is more powerful?
14. How did you use DependencyObject? What is the purpose of Freezables? Visual vs. logical tree?
15. Give an example of an attached behavior.
16. What are MVC, MVP, and MVVM? Did you use PRISM? What are the advantages of using it in real applications?
17. If you used MVVM, how do you allow the View to communicate lifecycle events to a ViewModel without any hard references?
18. If you used MVVM, why do you use attached behaviors, and how do you do numeric text entry?
19. If you used MVVM, how do you allow the ViewModel to determine whether a Model's data should be editable?
20. If you used MVVM, how do you show a MessageBox service or a popup window service?
21. If you used MVVM, how would you trigger a storyboard from a ViewModel?
22. WPF/Silverlight performance tuning.
Finally, today is Sunday, so let's enjoy a picture and have some fun!!!
OPCFW_CODE
GROUPER: A DYNAMIC CLUSTERING INTERFACE TO WEB SEARCH RESULTS
Dr. AMBEDKAR INSTITUTE OF TECHNOLOGY, BANGALORE-56

The problem with search engines
- Search engine results are not easy to browse: search engines return a long, ordered list of documents (ranked-list presentation).
- Users are forced to sift through the list to find relevant documents, which wastes time.

Proposed solution and goals
- An alternative method for organizing retrieval: algorithms group the documents based on their content, so relevant documents are easy to locate and the user gets an overview of the retrieved document set.
- Post-retrieval document clustering: clusters are computed based on the returned document set, so cluster boundaries appropriately partition the set of documents at hand.
- Pre-retrieval document clustering: offline clustering performed in advance on the collection as a whole; clusters might be based on features infrequent in the returned set.
- Search engines face severe resource constraints and cannot dedicate enough CPU time to each query, so clustering at query time on the server is not feasible; clusters would have to be pre-computed.

How Grouper works
- Grouper is a clustering interface to the HuskySearch meta-search service. HuskySearch is based on MetaCrawler and retrieves results from several popular web search engines.
- Grouper clusters results using the STC algorithm, which addresses the scalability issue: it places no additional resource demands on the search engines, runs on the client machine, and is suitable for distributed IR systems.
- Goals: group similar documents together; cluster descriptions must identify clusters when appropriate.
- Clustering can be done in two ways: a) cluster the snippets, or b) download and cluster the documents.

Overview of the STC algorithm
- A linear-time clustering algorithm based on identifying phrases common to groups of documents.
- PHRASE: an ordered sequence of one or more words. BASE CLUSTER: a set of documents that share a common phrase.
- STC has three logical steps:
1) Transformation, using a light stemming algorithm: sentence boundaries are marked and non-word tokens are stripped.
2) Identification of base clusters, using a data structure called a SUFFIX TREE that serves as an inverted index of phrases. Each base cluster is assigned a SCORE, a function of the number of documents and the number of words in the phrase; a stoplist is maintained.
3) Merging base clusters into clusters according to their degree of overlap.
- Clusters are coherent; overlapping clusters reflect shared phrases.
- STC is fast and incremental, and does not coerce the documents into a predefined number of clusters.

DESIGN FOR SPEED
Three characteristics make Grouper fast:
1) Incrementality of the clustering algorithm: Grouper can use free CPU time and produce results immediately after the last document arrives.
2) STC performs a large number of string comparisons, so each word is transformed into a unique integer for faster comparisons; the documents of each base cluster are encoded as a bit vector for efficient calculation of document overlap.
3) The ability to form coherent clusters based on snippets alone.
Two modes of clustering results:
a) Cluster the snippets (fast).
b) Download and cluster the documents (higher clustering quality).
Additional speedups:
a) Remove leading and trailing stopped words, e.g. "the vice president of" becomes "vice president".
b) Strip off words that do not appear in a minimal number of documents.

EMPIRICAL EVALUATION OF GROUPER
- Heterogeneous user population, searching for a wide variety of tasks.
- Logs recorded the documents retrieved and followed in HuskySearch and Grouper search sessions, and the number of clustered documents followed was calculated.
- STC produces more coherent clusters than the k-means clustering algorithm for the same number of clusters.

Comparison to a ranked list
Grouper was compared with HuskySearch based on:
1. Number of documents followed
2. Time spent
3. Click distance
Number of documents followed: three hypotheses were made: 1) clustering makes it easier to find an interesting document; 2) it helps find additional interesting documents; 3) it helps in tasks where several documents are required. The percentage of sessions in which users followed multiple documents is higher in Grouper.
Time spent on each document followed: time spent = time to download (network delays) + time spent reading documents + time traversing the results presentation + time to bring the selected document into view + time to find the next document of interest; it is measured as the time between a user's request for a document and the user's next request.
Click distance: the distance between successive user clicks on the document set. In the ranked-list interface, click distance is the number of snippets between two successive clicks (about 22 snippets scanned). In the clustering interface there is the additional cost of skipping clusters, and for any cluster visited, all of its snippets are scanned.

Contributions
- An empirical assessment of user behavior given a clustering interface to web search results.
- Comparison to the logs of HuskySearch.
Limitations
1) Merging base clusters into clusters may fail to capture semantic distinctions that users expect.
2) The interface is difficult to navigate if the number of clusters is large.
Solution: Grouper II
1) Allows users to view non-merged base clusters.
2) Supports a hierarchical and interactive interface.
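The merge step of STC described in the slides can be sketched as follows. This is an illustrative reconstruction in Python, assuming the commonly cited criterion that two base clusters are connected when the overlap exceeds half of each cluster, with final clusters taken as the connected components:

```python
def merge_base_clusters(base_clusters, threshold=0.5):
    """Merge STC base clusters (sets of document ids) into clusters.

    Two base clusters are 'similar' when their overlap covers more than
    `threshold` of each; final clusters are the connected components of
    the resulting similarity graph (illustrative sketch, union-find)."""
    n = len(base_clusters)
    parent = list(range(n))                     # union-find forest

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]       # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            a, b = base_clusters[i], base_clusters[j]
            overlap = len(a & b)
            if overlap > threshold * len(a) and overlap > threshold * len(b):
                parent[find(i)] = find(j)       # union similar base clusters

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), set()).update(base_clusters[i])
    return list(groups.values())
```

For example, base clusters {1, 2, 3} and {2, 3, 4} share two of three documents each and merge into {1, 2, 3, 4}, while a disjoint {7, 8} stays its own cluster; the pairwise loop is quadratic in the number of base clusters, which STC keeps small by scoring and pruning.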
OPCFW_CODE
Powershell command to FTP a file from server to select PCs on a network
TLDR: I need to move a large file from a server to the desktop of multiple SELECT machines connected to it.
Network topography: 1 server, 5-20 connected PCs. This topography is pretty standard across all the remote networks I work on. On some occasions I have to put files on every machine on a network; sometimes only on a few individual machines that site admins use. I keep a 'master copy' of the file on the server at all times, and currently I RDP into each machine I need it on (once I resolve the IP to make sure they aren't printers/tablets/phones/etc.) and copy the file from the server to the connected machine (the directory is standard). After researching, I think a PS script would probably work for what I need, but I only know enough to be dangerous. I use the following to resolve IPs to host names so I can determine which machines I need to put it on:

@echo off
setlocal EnableDelayedExpansion
set "xNext="
set "xComputer="
for /f %%A in ('net view /all') do (
    set "xComputer=%%~A"
    if "!xComputer:~0,2!"=="\\" for /f "tokens=2,* delims=. " %%X in ('nslookup %%A') do (
        if "!xNext!"=="1" (
            echo.!xComputer! = %%X.%%Y
            set "xNext=0"
        )
        if "!xComputer:~2!"=="%%~X" set "xNext=1"
    )
)
endlocal
pause

I want the ability to use the output of this code to build a list of IPs, then FTP the file to those IPs in a common directory, C:\Folder. I could build a ComputerList.txt with the IPs and use it, but I'm not quite sure how to integrate it.

If you're just needing to copy a file locally, you don't need FTP; you can use a copy command (copy, xcopy, robocopy). Create a PowerShell array from your objects, then use the ForEach-Object cmdlet and robocopy to copy the file to each of the workstations. Something like:

Get-Content c:\machines_List.txt | ForEach-Object { robocopy c:\source \\$_\c$\folder filename.exe }

(assuming they are domain joined)

This worked beautifully!
Exactly what I was needing. Thank you!
Thankfully you don't need to reinvent the wheel here. Use WinSCP. Here's the link to the documentation: WinSCP - Using from PowerShell. If you need assistance after looking at the documentation, let me know by leaving a comment.
I got this to work for one machine at a time, but I couldn't quite figure out how to add a list of IPs to push the file across the network all at once.
STACK_EXCHANGE
These settings can be seen if you edit a GPO on Windows Server 2008 or Windows Vista SP1. Finally, you will want to flush your DNS cache for your computer to recognize changes to the file. Add the appropriate IP and hostname at the end of your hosts file, select Save, and then close the file. You'll be asked, "Do you want to allow this app to make changes to your device?"
- You can also leverage the Microsoft MDM Migration Analysis Tool to generate a report of which policies map to modern policies.
- If the bar is completely blue, you need to free up some space.
- It will prevent wineserver from shutting down immediately.
If you don't see an update, don't worry; this page will tell you if your hardware is currently incompatible. Click Windows Update and then click Check for updates in the right panel. You can disconnect from the Internet for a couple of minutes to make sure the downloading update is stopped. You can also read this post: How to Stop Windows 10 Update Permanently? Finally, never ignore an update, because every update is there to enhance your data protection. You can also reinstall Windows to perform a Windows update. Besides the Windows Update command line, you can also use the Windows Update feature to update Windows.
Clear-Cut DLL Errors Plans In The UK
However, the user is free to turn off this protection via the Trust Center. In this case, the Office application will write a value for each of the three available options under Security\Protected View. Malware can use this system to insert malicious code that is executed in place of legitimate software by hijacking COM references and relationships as a means of persistence. The Winlogon process uses the value specified in the Userinit key to launch login scripts, etc. This key stores the user's settings for Internet Explorer. It contains information like search bars, start page, form settings, etc.
The second and most important key to a forensic examiner is HKCU\Software\Microsoft\Internet Explorer\TypedURLs. Figure 8 demonstrates the content that the TypedURLs key displays. Autorun locations are registry keys that launch programs or applications during the boot process. It is generally good practice to look here, depending on the case under examination.
Introducing Quick Systems In Missing DLL Files
Microsoft's latest cumulative update for the Windows 10 May 2019 Update ended up causing huge CPU spikes for some users. A number of Windows 10 users reported over the weekend that a Cortana-related bug was causing higher CPU and memory usage. If you want Patient Data Reports to look the same as the CRFs that are displayed in data-entry windows, the value for both keys must be identical. In order for the system to use this directory, it must be backed by an HTTP virtual directory that can serve files from it. The top-level Oracle Health Sciences products directory is written to both the default and the specific branches of the registry.
OPCFW_CODE
It also includes MigraDoc Foundation, which brings you all the high-level features. Using an online PDF converter service helps you convert your PDF to JPG quickly, without the burden of installing additional software on your PC. Please make sure the custom font file is correctly set. Allows distribution of derived works. Company = "our. Aspose.CAD Cloud is a true REST API that enables you to perform a wide range of drawing-processing operations, including manipulation, editing, conversion, and export in the cloud, with zero initial costs. Aspose.PDF for Java. Our Cloud SDKs are wrappers around the REST API in various programming languages, allowing you to process documents in the language of your choice. Migrating from the legacy Generator to the new DOM approach takes effort, but the Aspose team is always delighted to help Aspose customers. Aspose.Tasks for Java. Star 0 Fork 1 Star Code Revisions 33 Forks 1. Instantly create competitor analyses and white-label reports, and analyze your SEO issues. Step 5: Publish. Develop and deploy on Windows, Linux, macOS, and Android platforms. ...dll on Mono: WORDSNET-8136 Remove existing demo code from the installer. Enhancements: WORDSJAVA-627 Autoporting of static classes; WORDSJAVA-702 Effects are not applied correctly; WORDSJAVA-705 Adding the JavaDelete annotation should delete the existing Java file too. Read, edit, and write Microsoft Project document formats, including MPP and XML. Improve and monitor your website's search engine rankings with our supercharged SEO tools. Email. A few of the features worth mentioning: Aspose.Words WORDSNET-6406 /rtl/ text rendering issue in PDF file: the text ("גב עה"); WORDSNET-6409 Converting the DOC to a PDF: issue with a corrupted image. The .cshtml view is rendered and interpreted as any Razor view would be, then passed to a PDF generation library (I use the excellent Aspose.PDF).
WORDSNET-6651 Aspose output PDF file does not open correctly in Syncfusion PdfViewer; WORDSNET-6677 DOCX to HTML conversion issue with the border line of the autoshape; WORDSNET-6681 Improve font substitution according to default registry values. A specially crafted PDF can cause a dangling heap pointer, resulting in a use-after-free condition. To use this SDK, you will need an App SID and App Key, which can be looked up at the Aspose Cloud Dashboard (cloud//apps) (free registration in Aspose Cloud is required for this). Our Cloud SDKs are wrappers around the REST API in various programming languages, allowing you to process images in the language of your choice quickly and easily. The list of alternatives was updated in November. Aspose.Slides for Java. These APIs do not need Microsoft Office or Microsoft Word to be installed on the machine to work with Word document formats. Zeta Producer is compatible with all web servers. 14-day free trial. Run the .sh file to register Aspose.CAD for Java. Convert, view, edit, and do more with Word, PDF, PowerPoint, Excel, 3D, CAD, and hundreds of other file formats, powered by Aspose APIs. .NET, Java, Android, SharePoint, Reporting Services, or JasperReports products respectively. Setting the company name in the document properties doesn't make a difference. When saving a document to PDF using this statement: doc.Save(output, Aspose. 1 ∞ 10 ∞ ∞ ∞ For use with web sites/apps. The images, including semi-transparent and rotated ones. We recommend that users install the latest Microsoft Windows service packs and updates before using our products. If your homegrown applications create or edit Word, Excel, PowerPoint, or PDF documents, or leverage Outlook, you... Aspose.PDF for Java, or report it as discontinued, duplicated, or spam.
GitHub Gist: instantly share code, notes, and snippets. LineStyle is set to Solid without a 'border-style' CSS attribute. Aspose.PUB for Java. Aspose.PDF for Java: Java APIs to create, manipulate, and convert PDF documents in any application based on Java SE or EE. Aspose is proud to expand its API family with the addition of a new product. x Severity and Metrics: NIST. Step 3: Choose a template. To find out how to apply a license, follow the appropriate link below. PdfA1b has issues when checking the PDF with Preflight; WORDSNET-6354 Barcode appears incorrect when printing a PDF. Aspose.OMR for Java. An attacker can send a malicious PDF to trigger this vulnerability. WORDSNET-6687 DOC to PDF conversion with border line; WORDSNET-6720; WORDSNET-6600 Border. x CVSS Version 2. .NET On-Premise APIs to target... Sometimes, though, it... A specially crafted PDF can cause a dangling heap pointer, resulting in a use-after-free. Aspose.Imaging for Java. To trigger this vulnerability, a specifically crafted PDF document needs to be processed by the target application. An uninitialized memory access vulnerability exists in the way Aspose... If you encounter an issue, please... Aspose.PDF App product family to view, annotate, convert, compare, sign, assemble, update metadata, search content, extract text and images, watermark, merge, redact sensitive information, unlock password-protected files, or convert Markdown files to PDF format without spending a single penny. This change may impact some customers who haven't yet migrated their code from the legacy Aspose... Note: if you have bought Aspose... Aspose.CAD Cloud Python SDK. We plant a tree for every 50,000 PDFs converted to JPG. To trigger this vulnerability, a specifically crafted PDF document needs to be processed by the target application.
An exploitable use-after-free vulnerability exists in the way FunctionType 0 PDF elements are processed in Aspose... Each Aspose product includes a License class. Aspose.Tasks for Java... are listed below. Documents can be manipulated in .NET applications. Allows external distribution. Use our free apps available in the Aspose... Aspose.Note for Java. Aspose.Page for Java. If you have bought .NET, Java, Android, SharePoint, Reporting Services, or JasperReports products, then your license file will work with all of them. We process your PDF documents and convert them to produce high-quality JPGs. It's possible to update the information on Aspose...
OPCFW_CODE
This content has been marked as final. Show 12 replies.
Is it UI_MainWindow or Ui_MainWindow? They both should be the same in the .class AND in the Manifest.txt... I guess when you type you make lots of mistakes. Make sure your manifest file has a new line at the end. Make sure there is no typo in your command. Don't forget to add Manifest.txt rather than mainfest.txt or mainfeast.txt.
Your manifest seems to be OK. Rename it to manifest.MF and put it in a folder called META-INF. Then zip your Ui_MainWindow.class file and the META-INF folder, and rename the archive to app.jar. Now keep your .jar libraries in the same directory as your app.jar. Just double-click it, or run it through the command line using 'java -jar app.jar'.
That file is Ui_Main_Window.class; that was a typing mistake, but when creating the jar I typed it correctly. I did not make any typing mistakes while creating the jar file. I already followed the suggestion you mentioned, but I still get the same error. This is my MANIFEST.MF file, created under the META-INF folder:
Class-Path: qtjambi-4.4.3_01.jar qtjambi-win32-msvc2005-4.4.3_01.jar
Created-By: 1.5.0 (Sun Microsystems Inc.)
When I extracted the app.jar following this, I got: 1. a META-INF folder, and inside that the MANIFEST.MF file with the contents I have already placed above. Please tell me what's wrong with that. I can also give you the app.jar file; please let me know your email id. Sorry, again a typo: that file name is MANIFEST.MF, not MAINFEST.MF.
As far as the spelling mistakes go, please try to get into the habit of copy/pasting information. One thing that I have not heard mentioned is to check that the manifest has a blank line at the end. What is the exact (copy/pasted) error that you are getting when running that jar?
"Could not find the main class. Program will exit." This is the error I am getting...
Thanks, I have added the blank line and now it is working fine... Thanks a lot for solving this problem.
You are welcome.
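For reference, a minimal manifest for this setup might look like the following. The Main-Class line is an assumption based on the class name in the thread (the forum's CODE tags apparently swallowed the poster's last line), and the trailing newline after the final attribute is the part that is easy to miss: without it the last line is silently ignored.

Manifest-Version: 1.0
Main-Class: Ui_MainWindow
Class-Path: qtjambi-4.4.3_01.jar qtjambi-win32-msvc2005-4.4.3_01.jar

Rather than zipping META-INF by hand, the jar tool can build the archive and merge the manifest in one step, e.g. 'jar cvfm app.jar manifest.txt Ui_MainWindow.class'.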
I think most of us have been caught out by that obscure problem, yet Sun maintains it is not a bug and will (therefore) not be fixed. An unfortunate aspect of these forums is that the CODE tags (usually quite handy) will cause the last line in a valid manifest file to disappear from the forum listing!

These days I get Ant to write manifest files for me. It never forgets to add that blank line. ;-)

When I use jGRASP it remembers the blank line... however I still get the error "Could not find main class. Program will exit" and it is the most annoying thing.

Please, don't resurrect old threads. I'm now locking this one.
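The fix the thread converges on, a newline-terminated MANIFEST.MF inside META-INF, can be sketched with Python's standard zipfile module. The class and jar names below simply mirror the thread's example; a real build would use jar or Ant:

```python
import zipfile

# A minimal MANIFEST.MF. Note the newline after the last attribute:
# without it the JVM may fail to read the final entry and report
# "Could not find the main class."
manifest = (
    "Manifest-Version: 1.0\n"
    "Main-Class: Ui_MainWindow\n"
    "Class-Path: qtjambi-4.4.3_01.jar qtjambi-win32-msvc2005-4.4.3_01.jar\n"
)

with zipfile.ZipFile("app.jar", "w") as jar:
    jar.writestr("META-INF/MANIFEST.MF", manifest)
    # jar.write("Ui_MainWindow.class")  # add the compiled class alongside

# Sanity check before shipping: the manifest must end with a newline.
with zipfile.ZipFile("app.jar") as jar:
    data = jar.read("META-INF/MANIFEST.MF").decode()
    assert data.endswith("\n")
```

The same check catches the thread's bug regardless of how the jar was built: open the archive, read META-INF/MANIFEST.MF, and verify the trailing newline.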
using System;
using Tiger.Build.Compiler.Common;

namespace Tiger.Build.Compiler.Ast
{
    using MAst = System.Linq.Expressions.Expression;

    /// <summary>
    /// Represents expressions. Statements are considered special cases of expressions in the AST class hierarchy.
    /// Unlike a syntactic expression, a syntactic statement cannot be assigned to a left value.
    /// However certain Ruby constructs (e.g. block-expression) allow reading the value of a statement.
    /// Usually such a value is null (e.g. undef, alias, while/until statements),
    /// although some syntactic statements evaluate to a non-null value (e.g. if/unless-statements).
    /// </summary>
    public abstract class Expression : Node
    {
        public abstract Type Type { get; }

        /// <summary>
        /// Transform as expression (value is read).
        /// </summary>
        protected internal abstract MAst Transform();

        /// <summary>
        /// Converts the expression into the corresponding
        /// <see cref="System.Linq.Expressions.Expression"/>.
        /// </summary>
        /// <param name="runtime"></param>
        /// <returns></returns>
        /// <exception cref="SemanticException">if there is some semantic error in the tree</exception>
        public MAst Compile(AstHelper runtime)
        {
            AstHelper helper = runtime.CreateChild(function: true, variables: true, types: true);
            CheckSemantics(helper);
            if (helper.Errors.HasErrors)
            {
                throw new SemanticException(helper.Errors);
            }
            return Transform();
        }
    }
}
Extensive information is available about infaunal soft-sediment communities in the Gulf of Mexico (Gulf) (Pequegnat et al. 1990, Rowe and Kennicutt II 2009, Wei et al. 2010), particularly from the large-scale sampling effort of the Deep Gulf of Mexico Benthos (DGOMB) project in the early 2000s (Rowe and Kennicutt II 2009). Infaunal soft-sediment communities in the northern Gulf differ by geographic location and depth (Rowe and Kennicutt II 2009, Wei et al. 2010). Density decreases with depth, while taxa diversity exhibits a mid-depth (1,100-1,300 m) maximum (Rowe and Kennicutt II 2009). Community composition is influenced by both geographic location and depth, with zones (as defined by Wei et al. 2010) encompassing specific depth ranges, ranging from 635 to 3,314 m, and separated into east and west components. These zones were correlated to detrital particulate organic carbon (POC) export flux, primarily from the Mississippi River (Wei et al. 2010), where POC flux decreases with depth (Biggs et al. 2008). The flux of POC has also been found to be higher in the northeast Gulf than the northwest (Biggs et al. 2008), and consequently, biomass of infaunal communities is positively correlated with sediment organic carbon content (Morse and Beazley 2008). Most of the deep Gulf is composed of soft-sediment environments, but the relatively flat seafloor is punctuated in areas with other heterogeneous habitats, including chemosynthetic environments and deep-sea coral habitats. Deep-sea corals create a complex three-dimensional structure that enhances local biodiversity, supporting diverse and abundant fish and invertebrate communities (Mortensen et al. 1995, Costello et al. 2005, Henry and Roberts 2007, Ross and Quattrini 2007, Buhl-Mortensen et al. 2010). In recent years, knowledge of the sphere of influence of deep-sea corals has expanded, with evidence that coral habitats also influence surrounding sediments (Mienis et al. 2012, Demopoulos et al. 2014, Fisher et al. 
2014, Demopoulos et al. 2016, Bourque and Demopoulos 2018). Deep-sea corals are capable of altering their associated biotic and abiotic environment, thus serving as ecosystem engineers (e.g., Jones et al. 1994). The depositional environment and associated hydrodynamic regime around coral habitats differ from the extensive expanses of soft-sediments that dominate the sea floor (e.g., Mienis et al. 2009a, Mienis et al. 2009b, Mienis et al. 2012), with the three-dimensional structure of the coral causing turbulent flows that enhance sediment accumulation adjacent to coral structures. In the northern Gulf, deep-sea corals generally occur on mounds of authigenic carbonate (Schroeder 2002) where elevation above the benthic boundary layer into higher velocity laminar flows allows for increased availability of food resources (Buhl-Mortensen and Mortensen 2005). The different hydrodynamics around corals likely affect the sediment geochemistry and in turn infaunal community structure and function (Demopoulos et al. 2014). Ecosystem-based research on Gulf infaunal communities has primarily focused on soft-sediment environments. Initial research on deep-sea coral-associated infaunal communities focused on Lophelia pertusa (e.g., Demopoulos et al. 2014), and more recent studies focused on octocorals (Fisher et al. 2014, Demopoulos et al. 2016, Bourque and Demopoulos 2018) and comparisons among coral habitat types (Bourque and Demopoulos 2018). Coral-adjacent sediment communities are distinctly different from nearby background soft-sediments (Demopoulos et al. 2014, Bourque and Demopoulos 2018), with a sphere of influence estimated to be between 14 and 100 m (Demopoulos et al. 2014, Bourque and Demopoulos 2018). The coral type (e.g., L. pertusa, Madrepora oculata, octocorals) also influences sediment communities, with L. pertusa habitats distinct from both M. oculata and octocoral habitats (Bourque and Demopoulos 2018). 
Differences among coral communities are influenced by depth,
I’m currently working on a pipeline that involves using OpenCV between two GPU computations. However, I encountered two problems: OpenCV code runs on the CPU, which means that data must be transferred from the GPU to the CPU and back to the GPU again, and the OpenCV function can only process one sample (from a batch) at a time, which slows down the computation. Here is an example of my code:

# GPU computation
x = ...  # B, C, H, W

# Pre-allocate memory
outputs = torch.zeros(some_shape, device="cuda")

# Send each sample in the batch to the CPU
for b in range(B):
    curr_x = x[b].cpu().numpy()
    # Use a special sequential algorithm, so no GPU alternatives
    output = cv2.function(...)
    # Send the sample back to the GPU
    output = torch.Tensor(output).cuda()
    outputs[b] = output

# Other GPU computations

The code runs slowly with low CPU and GPU usage. I would be grateful for any suggestions or insights you may have. Thank you very much!

I don’t know which function you are calling from OpenCV, but you might want to either check if OpenCV provides the CUDA version of this function or if you could use another library with GPU support (e.g. torchvision using native PyTorch operations or custom kernels). I haven’t profiled these implementations and don’t know what inputs you are using, but note that even if OpenCV might be faster in isolation on the CPU, you will still pay the penalty of moving the data between the device and host and thus synchronizing the code. Assuming kornia provides a GPU implementation, no syncs would be added and the end-to-end time might still be faster. But as I said, I did not profile any of these methods.

Yes, I actually profiled the runtime of kornia.geometry.ransac (which is equivalent to cv2.findHomography(..., cv2.RANSAC)). Unfortunately, even with the overhead of data transferring, the OpenCV version still seems to be about 10 times faster than the Kornia implementation. Thanks for bringing up this function. Yes, I’ve also tested this one. 
The computation is a lot faster, but the lack of robust estimation (like RANSAC) leads to worse results in my case. Lastly, thanks for this suggestion! Unfortunately, as you pointed out, my next operation actually needs the transferred tensor. I was thinking about: Whether sending the entire x batch to the CPU leads to any speedup over sending each sample in x separately. (The speedup will probably be marginal, though) Whether it is possible to run the OpenCV function on all samples in the batch concurrently with multi-processing. But in general, I guess it is difficult to significantly improve the runtime without large edits to the pipeline or to the OpenCV/Kornia source code.
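Both ideas floated at the end of the thread (one bulk transfer for the whole batch, plus running the per-sample OpenCV call concurrently) can be combined. Most cv2 routines release the GIL, so a thread pool can overlap the per-sample work without multiprocessing overhead. The sketch below uses a pure-Python estimate_homography stand-in for the real cv2.findHomography call, and the torch transfer steps are shown only in comments; both names and the speedup are assumptions, not measurements:

```python
from concurrent.futures import ThreadPoolExecutor

def estimate_homography(sample):
    # Stand-in for the real per-sample CPU routine, e.g.
    # cv2.findHomography(..., cv2.RANSAC). Any sequential
    # CPU-bound algorithm fits here.
    return [2 * v for v in sample]

def process_batch(batch, workers=4):
    # 1) In the real pipeline, transfer the whole batch once:
    #        batch = x.cpu().numpy()   # one device->host copy, not B copies
    # 2) Run the per-sample CPU routine concurrently. Because OpenCV
    #    releases the GIL inside most of its functions, threads overlap.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(estimate_homography, batch))
    # 3) Transfer the results back in one call, e.g.
    #        outputs = torch.as_tensor(np.stack(results)).cuda()
    return results

outputs = process_batch([[1, 2], [3, 4]])
```

Whether this helps depends on how much of the runtime is transfer versus compute; profiling one bulk `.cpu()` copy against B per-sample copies is the first thing to check.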
1 minute is definitely too short for a Keycloak session. Practically, what’s going to happen is that someone works in Jira for a bit and is logged out of their OpenMRS Keycloak session during that time, because they spent longer than a minute with no active communication with the Keycloak server. (With SAML, unlike OAuth2, there’s usually no communication between the SP and IdP after the initial login, though SAML does sometimes have a mechanism for shared logout.) The more common pattern with SAML flows is to set up the flow to prompt the user to log in every time. This would mean modifying the login flow for Atlassian to just not automatically sign users in. That should be achievable in Keycloak with just configuration. We want to control usernames within the OpenMRS ID space so, for example, you are @raff across Atlassian, Talk, Add-Ons, Atlas, and any other service we connect to our SSO. The primary goals are (1) to make it easier for people to be recognized across community services and (2) to reduce the burden of supporting accounts across multiple services. If Atlassian supports (now or later) calling our endpoint for logout, that’d be great. I would rather be lengthening the session instead of limiting it – e.g., an hour- or day-long session by default with a “remember me” option that provides a week- or month-long session for use on personal devices. With password managers, it’s less of a nuisance, but I’d rather not bother people with unnecessary logins. If you log into Talk or JIRA on your laptop in the morning, why should we bother you with a login when you open the wiki that afternoon? We’re asking people to write code & save lives, not do their banking or manage their medical records. Is this where we stand now? We’ll also need to decide on a domain for SSO. 
I believe Atlassian tools will show names, not email addresses, in most cases, so the domain wouldn’t be seen often; however, it will forever be associated with the user’s content and not something we could change in the future. My assumption is we’d want an unobtrusive (short) subdomain, and it’d be more future-proof to not be Atlassian-specific (e.g., email@example.com). My thoughts are something like: @is.openmrs.org (as in firstname.lastname@example.org). Given the purpose of the subdomain (solely to be used for SSO… not for email), I’m leaning toward @sso.openmrs.org or @idp.openmrs.org. The extension generates the NameID in the necessary format. This is the cleanest solution I found to address the issue. Additionally, we can eliminate the interceptor, making the process even simpler. The extension will be deployed automatically during the Docker build. Here is a screenshot of the extension in action; all you have to do is provide the domain we use for SSO. Ultimately, this setup should probably live on adaba (LDAP / Crowd / ID). Maybe we need a test server (I think we used to have ldap-stg, but I don’t see that any more). Note that we use Ansible for actually deploying things as much as possible, so ideally don’t manually deploy things, if possible. Actually, we are using id.openmrs.org ports 80 & 443 for the ID website, but not the MX records. So, I believe we could use @id.openmrs.org for an SSO email address subdomain while the website is used by KeyCloak. How configurable are these pages? Any chance we could include announcements on the SSO page (e.g., “Planned downtime for OpenMRS Talk this Friday 22:00-23:00 UTC.” or “Tickets for #OMRS30 in Jayasanka City on Mars going fast! Get yours today!”)? Atlassian tools (e.g., Confluence & JIRA) offer backup & restore functionality that generates a zipped XML file. 
We’ll need to be very careful before trying this, since a backup of either of these with attachments included is going to be very big and could easily exceed available disk space. It may also strain CPU and/or memory, and I wouldn’t expect the service to be available during an export. Atlassian offers a migration assistant app for both Confluence and JIRA to avoid the need to download & upload. It looks like the default approach during migration is to use the email addresses of users (not usernames). So, we may need an extra step to translate all emails to @id.openmrs.org. What @ibacher said. Ideally, we would add this KeyCloak setup through a PR to openmrs-contrib-ansible-docker-compose, as a replacement for files/id-stg. Then we can arrange to get that installed on a staging server. As @ibacher points out, thanks to @cintiadr’s leadership, we use infrastructure-as-code and avoid making any manual changes to our servers. @jayasanka let’s work together to move your work to our infrastructure, make it production ready and migrate data and services. To start with we need to have a staging environment in our infrastructure. We need to do the following:

- Create a plan on Bamboo to build the Keycloak image with the extension and push that to our dockerhub. Use specific image versions and not latest.
- Move the postfix Dockerfile and postfix-config under a postfix dir.
- Create a plan on Bamboo to build the postfix image and push that to our dockerhub. Let’s base the postfix image on a lightweight image like alpine instead of ubuntu. Use a specific version and not latest.
- Let’s not run it as root (add USER 1001 to the Dockerfile). See Why non-root containers are important for security.

Once we have a staging environment up and running we need to have a migration plan in place that we will first try against the staging environment, but let’s first focus on creating the staging environment. 
We cannot directly set messages through the Keycloak dashboard; however, we do have the flexibility to manually edit each page using a custom theme. Each ftl file, as demonstrated in this [link], corresponds to a specific screen. I have documented the process of adding an announcement in the readme file. It would be nice if we could do it within the dashboard; I think we can achieve such things by writing a small extension in the future. There’s a super easy hack to display announcements, but I personally don’t prefer using it. That is to utilize the value of the HTML Display name field. Normally, this field is used to provide custom HTML code for displaying a logo or other elements instead of the Display name. We can access this value in the template. I removed it and hardcoded the logo in the template. The hack is to display the text provided in the field as an announcement. It would be preferable to be able to set/change/remove an announcement without having to edit theme pages, whether it’s through this hack or if we discover a better way to allow an admin to manage the announcement through KeyCloak’s admin pages. Also, I made a PR to the openmrs-contrib-ansible-docker-compose repository. By the way, Postfix needs root privileges to run. I have gone through a couple of forums, and they mention that Postfix does not support non-root users, and I couldn’t find a workaround. Let me know what you think. Could you please include a backup for the postgres DB? For backups we use this image, but the backup script needs to be modified to call pg_dump instead of just backing up a directory. You will need to install pg_dump in the image and then adjust the backup script to accept DB connection details. The image can then be used as, for example, here. Once files are in /backup on the host, they are copied over to S3 by a host cron task, so you don’t need to handle that. Do we need to backup anything else from the setup? 
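The adjusted backup script essentially has to assemble a pg_dump invocation from the DB connection details. A minimal sketch of that command construction; the host, database, user, and output path are placeholder assumptions, and the password would come from PGPASSWORD or a .pgpass file rather than the command line:

```python
import subprocess  # used by the commented-out run() call below

def build_pg_dump_cmd(host, port, db, user, outfile):
    # Standard pg_dump connection and output flags.
    return [
        "pg_dump",
        "-h", host,
        "-p", str(port),
        "-U", user,
        "-F", "c",      # custom format: compressed, restorable with pg_restore
        "-f", outfile,
        db,
    ]

cmd = build_pg_dump_cmd("keycloak-db", 5432, "keycloak", "keycloak",
                        "/backup/keycloak.dump")
# In the backup container the script would then run, e.g.:
# subprocess.run(cmd, check=True, env={**os.environ, "PGPASSWORD": password})
```

Dropping the dump into /backup keeps it compatible with the existing host cron task that ships that directory to S3.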
@jayasanka meanwhile it’s time to write down specific steps to complete migration assuming we have new SSO up and running on staging. Do you have a migration plan in your head already? Could you please list the steps that you think we should follow to migrate users and services with the least disruption possible? We could exercise that on the staging environment.
OPCFW_CODE
BizTalk360 comes with an integrated Business Activity Monitoring (BAM) portal, which gives the ability to query BAM views, perform activity searches, check user permissions and check activity time windows. If you have used BizTalk Server BAM functionality for your applications, BizTalk360 will automatically pick up those configurations and display them. There is no … Continue reading BizTalk 360 – Integrated BAM Portal

Today let's take a look at how BizTalk360 represents the send ports in a graphical way. It's needless to say a picture is worth more than a thousand words. Inside the BizTalk application, users can navigate to the send port section and see all the send ports defined for that application. When the user double-clicks on … Continue reading BizTalk360 – Graphical Representation Of Send Port

BizTalk360 was announced for public technology preview yesterday; you can read more about it here: https://www.biztalk360.com Why BizTalk360? There is one common problem across all the BizTalk customers, i.e. there is no proper support tool for BizTalk. It's a reality that people are more passionate and interested in designing, architecting and developing the software and not … Continue reading Introduction To BizTalk360

Every BizTalk solution should have some kind of monitoring in place. I remember the phrase: having a BizTalk environment without monitoring in place is like driving a car without a dashboard. The best option will be to use SCOM (Microsoft System Center Operations Manager) to monitor BizTalk Server, since you will get out of the … Continue reading Monitoring BizTalk Server with HP OpenView

Introduction: NOTE: Opinions expressed in this article are from my own perspective. 
There are various definitions available for Service Oriented Architecture (SOA); the basic idea is to avoid building new applications from scratch each time and also to avoid duplicating applications, data and business logic in one part of the organisation that already exist … Continue reading SOA, where will BizTalk Server fit in the technology stack (from performance perspective)?

I have been a full-time user of a MacBook running Windows Vista using Boot Camp (Version 2.0) for a while now. One of the issues I came across recently is trying to find out the equivalent of the “Print Screen” key; I don't need to emphasize the importance of this. According to the documentation, it's just F11 … Continue reading Print Screen on MacBook – BootCamp

Recently someone raised this question in the newsgroup: they wanted to branch inside the orchestration based on the build of the assembly itself. Whenever an assembly is built in “Debug” mode, some System.Diagnostics.DebuggableAttributes are added to the assembly. One such attribute is “IsJITTrackingEnabled”, which will track information during code generation (MSIL) for the debugger. So, … Continue reading Determine whether the BizTalk assembly is Debug or Release build at runtime.

In this article I'll explain how you can call a Web Service which requires multiple arguments using a custom pipeline and a custom pipeline component in a messaging-only scenario without using any Orchestration. Normally, when there is a requirement to call a web service from BizTalk, people tend to take the easy route of … Continue reading Calling Web Service from BizTalk 2006 in a Messaging only Scenario (aka Content based Routing)

Since my current contract is coming to an end in two months' time, I thought it's a good opportunity to update my web site as well as my CV with my current availability status.

There is always confusion in a real-time production BizTalk environment about user rights. 
Most of the time, when running ConfigFramework, btsdeploy, the deployment wizard, or accessing the admin tool: do I need to be part of just the “BizTalk Server Administrators” group, or do I need to be part of “SSO Administrators” as well to … Continue reading BizTalk users and Groups
Quicksort itself works by taking a list, picking an element from the list which is referred to as the "pivot" and splitting the list into two sub lists, the first containing all the elements smaller than the pivot and the second containing all the elements greater than the pivot. Once you have these two sub lists you can sort each one independently of the other, since elements in one list will not be moved into the other after the sort is complete. This splitting into two lists is called "partitioning". Partitioning once will not sort the list, but it will allow you to either use a different sorting algorithm on each sub list (partition) or to recursively partition the two partitions until you end up with a partition of 1 or 0 elements, which is necessarily sorted. For example, partitioning the list [7,3,6,4,1,7,3] using 4 as a pivot will give us a first partition of [3,1,3] and a second partition of [7,6,7]. The pivot itself, along with other duplicates of it, may or may not go in one of the partitions, depending on how the partitioning is done. If it does not go into one of the partitions, then the sort will place the pivot between the 2 partitions after they have been sorted.

Partitioning by filtering

The most intuitive way to partition is by creating 2 new lists, going through the unsorted list and copying elements from the unsorted list into one of the 2 lists. This is memory expensive, however, as you end up needing twice as much space as the unsorted list takes. The following partitioning algorithms are "in-place" and hence do not need any new lists.

Partitioning by moving the pivot

This is the partitioning algorithm I was familiar with at school. It's quite intuitive but slow when compared to the next algorithm. The way this works is by putting the pivot into its sorted place, that is, the place where it will be after the whole list has been sorted. 
All the elements smaller than the pivot will be on its left and all the elements larger than the pivot will be on its right. Therefore you would have created 2 partitions, the left side of the pivot and the right side. The algorithm uses a pivot pointer, which keeps track of where the pivot is, and an index pointer, which is used to compare the pivot to other elements. The pivot pointer starts at the right end of the list (you can choose a pivot and swap it with the last element if you don't want to stick to the element which happens to be there) and the index pointer starts at the left end of the list. The index pointer moves towards the pivot pointer until it encounters an element which is not on the correct side of the pivot, upon which the element at the index and the pivot are swapped, and the index pointer and pivot pointer swap locations. Once the index pointer and pivot pointer meet, the pivot is in its sorted location and the left and right sides of the pivot are partitions.

function partition(arr, left, right)
    pivotPtr = right
    indexPtr = left
    while pivotPtr != indexPtr
        if indexPtr < pivotPtr  //if index pointer is to the left of the pivot
            while arr[indexPtr] <= arr[pivotPtr] and indexPtr < pivotPtr
                indexPtr++  //move index pointer towards the pivot
            if indexPtr < pivotPtr
                swap(arr[indexPtr], arr[pivotPtr])
                swap(indexPtr, pivotPtr)
        else  //if index pointer is to the right of the pivot
            while arr[indexPtr] >= arr[pivotPtr] and indexPtr > pivotPtr
                indexPtr--  //move index pointer towards the pivot
            if indexPtr > pivotPtr
                swap(arr[pivotPtr], arr[indexPtr])
                swap(pivotPtr, indexPtr)
    return pivotPtr

Partitioning by dividing

In the previous partitioning algorithm, we had to constantly swap the pivot in order to eventually put it in its place. 
This is however unnecessary, as partitioning does not require the pivot to be in its sorted place, only that we have 2 partitions, even if the pivot itself is in one of the partitions (it doesn't matter in which one, as it could eventually be placed in its sorted place in either partition). This time we will not care where the pivot is, as long as we know its value. We will need 2 pointers, a high and a low pointer, which will be moving towards each other. The low pointer will expect to encounter only elements which are smaller than the pivot and the high pointer will expect to encounter only elements which are larger than the pivot. When both pointers encounter a wrong element, they swap the elements and continue moving towards each other. When they eventually meet, all the elements to the left of the meeting point will be smaller than or equal to the pivot and all the elements to the right of the meeting point will be greater than or equal to the pivot. Since both pointers move towards each other before swapping, this algorithm does fewer swaps than the previous one and hence is much faster. In fact a simple experiment will show that it does half the number of swaps.

function partition(arr, left, right, pivot)
    lo = left
    hi = right
    while lo < hi
        while arr[lo] <= pivot and lo < hi
            lo++  //move low pointer towards the high pointer
        while arr[hi] >= pivot and hi > lo
            hi--  //move high pointer towards the low pointer
        if lo < hi
            swap(arr[lo], arr[hi])
    /* Since the high pointer moves last, the meeting point should be on an element that is greater
       than the pivot, that is, the meeting point marks the start of the second partition. However, if
       the pivot happens to be the maximum element, the meeting point will simply be the last element
       and hence will not have any significant meaning. Therefore we need to make sure that the
       returned meeting point is where the starting point of the second partition is, including if the
       second partition is empty. */
    if arr[lo] < pivot
        return lo+1
    else
        return lo
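Putting the pieces together, the dividing partition drives a complete quicksort. The Python sketch below uses the textbook Hoare scheme with the middle element as the pivot, a slightly different convention from the pseudocode's return value, chosen here because it guarantees progress even when the pivot happens to be the maximum element:

```python
def hoare_partition(arr, lo, hi):
    # Pick the middle element as the pivot value; the low and high
    # pointers sweep inwards, swapping out-of-place pairs, exactly as
    # described above.
    pivot = arr[(lo + hi) // 2]
    i, j = lo - 1, hi + 1
    while True:
        i += 1
        while arr[i] < pivot:
            i += 1
        j -= 1
        while arr[j] > pivot:
            j -= 1
        if i >= j:
            return j  # [lo..j] and [j+1..hi] are the two partitions
        arr[i], arr[j] = arr[j], arr[i]

def quicksort(arr, lo=0, hi=None):
    if hi is None:
        hi = len(arr) - 1
    if lo < hi:
        split = hoare_partition(arr, lo, hi)
        quicksort(arr, lo, split)       # left partition (includes split point)
        quicksort(arr, split + 1, hi)   # right partition

data = [7, 3, 6, 4, 1, 7, 3]  # the example list from the text
quicksort(data)
# data is now [1, 3, 3, 4, 6, 7, 7]
```

Note that with this convention the pivot's duplicates may end up in either partition, matching the discussion of the [3,1,3] / [7,6,7] example.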
Split multiline text into multiple rows - NiFi/Kafka I have an input multiline text like this: "offset":3780214, "message":@ 2022-05-23 12:25:39.879 [ 30] [lap=126.18 uS] Out of range"offset":3780940, "message":@ 2022-05-23 12:25:39.879 [ 30] [lap=106.46 uS] Execution finished unexpectedly. +Exception message: +################################################ +# BEGIN # +################################################ + Error +################################################ +# END # +################################################" I can also remove the offset, if it's better for you: "message":@ 2022-05-23 12:25:39.879 [ 30] [lap=126.18 uS] Out of range"message":@ 2022-05-23 12:25:39.879 [ 30] [lap=106.46 uS] Execution finished unexpectedly. +Exception message: +################################################ +# BEGIN # +################################################ + Error +################################################ +# END # +################################################" I want to split that text in multiple lines, delimited by "message", like this: Row 1: @ 2022-05-23 12:25:39.879 [ 30] [lap=126.18 uS] Out of range Row 2: +Exception message: +################################################ +# BEGIN # +################################################ + Error +################################################ +# END # +################################################" I tried to do that with the SplitContent but it returns the first row multiple times, like this: @ 2022-05-23 12:25:39.879 [ 30] [lap=126.18 uS] Out of range @ 2022-05-23 12:25:39.879 [ 30] [lap=126.18 uS] Out of range @ 2022-05-23 12:25:39.879 [ 30] [lap=126.18 uS] Out of range @ 2022-05-23 12:25:39.879 [ 30] [lap=126.18 uS] Out of range Still think your input is messed up. You have "offset": X, "message": "..." twice on the same lines. 
Ideally, those would be two completely separate NiFi events. They are separate events, but I need to write everything in the same line because I'm using just 1 partition. I think that there should be an option to use "message" as a delimiter, without taking into account if it's a newline or not. I think we are using the word "event" differently. Yes, I see they have unique data, but they should be two different flowfiles, with two unique offset/message attributes, and not on the "same line". Which they should be already, if the input (kafka consumer) was reading them as separate (kafka) records. OK, so you say to keep only the "message" in this flowfile and have one row for each event? What are you going to do with two different file contents after this? I just need the "message". After this, I have to apply a 2nd filter and plot this information in different topics in Grafana. I don't know if I need Logstash and/or Elasticsearch. Grafana needs a datasource, such as Elasticsearch, or you could write the data to a SQL database instead, such as Postgres. Logstash is also able to read files on disk, or from filebeat, then parse and/or split these events without needing to use Nifi or Kafka.
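The splitting being asked for, treating each "message": occurrence as a record boundary regardless of newlines, can be prototyped with a regex split rather than SplitContent's literal delimiter. A sketch (the sample text is abbreviated from the post):

```python
import re

text = (
    '"offset":3780214, "message":@ 2022-05-23 12:25:39.879 [ 30] '
    '[lap=126.18 uS] Out of range'
    '"offset":3780940, "message":@ 2022-05-23 12:25:39.879 [ 30] '
    '[lap=106.46 uS] Execution finished unexpectedly. +Exception message:'
)

# Split on every "message": key; each resulting chunk may carry the NEXT
# record's "offset" fragment glued to its tail, so strip that off too.
rows = []
for part in re.split(r'"message":', text):
    part = re.sub(r'"offset":\d+,\s*$', "", part).strip()
    if part:
        rows.append(part)
```

In NiFi itself the same idea fits a ReplaceText or ExecuteScript processor; the cleaner fix, as noted above, is upstream, so the Kafka consumer emits one flowfile per record in the first place.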
.NET MVC: Run a piece of code in many uncompiled .cs-files? I'm currently building a web app that will take some info from a database and replace some text in a Word document with that information. Then the document will be saved to a predefined location. Problem now is that when development began we were talking about 5 documents, but now we are looking at 20+ and documents will probably change over time. My initial thought was to "hardcode" classes for each document. But now that seems like a really stupid idea. The signature for the method would probably be something like: void Generate(string template, string outputFileName);. How is this done in the best way? Let's say I create an interface IGenerator which defines the Generate method and some method to describe the current "plugin". Could I then simply put .cs-files (or dlls) with a single class implementing that interface in some directory and then let the application find all these and let the user choose one? EDIT: Is there a good way to add/edit/delete these Document-classes without having to recompile the entire app or library with all classes? I would like each Document-class to stand for itself. I've read this article, which might be my best choice? Thanks. I would look at using a Factory Pattern to produce your document instances, and if you need to deal with different document types then think about having a Document abstract class that exposes overridable methods for getting data from and into a document. You then code to handle the Document class and not specific child classes, as the rest of your code doesn't care what the document is, only that it is handling one. You would then need to implement a Document sub-class to cope with different document types (Word .doc, .docx; Excel, Text etc.) and generate an instance of the correct class by using a Document Factory to which you pass a variable to determine the creation of the correct Document sub-class. 
If you are to have multiple documents in existence at any one time then you store instances of them in a List<Document>. Edit: Well, here's my thinking based on what you've stated so far. You're really only dealing with Office documents, and so you can either use an Office SDK, in which case you'll need to load up the appropriate library, or if you stick to Open XML format documents, you can produce them yourself. Have a look here for information on Office document formats. In any case you'll be dealing with, say, a Word document or an Excel document, but you may have multiple layouts of a Word document. If you write your document class to accept template information of some sort then you can store this in XML, either in your database or within a config file. If storing within a config file then reference it as a data source from your web.config, e.g. <appSettings file="different.config"/> That way you can make changes to the appSettings stored within different.config without needing to restart your web pages. Thank you. I now realize that my other question got a bit hidden in the original post. The problem that still exists is that these documents can change/more can be added and so on, so I'm also asking for a good way to change/edit/add the new Document classes without having to recompile more than I actually need. Original post updated with more info as well. I'm still a little unsure about the documents that you're dealing with. When you say that "these documents can change/more can be added" do you mean the type of document? How do they differ - layout, means of display, database fields? They're all Word docs aren't they? Word and Excel documents. These can be letters, PM, projects plan, description etc etc. From the beginning there would be one PM, one plan and so on. Now it seems like they would be able to go "here I want to be able to generate another type of (Word-)document". 
So I have to build a class for each type (PM, letter, etc.) that fetches all the info from the DB and puts it into the document. The problem now is that a week after I'm "done" they might say "we want a new type of doc". Therefore I'm looking for some kind of nice plugin platform so that I only have to upload one file and not rebuild. Will your Word/Excel documents have to be backward compatible or would use of the Open XML format, e.g. docx, be sufficient? No, Open XML is fine. The Word/Excel part is complete; I have code for the "replacing/generating" part. So my problem now is that I want to be able at any time to create a new "type" of document (say ProjectOverview) and in some easy way integrate this new document with the existing web application. Preferably by uploading a .cs (or a DLL) file to some directory on the server; the application then finds it, and the user can register it in the UI and bind it to a new type of document.
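The directory-scan idea generalizes beyond .NET. Here is a language-neutral illustration, sketched in Python for brevity (the C# equivalent would load each DLL with Assembly.LoadFrom and keep the types for which typeof(IGenerator).IsAssignableFrom(t) holds). Duck typing stands in for the IGenerator interface; all names below are illustrative, not part of the question's codebase:

```python
import importlib.util
import inspect
import os

def load_generators(plugin_dir):
    """Import every *.py file in plugin_dir and collect classes that expose
    a callable generate(template, output_file_name) method. Each file plays
    the role of one uploaded plugin (.cs/DLL in the .NET version)."""
    found = {}
    for fname in sorted(os.listdir(plugin_dir)):
        if not fname.endswith(".py"):
            continue
        spec = importlib.util.spec_from_file_location(
            fname[:-3], os.path.join(plugin_dir, fname))
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        for _, cls in inspect.getmembers(module, inspect.isclass):
            # Keep only classes defined in this file that look like generators.
            if cls.__module__ == module.__name__ and callable(getattr(cls, "generate", None)):
                found[getattr(cls, "name", cls.__name__)] = cls
    return found
```

Dropping a new file into the directory then makes a new document type discoverable without rebuilding the host application, which is the behavior the question asks for with uploaded .cs/DLL files.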
STACK_EXCHANGE
module MinikasPayable
  # Call the <tt>payable</tt> class method to enable the record to make transfers.
  #
  #   class Course < ActiveRecord::Base
  #     payable
  #   end
  #
  # This will enable any instance of course to call <tt>transfer</tt>.
  #
  #   course.transfer
  #
  # See <tt>MinikasPayable::Payer::ClassMethods#payable</tt> for configuration options.
  module Payer
    extend ActiveSupport::Concern

    module ClassMethods
      # == Configuration options
      #
      # * +amount+ The default payable amount. Defaults to +:amount+.
      # * +note+ Message for this transfer record. Defaults to +:to_s+.
      # * +bank_account+ Recipient's bank account number. Defaults to +:bank_account+.
      # * +recipient_name+ Recipient's name. Defaults to +:recipient_name+.
      # * +recipient_postal_code+ Recipient's postal code. Defaults to +:recipient_postal_code+.
      # * +recipient_postal_city+ Recipient's postal city. Defaults to +:recipient_postal_city+.
      # * +owner+ ActiveRecord instance that owns the transaction. Defaults to +:learning_association+.
      def payable(amount: :amount, note: :to_s, bank_account: :bank_account,
                  recipient_name: :recipient_name,
                  recipient_postal_code: :recipient_postal_code,
                  recipient_postal_city: :recipient_postal_city,
                  owner: :learning_association)
        return if included_modules.include?(MinikasPayable::Payer::PayableInstanceMethods)
        include MinikasPayable::Payer::PayableInstanceMethods

        class_attribute :payable_options, instance_writer: false
        self.payable_options = {
          amount: amount,
          note: note,
          bank_account: bank_account,
          recipient_name: recipient_name,
          recipient_postal_code: recipient_postal_code,
          recipient_postal_city: recipient_postal_city,
          owner: owner
        }

        has_many :transfers, as: :payable,
                 class_name: MinikasPayable::Transfer.name,
                 inverse_of: :payable
      end
    end

    module PayableInstanceMethods
      def paid
        @paid ||= transfers.sum(:amount)
      end

      def unpaid
        send(payable_options[:amount]).to_i - paid
      end

      def transfer_amount(amount)
        write_transfer(amount: amount) if amount.nonzero?
      end

      def transfer
        transfer_amount(unpaid)
      end

      private

      def reset_paid
        @paid = nil
      end

      def transfer_message
        send(payable_options[:note]).truncate(29).ljust(30).to_s
      end

      def transfer_batch
        MinikasPayable::Batch.where(owner: send(payable_options[:owner]), closed: false).first_or_create!
      end

      def write_transfer(amount: 0)
        raise ArgumentError, 'Transfer cannot be zero.' if amount.zero?

        transfer_batch.tap do |batch|
          new_transfer = transfers.where(batch: batch).first_or_initialize
          new_transfer.amount += amount
          if new_transfer.amount.zero?
            new_transfer.destroy
          else
            new_transfer.message = transfer_message
            new_transfer.save!
          end
        end
        reset_paid
      end
    end
  end
end
STACK_EDU
Why is the angle of the mouse and the player changing when the player is outside of the camera deadzone? This is a camera module designed in Lua for use in the Love2D framework. When you set up the camera, it takes a 'deadzone' where, when the followed object is within this 'deadzone', the camera doesn't move. The 'deadzone' is shown by the green corners around the centre of the screen. Here is what it looks like in my game when I'm just moving around. When the player character is within the deadzone, shooting works as expected. I am a relative novice at the Lua programming language, but in previous projects I've found that this code works and makes sense for finding the direction for a bullet, where mx and my are the mouse coordinates, and px and py are the player coordinates:

direction = math.atan((my-py)/(mx-px))
if px > x then
    direction = direction + math.pi
end

When I'm within the deadzone and shooting. But as the player moves outside of the deadzone and shoots at the same time, it adds a seemingly random angle to the direction. Moving and shooting. I have no idea what is going on. I've been using this code for finding mx and my:

mx, my = love.mouse.getPosition()
mx = (x-love.graphics.getWidth()/2)/window.scale+camera.x
my = (y-love.graphics.getHeight()/2)/window.scale+camera.y

Which follows the mouse perfectly. Showing that it is finding the correct mouse position. So I have no idea what is going on, because I presume px and py are correct, because if they weren't, it would be drawing them to the wrong location. Hopefully this all makes sense and I'm happy to clarify anything that isn't clear or exhibit any more code. Is mx updated before direction? And both camera and player either before or after the aiming? It looks like the aim is lagging a single frame behind, which can happen if you update direction, camera and player in the wrong order. First the camera is updated, then the player, then the aiming. Is that in the right order?
Not sure, and I'm mainly guessing here; I can't really wrap my mind around this right now. Take a look at https://love2d.org/wiki/love.run and consider that fetching the mouse position is not up to date. Not sure if this affects the behavior significantly. Do you have your project on GitHub or similar? I didn't add the GitHub link to the question because it's incredibly messy and might be tedious to decipher. But I'm guessing the problem stems from the shoot() function in game.lua and the modified love.run() in main.lua. From what I can gather, the order of player movement, mx and the camera doesn't seem to affect the aim... https://github.com/Scalzo-OS/arena-2 Ok, I think I found most of the issues: call camera:follow before camera:update to avoid this single-frame lag between changing direction. Move the two lines after the player movement to avoid another frame loss. (Both are unrelated to the actual issues; I just noticed a lot of jumping around.) Now the movement feels smoother. I decreased the bullet speed and verified their movement; they are perfectly aligned. Of course it will look like it's off, since bullets move absolutely and not relative to the player, but that's fine if it's a design choice. Thanks! This is super helpful! So, if my understanding is right, the angle is not off; it's just that the position of the mouse relative to the player is changing because the player is moving, and thus the camera is moving with it? Yes! If you shoot while moving, then stand still, you can see how the bullet moves correctly. Maybe slow down the bullet so it's easier to see.
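As a side note on the aiming code in the question: the atan-plus-pi quadrant patch can be replaced by a single atan2 call, which handles all four quadrants (Lua 5.1/LuaJIT, which Love2D uses, exposes this as math.atan2; Lua 5.3+ overloads math.atan(dy, dx)). A Python sketch of the same idea, with hypothetical names:

```python
import math

def aim_direction(px, py, mx, my):
    """Angle from the player (px, py) to the mouse (mx, my), both in the
    same (world) coordinate space. atan2 picks the correct quadrant, so no
    manual +pi correction is needed. On a screen where y grows downward,
    angles increase clockwise."""
    return math.atan2(my - py, mx - px)
```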
STACK_EXCHANGE
Dialogflow API access follow up intent in Python I am following the following tutorial to understand the Dialogflow Python API. Here is how my adaptation looks:

import dialogflow_v2 as dialogflow
import json
from google.api_core.exceptions import InvalidArgument
from google.oauth2 import service_account

dialogflow_key = json.load(open(r'path_to_json_file.json'))
credentials = service_account.Credentials.from_service_account_info(dialogflow_key)
session_client = dialogflow.SessionsClient(credentials=credentials)

DIALOGFLOW_LANGUAGE_CODE = 'en-US'
DIALOGFLOW_PROJECT_ID = 'some_project_id'
SESSION_ID = 'current-user-id'
session = session_client.session_path(DIALOGFLOW_PROJECT_ID, SESSION_ID)

text_to_be_analyzed = "mobile data"
text_input = dialogflow.types.TextInput(text=text_to_be_analyzed, language_code=DIALOGFLOW_LANGUAGE_CODE)
query_input = dialogflow.types.QueryInput(text=text_input)

try:
    response = session_client.detect_intent(session=session, query_input=query_input)
except InvalidArgument:
    raise

print("Query text:", response.query_result.query_text)
print("Detected intent:", response.query_result.intent.display_name)
print("Detected intent confidence:", response.query_result.intent_detection_confidence)
print("Fulfillment text:", response.query_result.fulfillment_text)

Here is what the program outputs:

Query text: mobile data
Detected intent: support.problem
Detected intent confidence: 0.41999998688697815
Fulfillment text: Make sure mobile data is enabled and Wi-Fi is turned off.

Now my intent support.problem has a follow-up intent support.problem-yes, where the customer replies Done it and gets back another response, Let us try another step. How do I pass text/query to the follow-up intent, and how do I get the response in Python? The response.query_result object should also contain an output_contexts field, which should be an array of Context objects. This array should be passed back via the contexts field of the query parameters you pass to detect_intent().
You should be able to create a dictionary with the query parameters fields (including one for contexts) that you pass to session_client.detect_intent. session_client.detect_intent accepts query_params, but how do I access query_parameters.contexts for passing response.query_result.output_contexts? You should be able to use a dictionary. Answer clarified. Thank you! Here is what I ended up doing: qp = dialogflow.types.QueryParameters(contexts = response.query_result.output_contexts) and then response_1 = session_client.detect_intent(session = session, query_params=qp, query_input = query_input_1), where query_input_1 is a new query with Done it in it. It all worked, and it turns out I likely did not have to create an additional object for the contexts.
STACK_EXCHANGE
Reconsider/generalize USDM structure for organizations Organizations are now nested in identifier or inherited by ResearchOrganization. See also tickets #258 and #264. We now have the following issues: Based on #258, organizationType is inherited from the organization class. This is not a research organization type. Despite the name change it can still be regarded as such if in the researchOrganization class. Also, this might overlap with an instance type. Sponsor organizations are defined in the protocol with corresponding contact details. This information is now nested in the identifier class (in the case of a sponsor organization). In case we have multiple identifiers from the same organization, we will have a repeat of organization information (can happen, but not often). An organization can be of multiple types. Or is the type then more a role? Therefore I would propose the following: Link organizations directly to the StudyVersion class with all the corresponding details, so they can be reused across study designs. Inherit or refer to an organization when and where needed elsewhere in the model. Also want to link this with TMF needs. Organizations: data monitoring committees, others? See #253. Reviewed the latest M11 draft with the following informational organization options: Sponsor, Co-sponsor, Local Sponsor, Manufacturer, intervention sourcing, randomization sourcing, committees like independent data monitoring, dose escalation committee, data safety monitoring board.
I reviewed 5 different protocols with final dates between 2017 and 2023, with the following observations: committee naming varies; investigators are directly identified in the protocol in investigator-initiated studies and phase I/II studies; medical expert; contacts with CROs and corresponding roles are mentioned; central labs are mentioned in phase I/II studies; many different roles and names are mentioned once, like project manager, study drug manager, advisor, protocol chair, protocol approver; different contact points for the sponsor with corresponding differences in address details, like sponsor pharmacovigilance and customer service. @EMuhlbradt @czwickl @dih-cdisc Please see below an overview of the proposed changes: Nest all organizations at the StudyVersion level with relationship organizations and delete the corresponding organizations relationship from the studyDesign class. Add a new studyPersonal class (or better name) to store information about study-involved personnel relevant in the study design phase. Move the manages relationship from the ResearchOrganization class to the Organization class and delete the ResearchOrganization class. This allows a sponsor to also refer to its own clinics as sites (for example, phase 1 units). Delete the currentEnrollment relationship from the StudySite class and add a relationship location to the SubjectEnrollment class instead, which points to the StudySite class. Add more CT to the organizationType class to cover most of the organizations mentioned (see previous comment). Discuss if we want to add notes attributes to the Organization, Address and/or the new StudyPersonal class. @dih-cdisc @EMuhlbradt @czwickl Please find below the updated picture based on our discussions today. I added the masking in the picture to see whether this might need to be linked. @EMuhlbradt @dih-cdisc @czwickl New update. I recreated the UML with the changes discussed in the previous days. See below. To discuss tomorrow, especially the naming of classes and relationships.
Responses: 'organizationType' -> 'type' relationship name agreed. Organization - manages -> StudySite: relationship name 'manages' is good. Organization -> StudySite cardinality 0..* is good. SubjectEnrollment - location -> StudySite: the 'location' relationship needs a better name; 'forSite' might be better. Erin & Craig can think about it. NamedIndividual: Erin & Craig to ponder. @BSnoeijerCD @dih-cdisc @EMuhlbradt @czwickl Minor comment: I'm not hugely keen on manages as the name for the 0..* relationship between Organization and StudySite. Our usual convention is to name relationships with a singular or plural noun that describes the target(s) of the relationship, but "manages" is a verb. In this case the "s" at the end of "manages" does not indicate multiple sites (if sites are referenced by id, this relationship would become manageIds in the API according to the usual convention). I think something like managedSites would be better. As discussed during scrum today: changed the relationship from SubjectEnrollment to StudySite from 'location' to appliesTo, because based on M11 we anticipate that cohorts might be another item to which an enrollment can apply. We anticipate making this update in the follow-up sprint with alignment to M11. Changed the relationship from Organization to StudySite from 'manages' to managedSites to be more specific and to include the indication that it is plural. Changed the class name "NamedIndividual" to AssignedPerson, as it is an individual who is assigned to the specific study role. The corresponding relationship from the StudyRole to the AssignedPerson class is also changed, to assignedPersons, which is plural because there can be more than one. See diagram below. To discuss tomorrow: should we add more attributes to the AssignedPerson class? @BSnoeijerCD @dih-cdisc The discussions at yesterday's (2024-09-04) scrum prompted some thoughts about the definition of StudyEnrollment.
Now that we've added the appliesTo relationship to StudyEnrollment, we've talked about adding something like "Site" (or maybe "Local"?) to the terminology for GeographicScope.type. However, we chose "appliesTo" as the name of the new relationship in anticipation of the M11 need to define enrollments for cohorts (so that we can, in future, define an enrollment as applying to either a site or a cohort), and I was wondering what the GeographicScope.type value should be for a cohort-based enrollment. At the moment, the model indicates that "StudyEnrollment is a type of GeographicScope" (because the relationship is a generalization relationship). However, it appears it might be more accurate to say that a study enrollment "has a scope" that can be defined geographically, according to site, or according to cohort. @BSnoeijerCD @dih-cdisc @ASL-rmarshall : Proposed new items and changes for ticket 293 in the attached: DDF_293.xlsx See row 17 in the 'new' tab for a question we had about an item name and whether we need to build a codelist @czwickl @BSnoeijerCD @dih-cdisc As discussed on the scrum call of 5th Sept '24: adjust when we align with M11; probably "appliedTo" a site, some geographic scope or a cohort @EMuhlbradt @czwickl As indicated in the Excel attached above, we need the following roles for M11: Sponsor (do we need a separate one for Co-sponsor?), Manufacturer, Local Sponsor, Regulatory Agency (for the identifiers), Medical Expert, Independent Data Monitoring Committee, Dose Escalation Committee, Data Safety Monitoring Board. Other use cases from differential search: (principal) investigator, Study Drug Manager, Clinical Trial Physician, Project Manager, CRO / Sites, CRO / data scientist/analysis, Laboratory, Pharmacovigilance @dih-cdisc @BSnoeijerCD @EMuhlbradt @czwickl As agreed (see comment above), we're deferring reorganization of study enrollment scopes until M11 alignment. In the meantime, what should the GeographicScope type be for a StudyEnrollment that appliesTo a StudySite? A type is required, and at the moment it can only be "Global", "Region", or "Country". @ASL-rmarshall : The coming release 3.6 is not a final version, and this issue will be handled for the next release, 3.7, so we do not have to define that specifically now. If someone wants to test the interim model for now, then we can leave it up to the user to decide. We should not make rules for this.
GITHUB_ARCHIVE
Migration to the cloud has paved the way for heavily automated deployment processes. Teams rely on deployment automation not just for deploying regular updates to their applications, but for the underlying cloud infrastructure as well. There are various deployment tools available in the market to set up pipelines for almost everything we can think of. Faster delivery, less manual effort, and easier rollbacks are now driving the agenda for Zero Touch Deployments. What does Zero Touch in Cloud mean? We would love a cloud environment where workload AWS accounts, especially a production account, require no console login to design, implement, and operate the infrastructure and application resources. The team could have read access to view the resources, but that's as far as they can go. This helps avoid human errors such as forgetting to check the resource ARN before modifying or deleting the resource in an AWS CLI command, which happens to a lot of developers. Resolving these issues is the idea behind Zero Touch. Using pipelines and IaC (Infrastructure as Code) tools, it becomes easier to apply it practically. In picture (a), the IAM role "Shared-Deployment-Role" in the "Shared Deployment" account assumes IAM roles in the workload accounts to deploy resources. The workload accounts could have additional roles that allow users to assume them and log into a specific account. Users may have read-only access in the Prod account to view services and resources. The "Deployment-Role" in each workload account is created along with the initial infrastructure layer using the IaC tool (AWS CloudFormation/Terraform/AWS CDK) and pipelines (CodePipeline/GitLab/Jenkins/Bitbucket). AWS CodePipeline is configured in the Shared Deployment account, and IaC templates are stored in the AWS CodeCommit repository for version control. Picture (b) gives a high-level understanding of how application deployment and infrastructure deployment pipelines would look in AWS Cloud.
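For illustration, the cross-account hop in picture (a) depends on each workload account's "Deployment-Role" trusting the shared account's role. A minimal trust policy on the workload-account role might look like the following sketch (the account ID and role names are placeholders, not values from the article):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111111111111:role/Shared-Deployment-Role"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

The pipeline in the Shared Deployment account then calls sts:AssumeRole against each workload account's role before deploying there.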
Using CloudFormation templates, CodeBuild, and CodePipeline, we deploy resources such as (but not limited to) IAM roles for deployment, VPCs, subnets, Transit Gateways/attachments, and Route 53 hosted zone(s). These services and resources are necessary to deploy and launch the application. The resource ID/ARN values are stored in Parameter Store for consumption by the application's IaC templates. Parameter Store helps in developing reusable IaC templates. How? The answer is to create Parameter Store keys with the same name across all the workload accounts and allow the infrastructure templates to update the values dynamically. Deployment of the infrastructure layer is generally managed by the organization's IT team with approved AWS services and the organization's cloud best practices. Every application in an organization can differ in the services required to host it in the cloud. Application developers or DevOps teams can choose any one, or a combination, of the approved CI/CD and IaC tools to design and host the application in workload accounts. Teams can leverage CodePipeline, CodeBuild, and CodeDeploy in the Shared Deployment account to build and deploy applications in workload accounts by assuming the respective "Deployment" roles. Remember that the IT team has created parameters that hold the resource ID(s)/ARN(s) of resources that can be consumed by application templates. An Agile model for developing, testing, and deploying application templates is encouraged, ensuring only clean and tested code/template(s) go into production. There is no single "best" way of designing infra and application deployment. Size, complexity, cost, and time could determine what is optimal. A Zero Touch Cloud Deployment strategy can comprise various permutations and combinations of infra and application components. Either way, the motive behind the approach is to minimize human errors and many sleepless nights.
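As a sketch of the reusable-template idea above: the infrastructure stack writes a resource ID to a Parameter Store key whose name is identical in every workload account, and application templates resolve that name at deploy time. The parameter name and logical IDs below are made up for illustration; these are two fragments from two separate CloudFormation templates:

```yaml
# Infrastructure stack: publish the VPC id under a fixed, account-agnostic name.
VpcIdParam:
  Type: AWS::SSM::Parameter
  Properties:
    Name: /org/network/vpc-id
    Type: String
    Value: !Ref Vpc

# Application stack: resolve that same name at deploy time, so one template
# deploys unchanged to any workload account.
Parameters:
  VpcId:
    Type: AWS::SSM::Parameter::Value<String>
    Default: /org/network/vpc-id
```

Because only the parameter's value differs per account, the application template itself never needs per-account edits.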
OPCFW_CODE
graymorphlow.c - Low-level grayscale morphological operations
    void dilateGrayLow()
    void erodeGrayLow()

We use the van Herk/Gil-Werman (vHGW) algorithm, [van Herk, Patt. Recog. Let. 13, pp. 517-521, 1992; Gil and Werman, IEEE Trans PAMI 15(5), pp. 504-507, 1993.] This was the first grayscale morphology algorithm to compute dilation and erosion with complexity independent of the size of the structuring element. It is simple and elegant, and surprising that it was discovered as recently as 1992. It works for SEs composed of horizontal and/or vertical lines. The general case requires finding the Min or Max over an arbitrary set of pixels, and this requires a number of pixel comparisons equal to the SE "size" at each pixel in the image. The vHGW algorithm requires not more than 3 comparisons at each point. The algorithm has been recently refined by Gil and Kimmel ("Efficient Dilation, Erosion, Opening and Closing Algorithms", in "Mathematical Morphology and its Applications to Image and Signal Processing", the proceedings of the International Symposium on Mathematical Morphology, Palo Alto, CA, June 2000, Kluwer Academic Publishers, pp. 301-310). They bring this number down below 1.5 comparisons per output pixel, but at a cost of significantly increased complexity, so I don't bother with that here. In brief, the method is as follows. We evaluate the dilation in groups of "size" pixels, equal to the size of the SE. For horizontal, we start at x = "size"/2 and go (w - 2 * ("size"/2))/"size" steps. This means that we don't evaluate the first 0.5 * "size" pixels and, worst case, the last 1.5 * "size" pixels. Thus we embed the image in a larger image with these augmented dimensions, where the new border pixels are appropriately initialized (0 for dilation; 255 for erosion), and remove the boundary at the end. (For vertical, use h instead of w.)
Then for each group of "size" pixels, we form an array of length 2 * "size" + 1, consisting of backward and forward partial maxima (for dilation) or minima (for erosion). This represents a jumping window computed from the source image, over which the SE will slide. The center of the array gets the source pixel at the center of the SE. Call this the center pixel of the window. Array values to the left of center get the maxima (minima) of the pixels from the center one and going to the left an equal distance. Array values to the right of center get the maxima (minima) of the pixels from the center one and going to the right an equal distance. These are computed sequentially starting from the center one. The SE (of length "size") can slide over this window (of length 2 * "size" + 1) at "size" different places. At each place, the maxima (minima) of the values in the window that correspond to the end points of the SE give the extremal values over that interval, and these are stored at the dest pixel corresponding to the SE center. A picture is worth at least this many words, so if this isn't clear, see the leptonica documentation on grayscale morphology. void dilateGrayLow ( l_uint32 *datad, l_int32 w, l_int32 h, l_int32 wpld, l_uint32 *datas, l_int32 wpls, l_int32 size, l_int32 direction, l_uint8 *buffer, l_uint8 *maxarray ) dilateGrayLow() Input: datad, w, h, wpld (8 bpp image) datas, wpls (8 bpp image, of same dimensions) size (full length of SEL; restricted to odd numbers) direction (L_HORIZ or L_VERT) buffer (holds full line or column of src image pixels) maxarray (array of dimension 2*size+1) Return: void Notes: (1) To eliminate border effects on the actual image, these images are prepared with an additional border of dimensions: leftpix = 0.5 * size rightpix = 1.5 * size toppix = 0.5 * size bottompix = 1.5 * size and we initialize the src border pixels to 0. This allows full processing over the actual image; at the end the border is removed.
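A minimal one-dimensional Python sketch of this scheme (my own illustration, not leptonica's C implementation; the function and variable names are invented) may make the partial-maxima trick concrete:

```python
def vhgw_dilate_1d(src, size):
    """1-D grayscale dilation (running max) with a flat SE of odd length
    `size`, via the van Herk/Gil-Werman partial-maxima scheme.
    For erosion, replace max with min and pad with 255 instead of 0."""
    assert size % 2 == 1 and size >= 1
    half = size // 2
    n = len(src)
    # Embed the signal in a larger padded array so every window access is
    # in bounds; 0 is the identity element for max.
    pad = [0] * size + list(src) + [0] * (2 * size)
    out = [0] * n
    for g in range(0, n, size):        # one group of `size` output pixels
        c = size + g + half            # window center, in padded coordinates
        # Backward (L) and forward (R) partial maxima, computed outward
        # from the center pixel of the window.
        L = [0] * size
        R = [0] * size
        L[0] = R[0] = pad[c]
        for k in range(1, size):
            L[k] = max(L[k - 1], pad[c - k])
            R[k] = max(R[k - 1], pad[c + k])
        # Slide the SE across the window: the max over each placement is
        # the max of the partial maxima at its two end points.
        for j in range(size):
            p = g + j
            if p < n:
                out[p] = max(L[size - 1 - j], R[j])
    return out
```

Building L and R costs about two comparisons per pixel and the final merge one more, which is where the bound of roughly three comparisons per output pixel comes from, independent of the SE size.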
(2) Uses algorithm of van Herk, Gil and Werman void erodeGrayLow ( l_uint32 *datad, l_int32 w, l_int32 h, l_int32 wpld, l_uint32 *datas, l_int32 wpls, l_int32 size, l_int32 direction, l_uint8 *buffer, l_uint8 *minarray ) erodeGrayLow() Input: datad, w, h, wpld (8 bpp image) datas, wpls (8 bpp image, of same dimensions) size (full length of SEL; restricted to odd numbers) direction (L_HORIZ or L_VERT) buffer (holds full line or column of src image pixels) minarray (array of dimension 2*size+1) Return: void Notes: (1) See notes in dilateGrayLow()
Chapter 474: Looking for Food (2) With the prickly fruits wedged between their bodies, the unprepared Parker didn’t sense anything. On the contrary, Bai Qingqing, who prepared for it, felt a sharp pain from being pricked. “Aiyah!” Bai Qingqing yelped in pain. She loosened her grip, and the prickly fruits scattered all over the ground. Upon detecting the smell, the little cubs rushed over. They lowered their heads, and the tips of their noses were pricked by the thorns, causing them to let out continuous roars and shake their heads repeatedly. Parker hurriedly set her down to check. There was now a tiny red dot on Bai Qingqing’s snow-white chest from being pricked by the fruit. He bent over to blow upon her chest with a pained expression. “Does it hurt a lot?” Bai Qingqing gasped and stroked her chest, before saying, “A little. I’ll be fine in a while. Hurry up and peel the chestnuts. I want to eat them.” “Little gluttonous beast.” Parker tapped her delicate nose, then bent over to pick up a prickly fruit. He asked doubtfully, “Is this edible? Did that tiger beastman give you this?” “Mm, they can be eaten—Becky’s eaten several. I have tried something similar to this in the past.” Bai Qingqing rubbed her nose. She pulled Parker to their own tree, where there was a pile of extinguished ashes. “They taste even better when cooked. Let’s roast a few as an experiment.” Parker naturally had no objections. He picked up the flint by the side and efficiently started a fire. He then tossed two prickly fruits into the fire to roast them, before helping Bai Qingqing peel the raw chestnuts. Although Bai Qingqing wasn’t fond of chestnuts in the past, this one seemed particularly sweet, making them taste like fruits. Once she tried one, she couldn’t stop eating them. The prickly balls in the fire had turned black, and smoke was rising from within.
Parker added several pieces of firewood and said, “I remember there being many such fruits in the forest. I’ll go and pick some later to return to them…” “Mmmm.” Bai Qingqing nodded. “Let’s return the cooked ones to them. Since Becky is a foodie, delicious food will definitely aid her in recovering from her trauma.” Parker didn’t comment. He looked up at the skies and said, “It’s drier here than in the City of Beastmen. Looks like the heavy rainy season is about to end soon. I’ll take you out for a walk and find your favorite foods to store them.” “Okay, chestnuts can be stored for a long time.” Bai Qingqing nodded excitedly. She stroked her coarse tube top and chuckled. “I’ll finally get to wear fine clothes.” “When that time comes, we’ll lay these old animal skins on the floor. It will definitely be very comfortable,” said Bai Qingqing with a look of anticipation. Parker grinned foolishly as he stared at her. “We’ll do as you say.” As the two of them visualized their blueprint for the future, they lost track of time, resulting in the few prickly fruits in the fire turning into several balls of fire. It was Bai Qingqing—who remembered her food—who first realized this. She hurriedly used a wooden rod to dig them out of the fire, before carrying over a brick to smash them. Bai Qingqing hurriedly scampered away to avoid getting hit by the ashes dancing about on the floor. The startled little cubs, who had no idea what was going on, scurried around like little rats on the grass upon hearing their mother’s screams combined with the smashing sound. Parker reached out to scoop up Third who happened to run by his leg. Putting on the stern expression of a father, Parker ordered, “All of you, stop where you are.” Roar! The leopard cubs gazed at their surroundings warily as they ran to their father’s feet. Bai Qingqing sputtered with laughter, then walked to the chestnuts, avoiding the fire.
Although the shells were completely burnt, the chestnuts were only slightly charred. Parker grabbed one chestnut and peeled it, then fed it to Bai Qingqing. “Delicious!” Bai Qingqing breathed out hot air as she spoke. Aside from the faint charred smell, one couldn’t find any fault with the taste. “Let’s go out now, I can’t wait,” said Bai Qingqing. Parker cast a helpless glance at her, before sending the cubs into the tree hole. He tossed the cubs onto the sleeping Curtis, then set off with Bai Qingqing on his back.
Ability Score Improvement Redundancy What does this do? The Ability Score Improvement feature is shared by many classes. Every class except for rogue and fighter uses the same Ability Score Improvement feature in the SRD. Despite this, there is a unique feature for each class in the API, with the exact same information. This repetition of information is redundant, and as such the 10 sharing classes have had their Ability Score Improvement features condensed into one (actually 5, one for each level at which they are obtained). They're named ability-score-improvement-1, ability-score-improvement-2, etc. The features for Fighter and Rogue are unchanged. Currently features have a class attribute which indicates which class has the feature, and it supports only one class reference. As such, I've given the new ability score improvement features an attribute called classes which is just an array of class APIReferences. How was it tested? It wasn't. Is there a Github issue this is resolving? No Did you update the docs in the API? Please link an associated PR if applicable. N/A. @ogregoire you mentioned in https://github.com/5e-bits/5e-database/pull/384 that this change could be refined. What would you suggest? First, there's a bug: in the current presentation, the barbarian has no way of having any ASI because it's forgotten in "ability-score-improvement-1". Why this change? But more broadly, I question the need for such a requirement of moving the ASI into only one place. I don't see the issue with the current implementation. Why is redundancy bad? Are we creating a 6NF database? (No, we aren't). And redundancy is present everywhere. We've always preferred to have things laid out explicitly rather than implicitly, and the change you suggest introduces a shift in that philosophy. While it's not written in stone, it's the idea when you look at the whole database. This change makes extensions hard to implement.
The advantage of laying out things explicitly is that it's entirely possible to extend the database much more easily; it makes it more open to customization by others and less prone to side effects. For instance, I create a "mod" that says "Druids can take an extra ASI" at level 9, and want to mention it in the druid text. As a modder, it makes sense that I'm gonna have to change the druid ASI instances, and add mine. But now, with the proposal you make, I have to create all the druid ASIs myself and also modify the classes of the main ASI to remove druid from it. This is counter-intuitive. Another example: if anyone wants to go the D&D Beyond route and change the text for each ASI to the following, they now can't without removing the new ASI implementation and redoing the old one themselves, whereas before I imagine people would simply overwrite the description text. Level 4: "When you reach 4th level, and again at 6th, 8th, 12th, 14th, 16th, and 19th level" Level 8: "When you reach 8th level, and again at 12th, 14th, 16th, and 19th level" One might question the need for extension, but if you look at why the SRD exists at all (check WotC's own FAQ), it's to be the base that is extensible. The very existence of the SRD is to make D&D extensible by everyone and not only by WotC. Look at each new sourcebook to see that D&D invites extension of its system every time. If one cannot easily extend, I don't see the point of this project. So to me, this makes extensions way harder to implement with regard to ASI changes, and for what gain? Less clutter? Not worth it, IMO. Technically, it should be improved. Another issue is the change from class to classes, but only for some features. This is unacceptable. We have two fields that mean exactly the same thing, except one is a list and one is a single item. This duality between class and classes introduces complexity where we had none before.
If this change must go live, all features should have classes rather than class and use the list/array structure. Now what? So I'm not against reducing clutter, but I believe it should be done in a smarter way than what is presented here. I haven't found anything that works better than what exists now, so if you want to rethink this idea to make it less explicit while keeping its openness and philosophy, I'm all ears. But on a technical level, the barbarian must be added, and the class attribute should be replaced with classes (and a list) everywhere. First, there's a bug: in the current presentation, the barbarian has no way of having any ASI because it's forgotten in "ability-score-improvement-X". This escaped my notice, I'll fix it. But more broadly, I question the need for such requirement of moving the ASI in only one place. I don't see the issue with the current implementation. Why is redundancy bad? Are we creating a 6NF database? In my last semester of college I had to take a database class, and it involved a focus on normalization and cutting out redundancy. I suppose it's more of a habit than anything at this point, and I didn't question whether or not it should be included. And redundancy is present everywhere. We've always preferred to have things being laid out explicitly rather than implicitly Can I have an example of where this philosophy is in play, if any are known? This change makes extensions hard to implement. The advantage of laying out things explicitly is that it's entirely possible to extend the database much more easily, it makes it more open to customization by others and less prone to side effects. For instance, I create a "mod" that says "Druids can take an extra ASI" at level 9, and want to mention it in the druid text. As a modder, it makes sense that I'm gonna have to change the druid ASI instances, and add mine.
But now, with the proposal you make, I now have to create myself all the druid ASI and also modify the classes of the main ASI to remove druid from it. This is counter-intuitive. You would still have to create a new ASI feature for the specified level with the current system, but I suppose creating 1 feature is less than creating 5. Valid point, however I don't think it's particularly relevant since there's not a situation where we would ever have to create such a modification. Nonetheless, valid. Another example, if anyone wants to use the D&D Beyond route, and change the text for each ASI to the following, they now can't without removing the new ASI implementation and redoing the old one themselves, where before I imagine people would simply overwrite the description text. Level 4: "When you reach 4th level, and again at 6th, 8th, 12th, 14th, 16th, and 19th level" Level 8: "When you reach 8th level, and again at 12th, 14th, 16th, and 19th level" I'm not quite sure what you mean by this. Technically, it should be improved. Another issue is the change from class to classes but only for some features. This is unacceptable. We have two fields that mean exactly the same thing except one is a list and one is a single item. This duality between class and classes introduces complexity where we had none before. If this change must go live, all features should have classes rather than class and use the list/array structure. I agree with this, and it's something I'd be willing to change. Now what? So I'm not against reducing clutter, but I believe it should be done in a smarter way than what is presented here. I haven't found anything that works better than what exists now, so if you want to rethink this idea to make it less explicit while keeping its openness and philosophy, I'm all ears. But on a technical level, the barbarian must be added, and the class attribute should be replaced with classes (and a list) everywhere. I agree.
What you say about removing clutter not being strictly necessary, especially given a potential decrease in extensibility, rings true. If you can show some examples of us choosing explicitness over removing redundancy, then I'll likely drop this change. Otherwise I'll start by making the mentioned changes. In my last semester of college I had to take a database class, and it involved a focus on normalization and cutting out redundancy. I suppose it's more of a habit than anything at this point, and I didn't question whether or not it should be included. Normalization is the right thing to do with SQL databases, but NoSQL is a different beast. A consistent schema is more the name of the game here. Also, this change doesn't exactly work because Rogue and Fighter are slightly different from the other classes. Rogue gets an extra ASI at 10th level and Fighter gets an extra ASI at 6th level. I appreciate the optimization that you're attempting here, but I don't think it quite works. You will still have to make 3 different sets of ASI entries: a general one and specific ones for Rogue and Fighter. I think we should close out this PR for now because it doesn't feel like the right fit. Thank you for the time and thought you put into this, though.
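For reference, the condensed feature proposed in this PR would have looked roughly like the following sketch. The field set is inferred from the discussion (an index, a level, and a classes array of APIReferences with index/name/url); it is not copied from the repository, and the class list is abbreviated:

```json
{
  "index": "ability-score-improvement-1",
  "name": "Ability Score Improvement",
  "level": 4,
  "classes": [
    { "index": "barbarian", "name": "Barbarian", "url": "/api/classes/barbarian" },
    { "index": "bard", "name": "Bard", "url": "/api/classes/bard" },
    { "index": "cleric", "name": "Cleric", "url": "/api/classes/cleric" }
  ]
}
```

Note that barbarian appears here because the bug discussed above (its omission from ability-score-improvement-1) would have had to be fixed before merging.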
This blog has been reposted from https://blog.datalets.ch/054. For the past two and a half years, the artist Jürg Straumann has been working on a digital retrospective of his life’s work, spanning over four decades of visual art. The latest stage of this project involved creating an interactive way to browse this unique and very personalized database. During our workshop on Open Data Day, March 3 – while Rufus Pollock’s book The Open Revolution was passed around the room – I introduced a gathering of collectors and art experts to Open Knowledge and OpenGLAM. We discussed the question of how new channels and terms like Creative Commons support both the artwork and the artist in a digital economy. And we got lots of great feedback for our project together, which you can read about in this post. Wahnsinnig viel Züg, es isch e wahri Freud! (Swiss German, approx. translation: So much stuff, a true delight!) Over my years as a web developer I have worked on several collaborations with artists like Didier Mouron/Don Harper or Roland Zoss/Rene Rios, and on various ‘code+art’ projects like Portrait Domain with the #GLAMhack and demoscene communities. I’m drawn to this kind of project both from a personal interest in art and its many incarnations, as well as from the fascinating opportunity to get to know the artist and their work. When Jürg approached me with his request, I quickly recognized that this was a person who was engaged at the intersection of traditional and digital media, who explores the possibilities of networked and remixed art, who is meticulous, scientific, excited by the possibilities, and committed to the archiving and preservation of work in the digital commons. I was very impressed with the ongoing efforts to digitize his life’s works on a large scale, and jumped in to help bring them to an audience. During this same time, I’ve been working on implementing the Frictionless Data standards in various projects.
Since he gave me complete freedom to propose the solution, the first thing I did was to use Data Package Pipelines to implement a converter for the catalogue, which was in Microsoft Excel format as shown in the screenshot below. In this process we identified various data issues, slightly improved the schema, and created a reliable conversion process which connected the dataset to the image collection. The automatic verifications in this process started helping to accelerate the digitization efforts. Together with Rebekka Gerber, an art historian who works at the Museum für Gestaltung Zürich, we reviewed various systems used for advanced web galleries and museum websites, such as: - Europeana – europeana.eu – via glam.opendata.ch - Omeka S / Classic – showcase – via openglam.org - Collective Access – showcase - TMS Suite – emuseum.ch - Wagtail / Django – RCA Now / Museum Arnhem While they all had their advantages and disadvantages, we remained unsure which one to commit to: budget and time constraints led us to take the “lowest-hanging fruit”, and …not use any backend at all. Our solution, inspired by the csvapi project by Open Data Team, is an instant JSON API. Like their csvapi, ours works directly from the CSV files, which are first referenced from the Data Package generated by our pipeline using the Python Data Package library. Based on this API, I wrote a simple frontend using the Twitter Bootstrap framework I’m used to hacking on for short-term projects. Et voilà! A powerful search interface in the hands of one of our first beta-testers. When you see it – and I hope pretty soon at least a partial collection will be available online – you’ll notice a ton of options. Three screenfuls of various filters and settings to delight the art collector, exploring the collection of nearly 7’000 images with carefully nuanced features. If you’ve been reading this blog, you can imagine that it is a collection that could also delight a Data Scientist.
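The "instant API over CSV" idea can be sketched in a few lines of Python. This is an illustration of the approach only, not the project's actual app.py; the function name and the catalogue columns are invented:

```python
import csv
import io

def query_catalogue(csv_text, **filters):
    """Tiny sketch of an 'instant API' over a CSV catalogue: parse the
    rows and return those matching every filter, as a list of dicts
    ready to be serialized to JSON. (The real project reads the CSV
    files referenced from a Data Package and serves results over HTTP.)"""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [row for row in rows
            if all(row.get(key) == value for key, value in filters.items())]
```

A thin HTTP layer on top of such a function then turns every CSV referenced in the Data Package into a queryable JSON endpoint, with no database backend at all.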
If there is interest, I am happy to separately open source the API generator that was made in this project. And our goal is to get this API out there in the hands of fellow artists and remixers. For now, you can check out the code in app.py. The open source project is available at github.com/loleg/panoptikum, and we are going to continue working on future developments in this repository. The content is not yet available to the public, since we are still working out the copyright conditions and practical questions. Nevertheless, we wish to share some insight into this project with more people through workshops, exhibitions and this blog. Wenn Kunst vergrabe isch und vergässe gaht, isch es es Problem für alli Aghörige, e furchtbari Belastig für d Nachkomme. (When art is buried and is lost, it is a problem for all involved, a terrible weight for the next generation.) In a good 40 years of work as a visual artist (in the conventional media of drawing, printmaking and painting), over 6,600 smaller and larger works have accumulated in my collection. In retrospect, these prove to be unusually diverse, but with sporadically recurring elements, somehow connected by a personal “sound”. Very early on I tried to systematize the spontaneous development of sculpture in different directions. This is the basic idea of the project PANOPTIKUM (since 2000), whereby the categorizations of the whole uncontrolled growth are only the basis for further artistic works – which should, ironically, dissolve the whole again. In the middle of 2016, with the help of numerous experts, I began to compile a catalogue of my works, i.e. to scan or photograph my works and then to index them in a differentiated way in an Excel spreadsheet. In 2018, Oleg Lavrovsky agreed to make the collected data accessible as desired, i.e. after entering the search terms, to display the respective images numerically and optically on the screen by means of a filter function. 
This is a prerequisite for the fact that in the coming years it will be possible to continue working with the image material in a variety of creative ways. Our project takes the form of an application, which can also be reviewed and further developed by other people (Open Source). The copyright and publication rights for all content remain with me, the created app can be freely used as a structure for other projects. In the longer term, general accessibility via the Internet is planned. At the moment, however, all content should only be available to individual interested parties. After the completion of this basic work, whereby the directory is to be supplemented about every six months, the task now is to concretize own artistic projects: digital graphics and an interactive work as well as possibly videos are pending. For this I am dependent on expert support, the search for interested persons continues. Commissioned works as well as forms of egalitarian cooperation are possible. In addition, the image material may also be made available for independent projects of third parties. The starting point and pivotal point of the PANOPTIKUM project is in any case the question of what can be done with a catalogued visual work. A wide variety of sub-projects can be created over an unlimited period of time (artistically, art historically, statistically, literarily, musically, didactically, psychologically, parodistically… depending on the point of view and interests of the participants). The central idea is to make a visual work accessible in an unusual and entertaining way. To capture additional public benefit through revision. Potential goals include: - Unusual: the very differentiated formal and content-related recording of one’s own work, which becomes the basis for further creations (self-reflexiveness and reference to the outside world). - Entertaining: exploring in a playful way (e.g. 
searching for the unknown author of this picture pool, memory, domino, competition, etc.) by means of interactive functions, games, VR applications. - Artistic work: my own works (approx. 6,600 drawings, paintings and prints), which are presented anonymously and with a good pinch of irony and questioned. - Making accessible: multimedia, on various channels: exhibition spaces (also improvised and private), internet, cinema. The target audience is as broad as possible, especially outside the usual art scene. - Stimulating: the desire to look, the pleasure of pleasurable immersion (flood of images!). On the other hand, thoughts about identity, freedom, openness. - Useful: sustainability material: ecological aspects in production and presentation. Social sustainability: smaller events, e.g. with the sale of the works at very favourable conditions in favour of “Public Eye” (instead of a rubble dump at the end of life!). Thus discussion about artist’s estates, archiving, economic aspects (art trade). Any visual material for teaching (art history, art mediation)? Next steps: Work on the overall concept, on a “story” with scriptwriters, event managers, advertisers, etc. One idea we call the Kunstfund would ask: who is the author? Take the role of art historians, amateurs, gallery owners, art critics and collectors, and speculate; picture disputes, questions of taste; search for meaning; models for political systems – all slightly spunky and ironic. Parallel to this, experimenting with concrete formal implementations: - How can my very sensually influenced, conventionally designed images be staged and brought into a visually attractive contrast with the digitally generated elements. For example, by means of split screens, transparencies, animated lettering, infographics, combinations with photo and video material from the “outside world”, whereby my collage books could serve as a bridge. 
- Function which continuously (anonymously if desired) records all activities and creations of the users – for example, in the design of virtual exhibition spaces with my pictures. Visit Jürg’s website for glimpses into his work and contact options.
What’s This About? I haven’t posted a lot about vaccine efficacy lately, largely because the frenzy of vaccine development and clinical trial results basically slowed way down around June 2021, and there hasn’t been that much to write about on the subject since. I’ve been thinking about what to do with this site ever since. Some people do seem to find it useful—there have been nearly 25,000 visitors from all over the world since I started writing about vaccine efficacy, and I hope that those people found information that was valuable to them. To the extent that they did, it makes me somewhat proud, since I am a statistics person rather than a clinically-trained person, so having an impact in the COVID-19 pandemic, however small, feels like an achievement. On the other hand, that same lack of clinical training means that I have to watch myself so as to write things that are justified by data, and keep from making wild, poorly-informed statements that do more harm than good. I feel that so far I’ve stayed on the right side of that line. I may risk that balance in the next phase of this blog’s development. I plan to write a few observations of my own on the state of epidemic surveillance, epidemic modeling, and epidemic data, specifically with respect to the COVID-19 epidemic. I am doing this in part because I’ve been more deeply involved in data-driven epidemiological work, especially with respect to vaccine effectiveness (different from efficacy, because it characterizes real risk reduction in real populations, rather than “pure” clinical properties of vaccines), culminating in a paper demonstrating the possibilities of large-scale data analytics for epidemic surveillance and vaccine assessment.
In the process, I have become somewhat frustrated with the state of data curation and availability, but also with some of the model-premises underlying discussions of subjects such as vaccines, “breakthrough” infections, variants and their potential for vaccine escape, and so on. In my opinion there is a great deal of intellectual confusion about these terms and concepts, and this confusion is feeding needless media and policy panic (and occasionally distracting from necessary panic). I feel I need a place to write down everything that I feel is (usually) subtly or (occasionally) grossly wrong about the public and scientific discussions of these issues. And I happen to have a more-or-less epidemiological blog. So I might as well do it here. The cost of this change of direction is that I doubt that I can maintain the careful stance of defensible scientific statements that I tried to keep this blog to so far. Quasi-editorials on epidemiology by a statistically well-informed but barely-clinically-literate observer of the field should by no means be taken as authoritative refutations of anything, or in fact as anything more than spurs for further discussion by people working in the field, with whom I would be delighted to engage, and be told in exhausting detail all the reasons why what I’m writing is wrong-headed. I do listen, and try to learn. But I will also argue. I feel that I will have accomplished something useful if I at least bring to light a few unexamined or under-examined assumptions, and occasion a fruitful discussion of those assumptions, even if in the end I am the only one who feels educated by the process. Nonetheless, I have a strong suspicion that I’ve seen some real issues—defects in how clinical data is created and curated and made available, defects of modeling, catastrophic terminological confusion—that need to be brought into the light. I’ll be discussing these in a series of posts.
Unable to share C drive on Docker for Windows I am running Docker Desktop for Windows on Windows 10 Enterprise. I get the following: PS C:\Users> docker run --rm -v c:/Users:/data alpine ls /data C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: C: drive is not shared. Please share it in Docker for Windows Settings. From Docker settings in the Shared Drives tab, I see that the C drive is there, but it is not checked. When I check it and press Apply, I am prompted for my password. Upon entering it successfully, the C drive is still not checked. Is this issue resolved? I had the same issue; I wasn't able to resolve it, and I ended up installing VirtualBox with Ubuntu to run the project there... @sp2danny, do you have special characters in your password? Non-English or any spaces? Also in the username. Are you using Active Directory? Also, check if file sharing is enabled: https://cdn-enterprise.discourse.org/docker/uploads/default/optimized/2X/b/b10551d7301e2de3dd813d11b07e53010f5e50ea_1_690x372.png @TarunLalwani : I do have special characters in my username. "ö" to be precise. Should that make a difference? @sp2danny Yes. Please change your password and use plain English text and it should work @TarunLalwani : It worked, please write that as an answer I had no special non-English characters in my password. I currently use Docker on Ubuntu running on VirtualBox. Docker edge Desktop Community <IP_ADDRESS> does not ask for password anymore There are different problems that people face with sharing. But the common one is a non-English-character-based password or a password with spaces. If you can change your password and remove spaces/special non-English characters, then it should work.
Another workaround that you can try is to create a local user, give it access to C:, and then use that local user's credentials when sharing C:\ in Docker settings. Also, you have to use a local account and not your Microsoft account: https://stackoverflow.com/a/56375425/693737 Docker edge Desktop Community <IP_ADDRESS> does not ask for password anymore The user account supplied also needs to have admin permission. Seems obvious, but Docker doesn't return an error message when it fails (Version 18.06.1-ce-win73 (19507)). Remember to subsequently run PowerShell as that admin account in order to access the share.
Require SSL, keep SELinux turned on, monitor the logs, and use a current PostgreSQL version. Set ssl=on and make sure you have your keyfile and certfile installed appropriately (see the docs and the comments in postgresql.conf). You might need to buy a certificate from a CA if you want to have it trusted by clients without special setup on the client. In pg_hba.conf use something like: hostssl theuser thedatabase 220.127.116.11/32 md5 ... possibly with "all" for user and/or database, and possibly with a wider source IP address filter. Limit users who can log in, deny remote superuser login Don't allow "all" for users if possible; you don't want to permit superuser logins remotely if you can avoid the need for it. Limit rights of users Restrict the rights of the user(s) that can log in. Revoke the CONNECT right from PUBLIC on all your databases, then grant it back to only the users/roles that should be able to access each database. (Group users into roles and grant rights to roles, rather than directly to individual users.) Make sure users with remote access can only connect to the DBs they need, and only have rights to the schemas, tables, and columns within them that they actually need. This is good practice for local users too; it's just sensible security. In PgJDBC, pass the parameter ssl=true To instruct the JDBC driver to try and establish an SSL connection you must add the connection URL parameter ssl=true ... and install the server certificate in the client's truststore, or use a server certificate that's trusted by one of the CAs in Java's built-in truststore if you don't want the user to have to install the cert. Now make sure you keep PostgreSQL up to date. PostgreSQL has only had a couple of pre-auth security holes, but that's more than zero, so stay up to date. You should anyway; bugfixes are nice things to have. Add a firewall in front if there are large netblocks/regions you know you don't ever need access from. Log connections and disconnections (see postgresql.conf).
Log queries if practical. Run an intrusion detection system or fail2ban or similar in front if practical. (For fail2ban with Postgres, there is a convenient how-to here.) Monitor the log files. Extra steps to think about... Require client certificates If you want, you can also use pg_hba.conf to require that the client present an X.509 client certificate trusted by the server. It doesn't need to use the same CA as the server cert; you can do this with a homebrew openssl CA. A JDBC user needs to import the client certificate into their Java keystore with keytool and possibly configure some JSSE system properties to point Java at their keystore, so it's not totally transparent. Quarantine the instance If you want to be really paranoid, run the instance for the client in a separate container / VM, or at least under a different user account, with just the database(s) they require. That way if they compromise the PostgreSQL instance they won't get any further. I shouldn't have to say this, but... Run a machine with SELinux support like RHEL 6 or 7, and don't turn SELinux off or set it to permissive mode. Keep it in enforcing mode. Use a non-default port Security by obscurity alone is stupidity. Security that uses a little obscurity once you've done the sensible stuff probably won't hurt. Run Pg on a non-default port to make life a little harder for automated attackers. Put a proxy in front You can also run PgBouncer or PgPool-II in front of PostgreSQL, acting as a connection pool and proxy. That way you can let the proxy handle SSL, not the real database host. The proxy can be on a separate VM or machine. Use of connection pooling proxies is generally a good idea with PostgreSQL anyway, unless the client app already has a built-in pool. Most Java application servers, Rails, etc. have built-in pooling. Even then, a server-side pooling proxy is at worst harmless.
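The SSL and logging points above can be condensed into a sketch of the two configuration files. This is a hedged example, not a drop-in config: theuser/thedatabase come from the answer, and the certificate filenames and address range are placeholders you must replace with your own.

```ini
# postgresql.conf -- force SSL and log session starts/ends
ssl = on
ssl_cert_file = 'server.crt'   ; placeholder: your server certificate
ssl_key_file = 'server.key'    ; placeholder: your private key
log_connections = on
log_disconnections = on

# pg_hba.conf -- accept only SSL connections, only for one user/database pair
# TYPE     DATABASE     USER     ADDRESS         METHOD
hostssl    thedatabase  theuser  192.0.2.0/24    md5
```

With only `hostssl` lines (and no plain `host` lines) for remote addresses, non-SSL remote connections are rejected outright.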
Step 1: Create Project and Configure the PIC32 Step 1.1: Create an MPLAB® Harmony project in the MPLAB X IDE Create a folder under the Harmony installation to hold the project you will create from scratch: navigate to the <Harmony install path>/apps/training/middleware folder and create a dev/emwin_media_player folder structure (the folder-structure screenshot is omitted here). You will develop the lab under this folder. The MPLAB X IDE will create a sub-folder named emwin_media_player_lab inside the dev/emwin_media_player folder. Start the MPLAB X IDE and create a New Project by selecting File > New Project. In the Categories pane of the New Project dialog, select Microchip Embedded. In the Projects pane, select 32-bit MPLAB Harmony Project, and then click Next >. Specify the following in the New Project dialog: - Harmony Path: <Harmony install path> - Project Location: <Harmony install path> - Enter Project Name: emwin_media_player_lab - Configuration Name: pic32mz_ef_sk_meb2 (this is optional) - Target Device: PIC32MZ2048EFH144 After clicking the Finish button, the project will be created and opened. You will see the MPLAB Harmony Configurator (MHC) window along with the integrated Harmony Help file. If you close the MHC window, you can re-open it by clicking Tools > Embedded > MPLAB Harmony Configurator. Step 1.2: Select the Board Support Package (BSP) Click on the Options tab in the MPLAB Harmony Configurator main window to select and configure the Harmony Framework in a graphical tree-based format. Expand the BSP Configuration tree, and then select PIC32MZ EF Starter Kit w/ Multimedia Expansion Board (MEB) II. If a Board Support Package exists for your development board, you will want to use it. Choosing a BSP lets the MPLAB Harmony Configurator (MHC) know about the hardware you will use for the project.
By selecting a BSP, MHC can automatically control the following settings for you: - PIC32 core configuration (watchdog timer, debugger channel) - PIC32 oscillator configuration (including external clock/crystal) - PIC32 I/O port pin connections to LEDs and switches In addition to configuring hardware options for you, the BSP comes with a small group of library functions that allow you to more easily interface with LEDs and switches. In this lab, you will observe the selections the BSP makes for you. This will show you how to make these selections manually in case a BSP does not exist for the board you want to use. Step 1.3: Verify Configuration Bits are Correct In the central window under the MPLAB Harmony Configurator tab, click on the Options sub-tab to view the MPLAB Harmony & Application Configuration tree selections. Expand the Device & Project Configuration tree, then expand the PIC32MZ2048EFH144 Device Configuration. The Board Support Package you selected has properly configured these selections for you; this step shows you how to make changes to these selections if needed. - DEVCFG3 and DEVCFG2 – No change - DEVCFG1 – No change, but verify the Watchdog Timer Enable (FWDTEN) is OFF - DEVCFG0 – No change In case you are wondering where these cryptic selection names come from, they correspond to the PIC32 core configuration registers and bit names. Please see the device data sheet for details. Before moving to the next step, you may want to collapse the Device & Project Configuration tree. Step 1.4: Verify and Change Oscillator Settings Select the Clock Diagram tab to display the Clock Configurator window. Verify the following clock parameters: - POSCMOD set to EC - FNOSC set to SPLL - FPLLIDIV set to DIV_3 - FPLLMULT set to MUL_50 - FPLLODIV set to DIV_2 Experiment with other clock settings. Did you notice how some selections produce red values? These indicate a bad clock configuration.
If you mouse over them, a pop-up window will tell you what the problem is. The PIC32 is connected to a 24 MHz external clock input; you are not using the internal PIC32 oscillator. When you change the configured clocks away from the default values in the graphical interface, the changes are reflected as shaded fields in the Options tab's Configuration Bits. Since the BSP selected the default values, there should be no shading. To illustrate this, notice that when FPLLIDIV is changed to Divide by 1 instead of 3 in the Clock Diagram graphical interface, the Options tab will reflect the change in DEVCFG1 with shading. You can configure the clocks using the tree selections, but it is much easier to do graphically! The BSP has already configured the PLL using the selections for the "PIC32MZ EF Starter Kit w/ Multimedia Expansion Board II". For custom boards without a BSP, you can use the PLL's "Auto Calculate" feature to determine and set the PLL multiply and divide values (FPLLIDIV, FPLLMULT and FPLLODIV). You can see how this works by going back to the Clock Configurator window (Clock Diagram tab). Change the PIC32 clock frequency to 198 MHz. This development board can run at a maximum frequency of 200 MHz (selected by default by the BSP). You are using the audio CODEC on the board, so you need to configure the system frequency to 198 MHz. The reason for this specific frequency is described in the "Configure Audio CODEC" step. - In the MHC "Clock Diagram" tab, find the "System PLL" block and select "Auto Calculate" - Change the "Desired System Frequency" from 200 MHz to 198 MHz - Select "Apply" Verify the output of the PLL is now set to 198 MHz. Step 1.5: Use the Graphical Pin Manager to Verify I/O Pins Verify the Board Support Package (BSP, selected in a previous step above) has properly configured the PIC32 pins based on the external devices connected to them. You will be using the audio CODEC AK4953, LCD display, LEDs and switches.
Select the “Pin Table” tab in the MHC output pane and the "Pin Settings" tab (see the following screen-shot). If this window is minimized, it can be found at the bottom left of the MPLAB X IDE; click on it to maximize it. - Notice that PINs 43, 44, 45, 67 and 84 have LED_n selected on them. - Notice that PINs 59, 60, 61 and 22 have Switch_n selected on them. - Notice that PIN 46 has the power down enable/disable for the CODEC AK4953 selected on it. - Notice that PINs 26, 35, 39, 57, 117 and 133 are selected to interface with the graphics LCD.
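The System PLL arithmetic from Step 1.4 can be sketched in a few lines. The 24 MHz input and the DIV_3/MUL_50/DIV_2 defaults come from the text above; treat this as a back-of-the-envelope check of the divider math, not a model of MHC's actual "Auto Calculate" algorithm, and note the 198 MHz divider combination shown is just one valid possibility.

```python
def pll_out_mhz(fin_mhz, fpllidiv, fpllmult, fpllodiv):
    """System PLL output: Fin / FPLLIDIV * FPLLMULT / FPLLODIV."""
    return fin_mhz / fpllidiv * fpllmult / fpllodiv

# BSP defaults from the text: 24 MHz EC input, DIV_3, MUL_50, DIV_2 -> 200 MHz
print(pll_out_mhz(24, 3, 50, 2))   # 200.0
# One divider combination that lands on the 198 MHz needed for the audio CODEC
print(pll_out_mhz(24, 2, 33, 2))   # 198.0
```

This is why "Auto Calculate" exists: the tool searches the legal FPLLIDIV/FPLLMULT/FPLLODIV combinations for one that hits the requested frequency.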
Include Image and Text in Formula Field using Number Field as Condition So I have an NPS object set up, and one of the items (a text field) is based on the NPS score; one of three images will show, and this works fine, as in the example image. Basically I am trying to have it show the image but also show text after it; for example, this image would have "Promoter" right after it. I am not able to get any formula code to work; I have tried + " Promoter" and &&, but all I get is the text showing while the image breaks. Code:

IF( NOT( ISBLANK(Net_Promoter_Score__c)),
  (IF(Net_Promoter_Score__c < 7,
    IMAGE("/resource/1586983206000/GraphicsPackNew/tangodesktopproject/16/theme/process-stop.png", " Detractor"),
    IF(Net_Promoter_Score__c > 8,
      IMAGE("/resource/1586983206000/GraphicsPackNew/tangodesktopproject/16/theme/face-grin.png", " Promoter"),
      IMAGE("/resource/1586983206000/GraphicsPackNew/tangodesktopproject/16/theme/weather-showers-scattered.png", " Passive")
    )
  )),
"")

Just curious if there is a way to include the verbiage to be displayed with the image in the text field. The formula field won't display images and text together that way. You'll have to edit your images to include the text so that it is part of the image itself. @DavidCheng Do you know of alternative formulas where I could get this to work? Would CASE work if I list out each number option?

IF( NOT( ISBLANK(Net_Promoter_Score__c)),
  (IF(Net_Promoter_Score__c < 7,
    IMAGE("/resource/1586983206000/GraphicsPackNew/tangodesktopproject/16/theme/process-stop.png", " Detractor") + " Detractor",
    IF(Net_Promoter_Score__c > 8,
      IMAGE("/resource/1586983206000/GraphicsPackNew/tangodesktopproject/16/theme/face-grin.png", " Promoter") + " Promoter",
      IMAGE("/resource/1586983206000/GraphicsPackNew/tangodesktopproject/16/theme/weather-showers-scattered.png", " Passive") + " Passive"
    )
  )),
"")

Hi Ramesh, welcome to SFSE. Please take a moment to scroll through the [tour] and read [answer]. Code dumps are discouraged here.
Adding some explanation outside of just the raw solution helps people better understand what you are trying to convey.
Remove dots in toc for a particular entry like appendix I am writing my thesis and use the book class under PhDthesisPSnPDF. I would like to remove the dots in the table of contents only for a particular entry like "Appendix". All the entries for chapters and sections in the table of contents have dots and page numbers; I need to get rid of the page number and the dots for the appendix in the TOC. Is there a way to do this? But the linked PDF does not show any dots for chapters. So what packages are you including that are causing these dots to appear for chapters? A heading command like \section{A} writes a line into the .toc file that looks like this: \contentsline {section}{\numberline {1.1}A}{2} when this is read back in, \contentsline{section} simply expands to \l@section, and that is then the command that picks up the remaining arguments and produces the toc entry. Most often (but not always) this command is defined as a special instance of \@dottedtocline, e.g., in the book class you find: \newcommand*\l@section{\@dottedtocline{1}{1.5em}{2.3em}} So to get rid of the dots there are different approaches possible, and they depend on how things are defined and which entries you have to change in reality. Plan A Assuming that your toc uses \@dottedtocline, then to get rid of the dots we can redefine \@dotsep to have a very high number. Assuming further that we only want to change the sections at the end of the toc and we do not need to have dots appearing later on again, the following in front of the appendix in the source document will do the trick: \makeatletter \addtocontents{toc}{\def\string\@dotsep{100}} \makeatother This will write \def\@dotsep{100} into the toc file, and from that point on the dots are gone.
If you need them back at a later point, issue \makeatletter \addtocontents{toc}{\def\string\@dotsep{4.5}}% 4.5 might be a different value in your class \makeatother Plan B Make the heading commands in the appendix area (or wherever you want a different toc style) not write \contentsline{section} but, say, \contentsline{appsection}, and then define \l@appsection to format the toc entry according to the desired style. For this one would need to look for the places in the heading commands that issue \addcontentsline and change that code. Fairly elaborate, but much more general. Frank, thank you for your great answer. Plan A did the trick. The appendix chapter was at the very end of the TOC, so there was no need to turn the dots back on. I had been looking for a solution for so long :)
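Putting Plan A together, here is a minimal sketch of where the \addtocontents line goes relative to \appendix. This assumes the plain book class; a PhDthesisPSnPDF setup may define its toc entries differently, in which case Plan A only applies if those entries still go through \@dottedtocline.

```latex
\documentclass{book}
\begin{document}
\tableofcontents

\chapter{Main matter}
\section{A section with dot leaders}

% From here on, toc entries are written without visible dot leaders:
\makeatletter
\addtocontents{toc}{\def\string\@dotsep{100}}
\makeatother

\appendix
\chapter{Appendix without dots}
\end{document}
```

Remember that the change takes effect at the point where it lands in the .toc file, so it must come before the appendix heading commands in the source.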
Error when is stripe enabled false During deployment, it would fail on Vercel when the optional variables like the STRIPE and RESEND variables were missing... anyway, I just had to put in dummy values so it could deploy. When deployment was successful and I tried to train a model, it kept saying something went wrong, so I went to check the application logs in Vercel and this is the error I was seeing: { stripeIsConfigured: false } TypeError: Cannot read properties of undefined (reading 'map') at POST (/var/task/.next/server/app/leap/train-model/route.js:298:97) at process.processTicksAndRejections (node:internal/process/task_queues:95:5) at async /var/task/.next/server/chunks/778.js:5600:37 Please, what could be the issue? @NathBabs Are you still seeing this issue? @Marfuen yes I am. Can you add console logs in that file to see what the issue could be? Before the .map Hey @Marfuen, I'm having the same error. This is what I got: - event compiled successfully in 139 ms (663 modules) http://localhost:3000/leap/train-webhook <--- I added "VERCEL_URL=localhost:3000" to .env.local and logged it here { stripeIsConfigured: false } { message: [ 'webhookUrl must be a URL address' ], error: 'Bad Request', statusCode: 400 } TypeError: Body is unusable at specConsumeBody (node:internal/deps/undici/undici:6630:15) at Response.json (node:internal/deps/undici/undici:6533:18) at POST (webpack-internal:///(rsc)/./app/leap/train-model/route.ts:123:33) at process.processTicksAndRejections (node:internal/process/task_queues:95:5) at async eval (webpack-internal:///(rsc)/./node_modules/next/dist/server/future/route-modules/app-route/module.js:254:37) ^C Then I used localtunnel (an alternative to ngrok) and replaced VERCEL_URL with that address. I got a new error with that: http://red-yaks-trade.loca.lt/leap/train-webhook { stripeIsConfigured: false } { statusCode: 402, message: 'Training models is only available on paid plans.
Please upgrade to a paid account to use this feature.' } TypeError: Body is unusable at specConsumeBody (node:internal/deps/undici/undici:6630:15) at Response.json (node:internal/deps/undici/undici:6533:18) at POST (webpack-internal:///(rsc)/./app/leap/train-model/route.ts:123:33) at process.processTicksAndRejections (node:internal/process/task_queues:95:5) at async eval (webpack-internal:///(rsc)/./node_modules/next/dist/server/future/route-modules/app-route/module.js:254:37) It asks me to use a paid plan. Is that normal? Update: I tried it on the Leap API playground. It's indeed true that it will require me to pay to train models. @chinmaykunkikar yes, I tried training on Leap's website some minutes ago and it said it was for paid users. @Marfuen @NathBabs Yeah, training models costs a lot since we have to use GPUs, so we can't give that away for free. You will have to create an API key with Leap first. @Marfuen but this wasn't stated in the setup, that we have to pay for training on Leap
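The repeated "TypeError: Body is unusable" in these logs is undici's error for reading a fetch Response body twice — a Response body is a one-shot stream, so calling res.json() after the body was already consumed (for example on an error path) crashes instead of surfacing the upstream 400/402 message. A minimal sketch of the read-once pattern; readJsonOnce is a hypothetical helper, not code from this repo, and it assumes Node 18+ where fetch/Response are global:

```javascript
// Assumes Node 18+ (global fetch/Response via undici).
// Consume the body exactly once as text, then decide what to do with it,
// so error bodies like { statusCode: 402, message: ... } stay readable.
async function readJsonOnce(res) {
  const text = await res.text(); // the single body read
  try {
    return JSON.parse(text);
  } catch {
    throw new Error(`Non-JSON response (status ${res.status}): ${text.slice(0, 200)}`);
  }
}
```

With this shape, the 402 "Training models is only available on paid plans" body would show up as parsed JSON (or a readable error) instead of a second-read crash.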
Switching hosts was a matter of necessity. Running an entire site on wordpress on IIS was ridiculous. It’s fairly irrelevant to compare easyCGI and BlueHost, since anyone else making the choice will be choosing based on IIS vs. Apache, not a feature comparison or better support. However, i’ve very nearly gone back to easyCGI’s email hosting. It’s not that easyCGI has great email hosting. their webmail is workable at best (better, since the last upgrade). they only provide POP3, not IMAP email. The reason i wanted to switch back? BlueHost doesn’t provide a catchall address. Apparently doing so would cripple their servers, because your email server is the same as your web server. and ‘all that spam‘ would bring the server to its knees. i’ve posted about this in their forums, but apparently all that does is bring out the idiot fanboys saying ‘your spam is everyone’s problem’. no, bluehost’s poorly configured servers are everyone’s problem. Having had a catchall email for archgfx.net for 4 years on 2 webhosts, this seems bizarre to me. there’s no reason that one email couldn’t receive more spam than all the uncreated addresses put together. This is exactly how my email is set up. i have specific junk addresses: crappy, crapmail, webmaster, sales, etc. They come from using throwaway email addresses to sign up for things, and from standard addresses that bots think will reach the site owner. They all route to a yahoo inbox that i can dumpster dive for registration emails if the need arises. The rest of the email to archgfx lands in my main inbox. this includes emails to sunburntkamel, adam, afreetly, af, and other addresses that were either easier to pronounce over the phone, or that people have erroneously remembered. I can’t possibly remember all of them, or attempt to generate all the possible misspellings and misrememberings of my name. so not having a catchall is a dealbreaker for me. 
Back when it was first opened, i read derek’s post, and signed up for gmail hosted. I found out then that i couldn’t set up email aliases to non-archgfx.net addresses. at the time, that was a big enough pain for me to not bother using the service. now, having to keep two gmail windows open (one hosted, one standard) is less of an issue than losing mail that i’ve depended on arriving for 4 years. So i changed my MX records again, since hosted gmail beats the $3.95 a month for easyCGI. I can’t set up email aliases on hosted gmail, but i can forward email. so i have one junk box (crapmail), that forwards its contents to yahoo. the other ‘big spam’ addresses are aliased to that one. it works, more or less. i have to think it would be easier on google’s servers for them to just let me alias to yahoo, but functionally, it’s about the same. and i doubt google’s servers are going to buckle under the strain.
A recent post by Sean Carroll at Cosmic Variance lists some ideas that need to be explained. I accept your challenge, Sir. Actually, there is plenty of work to go around. Some of the topics listed there are really not something I would be eager to tackle - such as quantum field theory. I'll pass. But, I think I have made a good effort at least a couple of these ideas. Not to rehash, but here is my idea about The Scientific Method. Energy is a very broad idea. Here is my answer to "what is energy?". I thought I had a good explanation for the seasons, but I can't find it. Maybe I will have to redo that one. However, the one I want to tackle today is Uncertainty. What is it? Why do we use it? How do we use it (well, I won't really cover that one). To really address the idea of uncertainty, I think we need to look at the bigger picture of science. How about an analogy? I used this picture before: Here is a quick picture I made based on similar pictures. This, of course, is a depiction of Plato's allegory of the cave. I have no idea where that image came from, I found it all over the internet tubes though. Actually, I used this same image before in my post Allegory of the grade. If you are not familiar with this story, here is the short version. Actually, here is the exact thing I said the last time I brought this up. So, this is like science. Let me label stuff: - Here the prisoners are humans (every human is a scientist). We can't leave the cave (at least, not that I know of) - The shadows on the wall are the results of our experiments. - The puppets are models. Theoretical ideas, if you like. - The real objects are the truth - which we can never really see. Think about a tennis ball (in real life). People then make a perfect model of a sphere to represent this (which doesn't fully model a real tennis ball). The shadow this perfect sphere casts is far from perfect. The light behind the ball might flicker. The cave wall isn't flat. 
So, it might be difficult to show that the shadow on the wall exactly matches up to the perfect sphere (which isn't even really a real tennis ball). Now, let's think about another model - gravity. On the surface of the Earth, we can model gravity as the force: F_grav = mg (where m is the object's mass and g is the local gravitational field, about 9.8 N/kg). Actually, to test this model I could use another model that relates force and acceleration (sometimes called Newton's 2nd law): F_net = ma. This says two things. The acceleration of an object with only the gravitational force should be the same magnitude as g. Also, the acceleration of an object with just the gravitational force should be independent of that object's mass. So, what if I want to test this model? What if I want to test the idea that objects with different masses have the same acceleration? I could set up a really fancy drop timer. One that starts a clock when a ball is released and stops it when it hits a pad. Suppose I do this for some height and get a time of 0.321 seconds. I then use a different mass ball (but the same size) and repeat to get a time of 0.325 seconds. Wait. Those times are not the same. Does this mean the model is wrong? No. How do you match up experimental data with theoretical models? You have to realize that you are looking at a shadow of the theoretical model on an imperfect cave wall. This is what uncertainty does. It tries to compensate for the things that aren't perfect. For the dropping objects, clearly there are some problems. For example, the ball has other forces acting on it besides gravity. There is also the air resistance force. Sure, this is small in comparison to gravity. But it is there. Also, there are data problems. Does the ball get released from rest exactly the same way each time? Are there variations in the timer? Does the distance change? Then how the heck do you do an experiment? The key is to try and estimate the amount your values could be off. This is the uncertainty. How do you represent this uncertainty?
For physicists, we usually use a plus-minus value for each data point. The time might look like this: t = 0.325 ± 0.002 seconds. This says that the time for the object to fall is very likely between 0.323 and 0.327 seconds. If you were a chemist, you would probably just write t = 0.325 seconds. You would then assume that every reasonable person knows that the measurement is fairly reasonable to this value. If it were less well known, it would have been written as just 0.3 seconds. If it were more precisely known, it could be written as 0.3250 seconds. Not a bad idea, just not as easy to use as the plus-minus way. Does this answer the original question? What is uncertainty? Maybe. I did not answer "how do you DO uncertainty?" That would take a much more involved answer. Update: Thanks to the detective work of @jahigginbotham, I changed the picture of the allegory of the cave. Apparently, the one I was using before (which is quite excellent - you can see it here) is from a book - Like a Splinter in Your Mind by Matt Lawrence.
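The two drop times from the example can be compared mechanically: at this crude level, two measurements agree when their plus-minus ranges overlap. A toy sketch, not a substitute for proper error propagation; the ±0.002 s uncertainty attached to both times here is an illustrative assumption.

```python
def ranges_overlap(a, da, b, db):
    """True if the intervals [a-da, a+da] and [b-db, b+db] intersect."""
    # tiny epsilon guards against floating-point noise in the comparison
    return abs(a - b) <= da + db + 1e-12

# Drop times from the text: 0.321 s and 0.325 s, each taken as +/- 0.002 s
print(ranges_overlap(0.321, 0.002, 0.325, 0.002))  # True: consistent measurements
# With a tighter timer (+/- 0.001 s) the very same readings would disagree
print(ranges_overlap(0.321, 0.001, 0.325, 0.001))  # False
```

This is the whole point of the post in two lines: whether 0.321 s and 0.325 s "are the same" depends entirely on the uncertainty you can justify, not on the raw digits.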
mhwaveedit is a GTK2 audio editor that is surprisingly easy to use. Its biggest flaw is not having a single-window mode, but other than that it is great for quick edits and extremely lightweight. I am a fan of it. I am worried that the GTK2 framework is going to be dropped soon, so I hope Flatpak can save these apps. (It is still in current repos, btw.) Consider making a flatpak of mhwaveedit, and hopefully the team ports it to GTK3. There you go, https://github.com/fastrizwaan/mhWaveEdit_flatpak - build and test and confirm please! I am testing it now. At the moment I am waiting for GTK2 stuff to compile. I ran flatpak-builder --user --install --force-clean build-dir io.github.mhWaveEdit.yml and it started compiling, but at the end I got: Warning: eu-elfcompress not installed, will not compress debuginfo stripping /home/USERNAME/Downloads/mhWaveEdit_flatpak/.flatpak-builder/rofiles/rofiles-JluFeg/files/lib/gtk-2.0/2.10.0/engines/libadwaita.so to /home/USERNAME/Downloads/mhWaveEdit_flatpak/.flatpak-builder/rofiles/rofiles-JluFeg/files/lib/debug/lib/gtk-2.0/2.10.0/engines/libadwaita.so.debug Error: module gnome-themes-extra: Failed to execute child process "eu-strip" (No such file or directory) build build-dir io.github.mhWaveEdit.yml README.md USERNAME@Bionic:~/Downloads/mhWaveEdit_flatpak$ flatpak run io.github.mhWaveEdit error: app/io.github.mhWaveEdit/x86_64/master not installed It builds fine on Debian x86_64; what's your distro? Customized Linux Mint 19.2 64-bit (which is based on 18.04 LTS Bionic Beaver), but I am running the latest flatpak (Flatpak 1.10.1) via a PPA. Success, it is working and I did a test mp3 edit successfully. Wow, I did not know that it could edit not only WAV but also MP3, OGG, AIFF, AU, FLAC and more! It is really a good old piece of software; it should be on Flathub. Small yet useful. This will give the developers more time to port the software to GTK3, as opposed to it just being discontinued because it cannot keep up with upstream.
Could you please rebuild again? I've updated the yml file: flatpak-builder --user --install --force-clean build-dir io.github.mhWaveEdit.yml Okay, I built the latest successfully. One thing I noticed is that when I open another instance of the mhwaveedit flatpak, it does not allow me to paste audio data from one instance to another. The workaround is to drag and drop an audio file into the original (which will open a new window), or open another audio file with the file picker. Also, we need to make sure people know this can edit mp3, wav, and ogg, not just wav. That's a limitation of the Flatpak sandbox, for security reasons. GTK4 is supposed to fix that. Let's hope good quality software like this will be ported to GTK4. I believe it will be sooner rather than later, as every year new Computer Science graduates join the free software / OSS community.
Hello, fellow Azure Striker! Do you remember my last blog post, in which I explained the new Infobox Builder? After the IB was added to all Wikias, I decided to give it a try around here and made the new Image Infobox! In this blog post, I will try to explain my idea. Also, with this idea, I'm planning to change every image to the .png format instead of .jpg to get better quality. That would be solely my responsibility, since it's a hard task, unless you do want to work on this with me. This image is for public use, and can be edited freely. As you can see, the new Image Infobox gives the most important details of the image to the reader, and helps organize the Wikia. We would use this for the description of all pictures currently on this Wiki; that would require some work, but it's worth it. Now, what do you think about it? Should I change something? Is there some data I forgot to add? Just comment! How it works Source / Mode of use Image Infobox |description=Example |characters=Example |origin=Example |author=Example |date of upload=Example That's the code, simple enough. - Description: Explain what is inside that picture, or if there is something especially eye-catching in that image, or just say why it was uploaded in the first place. - Characters: If there are any characters in this picture, add them to this section. - Origin: Tell where that image comes from (and if you aren't going to use it soon, also explain your plan for that image). - Author: Give a link to the uploader's user page. - Date of Upload: How it was uploaded and when that happened. How exactly is that image supposed to work? Explain it without the Infobox, as normal text in the image's description for everyone to see. I will give you a list of my ideas for Attributes. - This image is for public use, and can be edited freely. - The normal attribute for images on a content page; these images can be renamed, edited, or posted anywhere by anyone.
- This image is for professional use of the Azure Striker Gunvolt Wiki - Images posted on important pages of the Wiki, such as Community Portal or Azure Striker Gunvolt Wiki: Staff. These images can be posted elsewhere, but you should ask for an admin's approval before editing them. - This image was uploaded by _____ for personal use - An image uploaded by a user but not connected to content pages. Things such as an image for your user page or for comment sections; fan arts are also included in this attribute. - Marked for deletion - An image that the Admins haven't deleted yet for a certain reason, but intend to delete soon. Images with this attribute will be deleted within a week unless someone makes proper use of them, or wants to. Image Infobox itself Tell me your thoughts, and see you in the Comments section!
Installing Ubuntu Server Virtual Machine, using VMware (Part 2) After installing Ubuntu Server using VMware, as explained in my previous post, you'll have a local development environment for your projects that's fully functional, but only with the basic features. You'll want to use a series of tools in your development stack that aren't available from the start in the installation, and also have the means to easily edit the files inside your server, preferably without the need to use FTP or other tools; after all, this is your local stack. In this post I'll explain how I've configured the server to improve the development environment and run the following tools correctly: - Samba (file sharing between the virtual machine and our local system) - Ruby and Gem 1. Install and configure Samba on Ubuntu VM First step, the obvious one: install Samba using root access sudo apt-get install samba Next you'll need to open your Samba configuration file ( /etc/samba/smb.conf ) and add a few lines in order to configure a shared folder. Start by opening your configuration file with your favorite editor, and then add the following lines to your file:

# The following property ensures that existing files do not have their permissions
# reset to the "create mask" (defined below) if they are changed
map archive = no
# Notify upon file changes so that Windows can detect such changes
change notify = yes

[htdocs]
comment = Htdocs Files
path = /opt/lampp/htdocs/
guest ok = no
browseable = yes
writable = yes
create mask = 0664
directory mask = 0775

In the above, I'm assuming you want to share the htdocs folder from your lampp install. Next, you need to add a Samba user with the same name and password as your Ubuntu user account sudo smbpasswd -a *username* To finish this step you need to restart the Samba service: sudo service smbd restart 2.
Setting permissions on htdocs folder In order to be able to edit, add and delete files from the htdocs folder you’ll need to set all the permissions correctly on that folder, to do this you’ll need to change the folder permissions, doing the following: First you need to create a new group “www” which will have the permissions for this folder, sudo groupadd www Next, you’ll change the group on the directory, sudo chgrp -R www /opt/lampp/htdocs and then you need to set the correct permissions on that directory: sudo chmod 2775 /opt/lampp/htdocs To finalize, add your user to this group, running the following command: sudo usermod -aG www *username* Now you’ll need to logout or restart your server, in order to these changes take effect, as these rules are read at login time. 3. Configure static IP in your VM and change your windows “hosts” file For me this is one of the most important steps on this configuration, I love being able to type something like local.dev in my browser, putty or ftp and be able to access my VM without any hassle. To do this you need to change two files in your VM. First, open the “interfaces” file ( etc/network/interfaces ) in your favorite editor, and change the following lines: auto eth0 iface eth0 inet static address 192.168.0.100 netmask 255.255.255.0 network 192.168.0.0 gateway 192.168.0.2 broadcast 192.168.0.255 You’ll need to get the correct values for each from your own network configuration, just have in mind that the ip address need to be unique and in the same network as your local machine. Next, open the “resolve” configuration file ( etc/resolv.conf ) and change the nameserver to match your gateway ip address. 
Then just restart the network, by running the following command: sudo etc/init.d/networking restart Back in windows you need to add a new entry to your host file, for that you need to open notepad or your favorite text editor, as administrator, then navigate to the host file, usually in: C:\Windows\System32\drivers\etc and add the following line to your file: The first part is your virtual machine ip address, the second your desired alias for it. 4. Install all the packages you want Now it’s time to install all the packages you want in your VM, before selecting them and start installing, just run an update to make sure you’re getting the latest stable versions. sudo apt-get update Now it’s time for installing the packages, usually I install the following: Please have in mind that these are the steps I use to create a local VM for development, these configurations shouldn’t be used on a VPS or a production environment. There are other alternative configurations for a local environment including the famous VVV, https://github.com/Varying-Vagrant-Vagrants/VVV . The last time I tried to install this on my windows laptop I couldn’t get it to work correctly due to some windows permissions, I’ll give it another try some time soon, but for now I’m sticking with my own VM.
Refactored SplitRowStore bulk insertion

The inner loop of the insert algorithm has been changed to reduce function calls to only those that are absolutely necessary. Also, we merge copies which come from another rowstore source, speeding up insertion time. Also adds support for the idea of 'partial inserts'. Partial inserts are when you are only inserting a subset of the columns at a time. Partial inserts will be used in a later commit.

Testing
Unit tests have been updated. The old bulkInsert tests needed to be modified because now we have situations where a block will not be filled up completely, only to a threshold value. This reduces the runtime of the costly inner loop at the cost of a few tuples.

Performance
I had a similar PR (PR-100) open last week. I ran TPCH SF100 queries 1-17 with this branch and with the branch from PR-100. They performed within a 1% margin of each other, so it is safe to say that this branch is as fast as the last branch (which was 2x the base).

Best times for contrived 1gb relation, 50% selectivity selection test (see PR-100):
master: 1602.358 ms
splitrow_insert_refactor: 469.664 ms

TPCH runtime, queries 1-16 (sum of the average of the middle 3 runs of 5 runs total per query):
master: 10.61 minutes
splitrow_insert_refactor: 10.48 minutes

TLDR this is an incremental improvement.

Note: this crashes GCC 5. I'm not sure what to do about this. Maybe @navsan or @hakanmemisoglu (hakan the compiler man) might have some advice.
<lambda(auto:1*)>]’
/fastdisk/quickstep/storage/ValueAccessorUtil.hpp:263:55:   required from ‘auto quickstep::InvokeOnAnyValueAccessor(quickstep::ValueAccessor*, const FunctorT&) [with FunctorT = quickstep::SplitRowStoreTupleStorageSubBlock::bulkInsertPartialTuplesImpl(const quickstep::splitrow_internal::CopyGroupList&, quickstep::ValueAccessor*, std::size_t) [with bool copy_nulls = true; bool copy_varlen = true; bool fill_to_capacity = true; quickstep::tuple_id = int; std::size_t = long unsigned int]::<lambda(auto:1*)>]’
/fastdisk/quickstep/storage/SplitRowStoreTupleStorageSubBlock.cpp:342:27:   required from ‘quickstep::tuple_id quickstep::SplitRowStoreTupleStorageSubBlock::bulkInsertPartialTuplesImpl(const quickstep::splitrow_internal::CopyGroupList&, quickstep::ValueAccessor*, std::size_t) [with bool copy_nulls = true; bool copy_varlen = true; bool fill_to_capacity = true; quickstep::tuple_id = int; std::size_t = long unsigned int]’
/fastdisk/quickstep/storage/SplitRowStoreTupleStorageSubBlock.cpp:278:94:   required from here
/fastdisk/quickstep/storage/SplitRowStoreTupleStorageSubBlock.cpp:325:68: internal compiler error: in tsubst_copy, at cp/pt.c:13040
   const std::size_t num_c_attr = copy_groups.contiguous_attrs_.size();
                                                                    ^
Please submit a full bug report, with preprocessed source if appropriate. See <file:///usr/share/doc/gcc-5/README.Bugs> for instructions.

@cramja Do you mean TPC-H SF100 Q1-Q17 has a 2x improvement? Have you tried with a newer or older version of GCC 5?

On Oct 5, 2016, at 14:54, Marc S <EMAIL_ADDRESS> wrote:
internal compiler error: in tsubst_copy

Hi @cramja, the problem might be related to the lambda capture arguments. GCC 5 was giving the same problem when using a general capture by reference [&]. You can try giving specific reference captures for the variables used in the lambda, such as [&var1, &var2]. That should fix the problem.
This bug was filed and fixed a year ago in 5.3, I think. It had been around for 2-3 years, so we'll probably be affected in older versions of GCC too. https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67411 Any chance you can simply change the code to do something equivalent but syntactically different? Maybe that'll help?

On Oct 5, 2016, at 14:57, Navneet Potti <EMAIL_ADDRESS> wrote:
internal compiler error: in tsubst_copy

@hbdeshmukh The 2x improvement was for the contrived test of trying to insert a whole bunch of tuples into a SplitRow at once, as in PR-100. As for TPCH, I'm not sure, other than that this is not any slower. I could run those tests again, but right now I'm working on the compiler issue.

Thanks @hakanmemisoglu I tried something similar to what you were suggesting, and the "works but I don't know why" solution I found was to declare the variables inside the lambda. It's ever so slightly less efficient code but w/e, it compiles.

@hbdeshmukh do we have any recent TPCH 100 results from a CloudLab box?

@hbdeshmukh I updated the PR header with the TPCH results.

@navsan Ready to merge?

@jianqiao any issues?

Have you been able to test this on TPCH and make sure that the results are correct? If so we can go ahead and merge. I haven't been able to test this for certain queries yet.

On Tue, Oct 18, 2016, 16:13 Marc S <EMAIL_ADDRESS> wrote:
@jianqiao https://github.com/jianqiao any issues?

If that's what we're waiting on, then yes, I can go test it. Of course that will be with a subset of working queries.

@cramja In general looks good.
I agree with Navneet that setMemory() does not look like a safe public method (there are other const members in the BitVector class not addressed) -- the method seems to be applicable only in very limited scenarios. Anyway, let's merge this PR first, and I can help revise setMemory() later when working on the reordering-output-attributes stuff. Can you rebase the branch and commit it to apache:splitrow_insert_refactor? It seems that I cannot access the cramja:splitrow_insert_refactor branch.

@jianqiao Sure, that sounds good. @navsan and I talked about another alternative, which is to call the BitVector constructor instead of setMemory. I made the set method because I thought it would be cheaper than instantiating an object, but Navneet explained that since the compiler would preallocate space on the stack, the costs would be equivalent. Do you think that would be a good alternative?

@cramja I'm not sure about the overhead of calling the constructor inside the accessor loop. We can first have this setMemory() version merged to have a reference for the performance.

I ran TPCH 10 on a CloudLab instance. The update is marginally better. Note that master is using splitrows, not columnstores.
Also, this is queries 01 04 06 09 10 11 12 13 14 15 16 19 22 (one row per query, in that order; times in ms):

| Query | master | splitrow refactor | master - splitrow refactor |
|-------|--------|-------------------|----------------------------|
| 01 | 2244.56 | 2227.01 | 17.5 |
| 04 | 482.29 | 475.22 | 7.1 |
| 06 | 298.32 | 295.05 | 3.3 |
| 09 | 823.28 | 822.34 | 0.9 |
| 10 | 1655.41 | 1680.74 | -25.3 |
| 11 | 343.62 | 360.24 | -16.6 |
| 12 | 566.85 | 568.88 | -2.0 |
| 13 | 3189.44 | 3104.58 | 84.9 |
| 14 | 315.47 | 312.61 | 2.9 |
| 15 | 594.03 | 555.04 | 39.0 |
| 16 | 1076.28 | 1132.68 | -56.4 |
| 19 | 448.39 | 441.66 | 6.7 |
| 22 | 776.47 | 777.97 | -1.5 |
| Total | 12814 | 12754 | 60.39 |

Note the last row is a total.

Comparing it against master, which uses compressed columnstores, I found:

| Query | master_cs | splitrow refactor | master_cs - splitrow refactor |
|-------|-----------|-------------------|-------------------------------|
| 01 | 1676.65 | 2227.01 | -550.4 |
| 04 | 324.30 | 475.22 | -150.9 |
| 06 | 44.78 | 295.05 | -250.3 |
| 09 | 692.21 | 822.34 | -130.1 |
| 10 | 1522.79 | 1680.74 | -158.0 |
| 11 | 325.94 | 360.24 | -34.3 |
| 12 | 210.29 | 568.88 | -358.6 |
| 13 | 3278.79 | 3104.58 | 174.2 |
| 14 | 98.74 | 312.61 | -213.9 |
| 15 | 430.47 | 555.04 | -124.6 |
| 16 | 977.81 | 1132.68 | -154.9 |
| 19 | 113.18 | 441.66 | -328.5 |
| 22 | 779.42 | 777.97 | 1.5 |
| Total | 10475.37 | 12754.02 | -2278.64 |

Note the last row is a total.

So apparently compression is the way to go here. Though, this is of course still an improvement over the old splitrow.

Merged and closed.

@jianqiao Did you merge this? The PR is still open.

@cramja Thanks for doing this. Nice!

@pateljm The PR was merged but was not closed automatically; @cramja can close it manually.

@jianqiao thanks! will close.
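To put "marginally better" in numbers, here is a quick back-of-the-envelope sketch using the run totals from the two tables above (my own check, not part of the original discussion; times in ms):

```python
# Rough comparison of the run totals quoted above (ms).
master_total = 12814.0        # master with splitrows
refactor_total = 12754.02     # this branch (splitrow_insert_refactor)
master_cs_total = 10475.37    # master with compressed columnstores

improvement = (master_total - refactor_total) / master_total * 100
cs_gap = (refactor_total - master_cs_total) / master_cs_total * 100

print(f"refactor vs splitrow master: {improvement:.2f}% faster")
print(f"refactor vs compressed columnstore: {cs_gap:.2f}% slower")
```

So the refactor buys about half a percent over the old splitrow on this workload, while compressed columnstores are over 20% faster than either, which matches the "compression is the way to go" conclusion.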
Is there a situation where just calling is justified with AK on a QKA suited flop?

Suppose you raised pre-flop with the AK and have one caller, so you go heads-up to the flop. The flop comes QKA, all spades. This is clearly a very draw-heavy board, and your two pair AK will not be good all the time. When you bet and you get raised, is there a situation where it is justified to just call instead of raising or folding?

Suppose the opponent is a tight/passive player. In this situation he probably already made a straight or flush, and folding is the best option. He will not be semi-bluffing in this situation very often and certainly will not try this with just an ace. The only possible hand which he could have and you can beat would be AQ, which he will not have the majority of the time.

If the player is quite aggressive, there is quite a large chance he has a draw and tries to semi-bluff. If you just call, you give him the opportunity to make his hand. Moreover, when faced with a large bet/raise on the turn you have no idea where you are at (it might still be possible he made his hand on the flop). I think raising would be the best option, but you would probably have to invest a lot to give him proper odds to fold a draw, so folding might still be a valid option.

But what if you know the player is a novice and might even pull off this kind of play with just an ace, which he thinks is very good because he does not think about draws and board texture? Would it be a good idea to just make the call? A downside of this play is that you gain very little information, but he might even call a raise with what he thinks is a strong hand.

If you can think of other situations, besides the ones described, where it may be good play to just flat call, it would be very appreciated.

These monster flops give the opportunity to bluff. Often the first guy who takes action wins these pots. I would not fold AK too easily here.
When you bet and you get raised, is there a situation where it is justified to just call instead of raising or folding?

In general I would say no. Let's think about this for each scenario.

Passive Fish
A passive, fishy player re-raises you on the flop: you should fold. Passive fish are calling stations, not raisers. When they raise it is because they are strong. Even if by some bizarre miracle you do have them beat currently, what turn/river cards are you ever going to like? Any 8, 9, 10, J, Q or any spade is going to put you in a bad spot, so why call here just to fold later?

Aggro Fish
OK, so now you have a player who likes to throw terrible bluffs out. However, aggro fish tend to bet and give up when they have nothing, or just barrel off each street. They rarely re-raise without a huge draw. Aggro fish are renowned for calling crap pre-flop like K2s or J7o that smashes this board. And again, ask yourself: if you call here, what turn/river cards are you likely to see? You can guarantee that if one of these cards does hit, your aggro fish will fire again, and are you going to like that?

TAG
A TAG re-raising you on this flop is almost always hitting the board very hard. I am thinking two pair, sets or better. At the very least they have AK or a good spade. These players tend to have tighter, stronger ranges, so when they re-raise it is usually a very strong hand. And again, if they just have top pair, are you going to give action knowing you face bets on the turn and river when bad cards hit? And even if a brick hits, are you still confident you are ahead?

LAG
LAGs tend to be aggressive but disciplined enough to fire and forget. They play a loose style, so again a lot of hands in their range smash this board. A LAG will raise you with nothing but a straight draw here, since they know it is highly unlikely that you flopped a flush, and you will be scared by a lot of turn/river cards. And a LAG is going to put pressure on you whenever a scare card hits, so again, why call? You either fold or raise.
NIT
If a NIT ever re-raises you, not just in this spot but ever: burn your cards, throw them away, do not ever call here. NITs only play super strong hands and would only ever re-raise you here with the nuts, it's that simple. Don't raise, don't call, don't even think about your hand. Fold!

Now, going over the above, you can start to form a very good picture of what the answer to your general question is. Never call here: you are almost always up against worse hands with huge equity, or hands that simply crush you. A question you should ask yourself, and not just here but in any hand you play, is: if I call here, am I confident I can win the hand profitably? I think if you're honest, the answer is no, and so folding or raising are your best options. Calling to hit a miracle A or K is just bad poker imo.

Now that I have made the point, are you convinced? If not, just answer these questions:

You flat his 3bet and then a spade hits. Now what do you do?
You flat his 3bet and an 8, 9, 10, J or Q hits. Now what do you do?
You flat and a brick hits the turn, you check or raise and he 3bets or shoves. Now what do you do?

Thanks, I agree with your reasoning. But looking at rommik's answer, don't you think you may get proper odds after a small re-raise to try and hit a full house (16%)? My question mentions any scenario where it may be justified to call, so all stacks, pot sizes and bet sizes are possible in the scenario.

@DavidHirst: You wrote: "At the very least they have AK with a spade." But the board is AKQ and OP said it was all spades, so a player cannot have both AK and a spade. Which is kind of important, because there are several hands which can have hit that board hard but are still very vulnerable to a fourth spade. So you may want to edit that paragraph of your answer a bit.

@TacticalCoder Thanks for spotting that, I'll amend; I think that was supposed to say AK or a good spade.

@TmKVU As I mention, it's bad poker imo to chase the FH simply because of the number of bad cards.
And let's not forget your FH is not the nuts when SFs and RFs are possible. In the long run you will lose money here; remember, it's an 84% chance you will not hit. That's huge, and besides, when you do hit that 16%, how often do you get paid, since an A or K hitting the board now makes the FH very obvious to your opponent? So versus the risk, is it profitable? Not really.

@DavidHirst good reasoning, thank you. I was only thinking about pot odds vs the chance of hitting an out, but did not consider how obvious the FH would be.

I agree with @DavidHirst's answer and reasoning. In the end, it always depends on how well you know the opponent. You didn't mention in your question what sort of game it was. Is it a tournament or a cash table? What about stakes? Are you the chip leader, or is the caller the chip leader, or neither? These are important considerations too. Finally, what was your raise before the flop (relative to the BB) and his re-raise (relative to the pot)? However, I must say that with your AK and the AKQ flop you have a 75% chance of winning and a 16% chance of getting a full house (according to http://ca.pokernews.com/poker-tools/poker-odds-calculator.htm). These odds should be taken into consideration too.

Thanks for your answer, I agree: it depends on a lot of factors. In my question I ask for any scenario where it may be justified to call, so I deliberately did not mention the type of game, the stack sizes, etc.

I would never believe he flopped a flush. If he called your raise preflop, he should be holding AK, AQ, AJ, AT. If he had QQ or JJ he would probably reraise you preflop. Anyway, there's no chance he has a flush/straight already, but he can totally have one spade, or is slow-playing AA or KK and has a set. I would evaluate that based on his play before. If he's loose, he might have A5 or so and think he's good with a pair of aces. However, he won't reraise you with that; he'll just call your bets and hope he wins at showdown.
If he's a tight player, I would put him on a set and just check/call, hoping also that no spade comes. If a fourth spade shows up: fold. The board was just too dangerous against good players, but if I were playing against a bluffer with a very wide range, I would reraise his bet on the flop or even shove. It would be a pity if he had small spades, but that's also very unlikely...

I cannot think of any time to just call. You only have 4 cards to improve. If they had AA, KK, QQ they would have raised pre. JT of spades and J9 of spades would have just called the flop to get max value. An ace with the J of spades is in your range. T9, T8, 98, 87, 76, 65 of spades, and JTo might raise to protect; JTo may not be in their range. So you are at risk against something like 12 hands. Hands they would call a re-raise with that you have beat are AQ and KQ (12 combos), plus AJ or AT with a spade (4 combos). That could be a semi-bluff wanting to build the pot. Those have 12 outs. You do not want to see a turn. You would need to bet like 5x the pot to get them off those hands. They will win like 40% of the time. You want a fold, but you have equity either way. They could be just trying to steal the pot. I think I would bet out about 2/3 the pot and 3bet like 3x the villain's raise. If that makes you pot committed (1/3 of your chips) then jam. If the villain 4bet jams then you have to release. Scary hand.

I would call if I thought my opponent had made a straight OR a flush, but not a straight flush. Many loose opponents will call pre-flop with two suited cards, or two straight cards, not necessarily two "straight flush" cards. In this case I am behind, but my two pair gives me a re-draw to a full house. I may call on the flop and wait till the expensive fourth street to fold, and depending on the size of the pot, I may wait for the river card. If I haven't improved by then, I would fold.
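The 16% full-house figure quoted in the discussion above is easy to verify: with AK on an AKQ flop you have four outs (the two remaining aces and two remaining kings) among 47 unseen cards, with two cards to come. A quick check of my own (not from the thread):

```python
# Check of the ~16% full-house figure discussed above: AK on an AKQ flop,
# 4 outs (two aces, two kings), 47 unseen cards, two streets to come.
outs = 4
unseen = 47  # 52 cards - 2 hole cards - 3 board cards

miss_turn = (unseen - outs) / unseen              # miss on the turn
miss_river = (unseen - 1 - outs) / (unseen - 1)   # then miss on the river
hit_by_river = 1 - miss_turn * miss_river

print(f"full house by the river: {hit_by_river:.1%}")  # → 16.5%
```

Which matches the quoted 16% (and the 84% miss figure in the answer above). Note that a queen pairing the board does not help: AK on an AKQQx board is still only two pair.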
/* Copyright (c) 2017 Jean-Marc VIGLINO, released under the CeCILL-B license
   (French BSD license)
   (http://www.cecill.info/licences/Licence_CeCILL-B_V1-en.txt). */
import Object from "@ol/Object";

/**
 * Abstract base class; normally only used for creating subclasses and not instantiated in apps.
 * Convenient class to handle HTML5 media.
 */
export class Media extends Object {
  private media: any;

  constructor(public options: { loop?: boolean; media?: any; }) {
    super();
    this.media = options.media;
    if (options.loop) this.setLoop(options.loop);
    // Dispatch media events as ol3 events
    this.media.addEventListener('canplaythrough',
      () => this.dispatchEvent(<any>{ type: 'ready' }), false);
    ["load", "play", "pause", "ended"]
      .forEach(event => this.media.addEventListener(event,
        (e: any) => this.dispatchEvent(<any>{ type: e.type }), false));
  }

  play(start?: number) {
    if (start !== undefined) {
      this.media.pause();
      this.media.currentTime = start;
    }
    this.media.play();
  }

  pause() { this.media.pause(); }

  stop() {
    this.media.pause();
    this.media.currentTime = 0;
  }

  setVolume(v: number) { this.media.volume = v; }
  getVolume(): number { return this.media.volume; }

  mute(b?: boolean) { this.media.muted = (b === undefined) ? !this.media.muted : b; }
  isMuted(): boolean { return !!this.media.muted; }

  // Note: the original used a jQuery-style `this.media.prop(...)` here, which
  // does not exist on an HTMLMediaElement; use the properties directly.
  setTime(t: number) { this.media.currentTime = t; }
  getTime(): number { return this.media.currentTime; }

  /** Duration formatted as m:ss */
  getDuration(): string {
    const d = this.media.duration;
    const minutes = Math.floor(d / 60);
    const seconds = Math.floor(d - minutes * 60);
    return minutes + ":" + String(seconds).padStart(2, "0");
  }

  setLoop(b: boolean) { this.media.loop = b; }
  getLoop(): boolean { return this.media.loop; }
}

export class AudioMedia extends Media {
  constructor(options: { loop?: boolean; source: any; }) {
    const media = new Audio(options.source);
    super({ media: media, loop: options.loop });
    media.load();
  }
}
Pykechain is a Python library developed by KE-works with the sole purpose of interacting with the KE-chain data model, explorer, work breakdown and the scripts environment. Basically, a user experienced in pykechain can extract any information stored in any project and use it to build scripts that ultimately produce the desired outputs. Furthermore, they can also automatically create or extend the data model, add part instances and activities, and even configure and customize said activities. Below, you will find some video examples of simple scripts run in a KE-chain project, together with the code itself.

This script introduces the user to one of the most important functionalities of pykechain: retrieving and storing values. Property values are accessed, used in computations, and the results are then stored in other property values. The iteration can be performed again and again, based on new inputs.

Using pykechain to perform computations based on property values

# Make the needed imports
import math
from pykechain import get_project

# Retrieve the project where this script is ran
project = get_project()

# Retrieve the wheel part model
wheel_model = project.model(name='Wheel')

# Retrieve all the part instances created based on the wheel model
wheel_parts = project.parts(model=wheel_model)

# Loop through the list of part instances
for wheel_part in wheel_parts:
    # Retrieve the value of the 'Diameter' property belonging to the wheel part instance
    wheel_diameter = wheel_part.property(name='Diameter').value
    # Calculate the circumference based on the diameter (C = pi*d), rounded to 2 decimals
    circumference = round(wheel_diameter * math.pi, 2)
    # Store the circumference in the value of the 'Circumference' property
    wheel_part.property(name='Circumference').value = circumference

In this script, you can get an idea of how new models and new properties can be automatically created.
Using pykechain to extend the data model

# Make the needed imports from pykechain
from pykechain import get_project
from pykechain.enums import PropertyType

# Retrieve the project where this script is ran
project = get_project()

# Retrieve the bike part model
bike_model = project.model(name='Bicycle')

# Create a new 'Exactly 1' model under 'Bicycle' and call it 'Saddle'
saddle_model = bike_model.add_model(name='Saddle', multiplicity='ONE')

# Add some properties to it
saddle_model.add_property(name='Material', property_type=PropertyType.CHAR_VALUE, default_value='Nylon')
saddle_model.add_property(name='Saddle tilt', property_type=PropertyType.FLOAT_VALUE, description='Angle of saddle compared to ground (degrees)')

The third script presents some actions that can be performed on the work breakdown.

Using pykechain to extend the work breakdown

# Make the needed imports from pykechain
from pykechain import get_project
from pykechain.enums import ActivityType

# Retrieve the project where this script is ran
project = get_project()

# Retrieve the root process of the work breakdown
workflow_root = project.activity(name='WORKFLOW_ROOT')

# Create a new process called 'Design saddle'
design_saddle = workflow_root.create(name='Design saddle', activity_type=ActivityType.PROCESS)

# Create a new task under it called 'Define saddle properties'
design_saddle.create(name='Define saddle properties', activity_type=ActivityType.TASK)

Finally, this script will show how easy it is to add widgets to an activity.
Using pykechain to add a widget to an activity

# Make the needed import from pykechain
from pykechain import get_project

# Retrieve the project where this script is ran
project = get_project()

# Retrieve the 'Define saddle properties' activity in a variable
define_saddle_properties = project.activity(name='Define saddle properties')

# Retrieve the 'Saddle' part model in a variable
saddle_model = project.model(name='Saddle')

# Retrieve the two properties of 'Saddle'
saddle_material = saddle_model.property(name='Material')
saddle_tilt = saddle_model.property(name='Saddle tilt')

# Add a form widget to the activity with the two properties as writable inputs
widget_manager = define_saddle_properties.widgets()
widget_manager.add_propertygrid_widget(part_instance=saddle_model.instance(),
                                       custom_title='Saddle overview',
                                       readable_models=[],
                                       writable_models=[saddle_material, saddle_tilt])

This article is not meant to be a pykechain tutorial, but rather an introduction to the potential this package has with respect to KE-chain. If you would like to follow a pykechain tutorial, please contact the Service Desk.
A simple tutorial for web server administrators who want to delete from Certbot the SSL certificates of domains that are no longer hosted on the server.

Certbot is an open-source tool used by many system administrators on CentOS / RHEL to manage Let's Encrypt HTTPS / TLS / SSL certificates. Certbot is operated through commands executed directly on the web server (over SSH or a console connection), and to install a certificate it is sufficient for the domain / subdomains to be hosted on that server and to point to the server's IP on the internet. After executing the command "certbot", it lists numerically all the domains hosted on the server for which we can install a Let's Encrypt certificate; we type, separated by spaces, the numbers corresponding to each domain name for which an SSL certificate should be installed.

A small problem appears when a domain that had its certificate installed through Certbot is deleted from the web server: it will still be listed by the command that checks the validity period of the SSL certificates for all domains. If there have been many domains on the server over time, it becomes quite difficult to track the certbot certificate list, so it is best if only the active domains remain in it.

Delete old domains Certbot certificates - How To

Normally, before deleting a domain or subdomain from the web server, you should first revoke and delete its Let's Encrypt certificate. Run the command "certbot" to display the numeric list of active domains, then run "certbot delete" and select the numbers of the certificates to delete.

If we did not do this before deleting the domain from the web server, it will remain in the list of certbot certificates. Data about domains that were enabled in the past with certbot is kept in three places on the server:

/etc/letsencrypt/live
/etc/letsencrypt/renewal
/etc/letsencrypt/archive

In the output of "certbot certificates", these domains will still be listed even if they are no longer present on the server. Run the command "ls -all /etc/letsencrypt/live" on the web server to see the domains present in Let's Encrypt. Identify the domains you want to delete, either from the list displayed by the command above or from "certbot certificates", then execute the following command:

certbot delete --cert-name olddomain.tld

Confirm with "Y" to delete the domain from the Certbot certificate list.

[root@buffy ~]# certbot delete --cert-name olddomain.tld
Saving debug log to /var/log/letsencrypt/letsencrypt.log
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
The following certificate(s) are selected for deletion:
* olddomain.tld
Are you sure you want to delete the above certificate(s)?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(Y)es/(N)o: Y
Deleted all files relating to certificate olddomain.tld.
[root@buffy ~]#

The SSL certificates will be deleted from Certbot both for the domain name and for its subdomains, if they used the same certificate:

Certificate Name: olddomain.tld
Serial Number: 3fd34e0e3304521371abe948
Key Type: RSA
Domains: www.olddomain.tld olddomain.tld
Expiry Date: 2022-02-09 09:46:12+00:00 (INVALID: EXPIRED)
Certificate Path: /etc/letsencrypt/live/olddomain.tld/fullchain.pem
Private Key Path: /etc/letsencrypt/live/olddomain.tld/privkey.pem

There are also scenarios in which we use different SSL certificates for the domain and some of its subdomains, especially when, besides Certbot, we also use Cloudflare's DNS and SSL services.
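If many old domains have accumulated, the `certbot certificates` output shown above can be scanned for expired entries with a short script. This is a sketch of my own, not part of certbot: the parsing assumes the field layout in the sample, and `expired_cert_names` is a hypothetical helper name.

```python
# Sketch: scan "certbot certificates" output (format as in the sample above)
# and collect the names of certificates whose expiry line is marked EXPIRED.
sample_output = """\
Certificate Name: olddomain.tld
  Serial Number: 3fd34e0e3304521371abe948
  Domains: www.olddomain.tld olddomain.tld
  Expiry Date: 2022-02-09 09:46:12+00:00 (INVALID: EXPIRED)
"""

def expired_cert_names(text: str) -> list:
    names, current = [], None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("Certificate Name:"):
            current = line.split(":", 1)[1].strip()
        elif line.startswith("Expiry Date:") and "EXPIRED" in line and current:
            names.append(current)
    return names

print(expired_cert_names(sample_output))  # → ['olddomain.tld']
```

For each name returned, you would then run `certbot delete --cert-name <name>` as shown above.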
Before moving on to how I'm implementing Lucene.net in Subtext, I wanted to bring to my Lucene.net tutorial the experience of a good friend of mine, Nic Wise, who has been using Lucene, both the Java and .NET versions, since 2003. So, without further ado, let's read the experience directly from Nic's writings.

- How to get started with Lucene.net
- Lucene.net: the main concepts
- Lucene.net: your first application
- Dissecting Lucene.net storage: Documents and Fields
- Lucene - or how I stopped worrying, and learned to love unstructured data
- How Subtext's Lucene.net index is structured

Lucene is a project I've been using for a long time, and one I often find that people don't know about. I think Simo has covered what Lucene is and how to use it, so I'm here to tell you a bit about how I've used it over the years.

My first Lucene

My first use of Lucene was back in about 2003. I was writing an educational website for IDG in New Zealand, using Java, and we needed to search the database. Lucene was really the only option, aside from using various RDBMS tricks, so it was chosen. This was a pretty typical usage tho - throw the content of a database record into the index with a primary key, and then query it, pulling back the records in relevance order. With the collapse of the first internet bubble (yes, it even hit little ol' New Zealand) that site died, and I stopped using Java and moved to .NET. To be honest, I don't even remember the name of the site!

AfterMail / Archive Manager

My next encounter with Lucene - this time Lucene.NET - was when I was at AfterMail (which was later bought by Quest Software, and is now Archive Manager). AfterMail was an email archiving product, which extracted email from Exchange and put it into a SQL Server database. Exchange didn't handle huge data sets well (it still doesn't do it well, but it does do it better), but SQL Server can and does handle massive data sets without flinching.
The existing AfterMail product used a very simple index system: break up a document into its component words, either by tokenizing the content of an email, or using an iFilter to extract the content of an attachment, and then do a mapping between words and email or attachment primary keys. It was pretty simple, and it worked quite well with small data sets, but the size of the index database compared to the size of the source database was a problem - it was often more than 75%! This was really not good when you have a lot of data in the database. This was combined with not having any relevance ranking, or any of the other nice features a "real" index provides. We decided to give Lucene a try for the second major release of AfterMail. On the same data set, Lucene created an index which was about 20% of the size of the source data, performed a lot quicker, and scaled up to massive data sets without any problem. The general architecture we had went like this: - The data loader would take an email and insert it into the database. It would also add the email's ID and the ID of any attachments into the "to be indexed" table. - The indexing service would look at that "to be indexed" table every minute, and index anything which was in it. - When the website needed to query the index, it would make a remoting call (what is now WCF) to the index searching service, which would query the index, and put the results into a database temporary table. This was a legacy from the original index system, so we could then join onto the email and attachment tables. We indexed a fair bit of data, including: - The content of the email or attachment, in a form which could be searched but not retrieved. - Which users could see it, so we didn't have to check if the user could see an item in the results.
- The email address of the user, broken down - so foo@bar.com was added in as foo, oof, bar.com, moc.rab etc. This allowed us to search for f*, *oo, *@bar.com, and *@bar*, which Lucene doesn't normally allow (you can do wildcards at the end, but not the beginning) - Other metadata, like which mailboxes the email was in, which folders, if we knew, and a load of other data. All of this meant we could provide the user with a strong search function. From time to time, an email would be indexed more than once, updating the document in the index (eg if another user could see the email), but in general, it was a quick and fairly stable process. It wasn't perfect tho: We ran into an issue where we set the merge sizes too high - way, way too high – which caused a merge of two huge files. This would have worked just fine, if a bit slow, except we had a watchdog timer in place: when a service took too long doing anything, the service would be killed and restarted. This led to a lot of temporary index files being left around (around 250GB in one case, for a 20GB index), and a generally broken index. Setting the merge size to a more sane value (around 100,000 - we had it at Int32.MaxValue) fixed this, but it took us a while to work it out, and about a week to reindex the customer's database, which contained around 100GB of email and attachments. Another gotcha we ran into - and why we had a search service which was accessed via remoting - is that Lucene does NOT like talking to an index which is on a file share. If the network connection goes down, even for a second, you will end up with a trashed index. (This was in 1.x, and may be fixed in 2.x.) AfterMail was designed to be distributed over multiple machines, so being able to communicate with a remote index was a requirement. Just before I left, we did a lot of work around the indexing, moving from Lucene.NET 1.x to 2.x, along with a move to .NET 2.0.
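The reversed-token trick described above generalizes nicely: if every token is indexed both forwards and reversed, a leading wildcard on the forward tokens becomes a cheap prefix match on the reversed tokens. A minimal Python sketch of the idea (an illustration of the technique, not the actual AfterMail code):

```python
def index_tokens(address):
    """Index the parts of an email address both forwards and reversed,
    e.g. foo@bar.com -> foo, oof, bar.com, moc.rab."""
    local, _, domain = address.partition("@")
    forward = {local, domain}
    backward = {t[::-1] for t in forward}
    return forward, backward

def wildcard_match(forward, backward, pattern):
    """Handle f*, *oo and exact token patterns. A leading wildcard is
    rewritten as a prefix query against the reversed tokens, which is
    the cheap case (like a trailing wildcard in Lucene)."""
    if pattern.startswith("*"):
        prefix = pattern[1:][::-1]   # '*oo' -> suffix 'oo' -> reversed prefix
        return any(t.startswith(prefix) for t in backward)
    if pattern.endswith("*"):
        return any(t.startswith(pattern[:-1]) for t in forward)
    return pattern in forward
```

The reason this matters is that an inverted index keeps its terms sorted, so prefix scans are fast, while a leading wildcard would otherwise require scanning every term; storing a reversed copy of each token trades index size for query speed.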
We added multi-threading for the indexing of email (attachments had a bottleneck on the iFilters, which were always single threaded, but the indexing part was multi-threaded), which sped up the indexing by a large factor – I think we were indexing about 20-40 emails per second on a fairly basic dual-core machine, up from 2-3 per second, and it would scale quite linearly as you added more CPUs. Lucene performed amazingly well, and allowed us to provide a close-to-Google style search for our customers. The next project I used it on was a rewrite of the Top Gear website. This is where some of the less conventional uses came up. For those who don't know, Top Gear is a UK television program about cars, cars and cars, presented in both a technical (BHP, MPG, torque) and non-technical, humorous way (it's rubbish/it's amazing/OMFG!). We were redeveloping the website from scratch, for the magazine, and it ties in well with the show. The first aspect of the index was the usual: index the various items in the database (articles, blog posts, car reviews, video metadata), and allow the user to search them. The search results were slightly massaged, as we wanted to bubble newer content to the top, but otherwise we were using Lucene’s built-in relevance ordering. The user can also select what they want to search - articles, blog posts, video etc - or just search the whole site. Quick tips for people new to Lucene Your documents don't have to have the same fields! For example, the fields for a Video will be different to the fields for an Article, but you can put them in the same index! Just make a few common ones (I usually go with body and tags, as well as a discriminator (Video/Article/News etc) and a database primary key), but really, you can add any number of different fields to a document. Think about what you need each field for.
For example, you may only need the title and the first 100 characters of a blog post to show on the screen, but storing the whole post will blow out the size of your index. Only store what you need - you can still index and search on text which is not stored. The second aspect was much less common. Each document in the database had various keywords, or tags, which were added by the editor when they were entered. We then looked for other items in the database which matched those tags, either in their body, tags or other fields, and used that as a list of "related" items. We also weighted the results, so that a match in an item's tags counted for more than something in the title or body. For example, on this page you can see the list of related items at the bottom, generated on the fly from Lucene, by looking for other documents which match the tags of this article. If we were able, we would have extended the tag set using keyword extraction (eg using the Yahoo! Term Extraction API) from the body contents, but this was deemed to be overkill. Top Gear's publishing system works by pulling out new articles from the CMS database, and creating entries in the main database. At the same time, it adds the item to the index. In addition to this, there is a scheduled process which recreates the index from scratch 4x a day. When the index is rebuilt, it's then distributed to the other web server in the cluster, so that both machines are up to date. The indexes are small, and the document count on these is low (<10,000), so reindexing only takes a couple of minutes. My final personal recommendations All up, Lucene has been a consistent top performer whenever I've needed a search-based database. It can handle very large data sets (100s of GB of source data) without any problems, returning results in real time (<500ms). The mailing list is active, and despite not having a binary distribution, it is maintained, developed, and supported.
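The weighted related-items lookup described earlier can be sketched as a simple scoring function. This is an illustrative Python toy with made-up weights, not the values actually used on the Top Gear site:

```python
# Field weights: a hit in tags counts more than a hit in title or body.
WEIGHTS = {"tags": 3.0, "title": 2.0, "body": 1.0}

def related_score(query_tags, doc):
    """Score a candidate document against the current article's tags,
    boosting matches found in more important fields."""
    score = 0.0
    for tag in query_tags:
        for field, weight in WEIGHTS.items():
            if tag in doc.get(field, ()):
                score += weight
    return score

def related_items(current, docs, limit=5):
    """Rank the other documents by weighted tag overlap, best first."""
    scored = [(related_score(current["tags"], d), d["id"])
              for d in docs if d["id"] != current["id"]]
    scored = [(s, i) for s, i in scored if s > 0]
    scored.sort(key=lambda pair: (-pair[0], pair[1]))
    return [i for _, i in scored[:limit]]
```

A match in tags (weight 3) outranks one in the title (2) or body (1), which is the same effect as applying per-field boosts in a real Lucene query.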
If you think of it as only an index, then you are only going to use one aspect of this very versatile tool. It does add another level of complexity to the system, but once you master it - and it's not hard to master - it's a very solid performer, even more so if you stop worrying about relational, and learn to love unstructured. I recommend the book Lucene in Action, as it has a lot of background on how the searching and indexing work, as well as a how-to guide - the Java and .NET versions are very close to API compatible, certainly enough to make the book worthwhile. About the author Nic Wise is a grey-haired software developer from New Zealand, living in London, UK. He is a freelance contractor, previously working for BBC Worldwide on a redevelopment of the Top Gear website and the bbc.com/bbc.co.uk homepage. He has worked on many projects over his 13+ years in the industry. Read more about him.
STAT 481 / ECON 580 / CS & SS INTRODUCTION TO MATHEMATICAL STATISTICS

This course is an introduction to the mathematical theory of probability and statistical inference. It focuses on the basic theory and principles underlying statistical methods. Emphasis will be placed on mastering concepts and techniques needed for subsequent work in economics, econometrics, and other disciplines. Students who complete 481/580 will be well prepared to study the application of statistical methods in courses such as STAT 421/423/427, ECON 581/582, or other similar courses.

Instructor: Hanna Jankowski
Please include "" (or "") in the subject of your e-mail. Also, plain text messages only, no html.
Office: B-220 Padelford Hall
Office hours: Mondays 12:30-2pm (except October 8th), or by appointment.
NB. Please note that I am away from the UW as of December 15th. I will still be checking my e-mail, but it may take me longer than usual to get back to you.

Teaching Assistant: Will Kleiber
Office: C-312 Padelford Hall
Office hours: Tuesdays and Thursdays 10:30-11:20, or by appointment. No drop-ins, please. Will's office hours will be held in the stat lounge: PDL
NB. Regular office hours will not be held after December 7th. Please see the exam schedule below for additional information.

Final Exam Info: The final exam will be held on Dec. 12th at 8:30-10:30 am. We will be in 102 (shouldn't be as difficult to find as the fisheries place was). Same deal as the midterm: you'll get a formula sheet. This will be made up of the previous version plus the new contributions.

Practice tests: Here is an old practice final from a previous instructor (with solutions). However, I feel this is quite different from what I would write for your final. You can get a better idea of my testing style for this material from an undergraduate course I taught here and here (these are also the tests referenced in your practice problems for hypothesis testing).
Naturally, the coverage in 342 is quite different, so there'll be stuff in these tests that we did not go over. We will talk about the expectations for the final on the 5th. Lastly, coverage: currently I'm thinking that the final will emphasize the material we've done that wasn't covered in the midterm. That's not to say that you won't be responsible for the other material as well: we're still calculating means and variances, discussing the LLN and CLT, etc. I'll be able to say something more concrete later on in the week.

Extra office hours: Will, Monday, 11am-1pm in the CSSS Conference Room. Me, Tuesday, 11-2pm, location TBA.

[OFFICE HOUR CHANGE] Will's office hours will be held in the CSSS conference room in Padelford. The room number is C14A. This is on the lower level. To get there, take the elevator (the one that goes by the Math library entrance) down to level L. It'll let you off right in front of C14, the CSSS space. If you go through the main doors the conference room will be just on your
My office hours on Tuesday will be in CSSS from 11-12:30 and then I will walk over to Denny 205, where I will stay until 2pm.

[Dec.10] The updated formula sheet which you will receive for the final has now been posted. I'm no longer accepting suggestions.
[Dec.7] Solutions to assignments 7 & 8 have now been posted. If there's anything else that I promised you, let me know.
[Dec.4] Any suggestions for what time I should have my office hours?
[Dec.2] I've just posted the notes for our Bayesian lectures. We'll do more problems tomorrow in lecture. For our remaining time together, I'd like to study some very basic nonparametric tools.
[Dec.2] I've posted some additional practice problems. Solutions to these will be posted shortly (or they'll be taken up during the last tutorial).
[Nov.28] I've been trying to find some good reading on Bayesian methods for you... a search in the library didn't yield anything. My best recommendation is still the Lavine text.
I'm also posting some links from the internet here: [Neal's Tutorial] - more advanced, but addresses some questions you guys asked in class. The book titled "Bayesian Data Analysis" by Gelman et al. also looks really good. Alas, none of the copies in our library are available right now. You may want to keep it in mind for the future.
[Nov.27] As I've discussed with some of you, I am giving out a bonus assignment. It's posted with the other assignments. Please read the instructions carefully; they contain info on how much this assignment is worth and how the other bonus problems relate.
[Nov.25] There's a typo on the bonus problem in assignment 6. The definition is X = Z if Z<=1. If you have questions, ask me.
[Nov.24] I've posted the seventh (and last) assignment. It's due one week from Monday.
[Nov.21] Here is the birth control example I mentioned today. Also, I sent out an e-mail correcting one of the problems on the next assignment (FYI). Rao received the National Medal of Science: more info. Notably, C.R. Rao was a student of Fisher's. If you have too much time on your hands you can check out how I fit into this at the math
[Nov.19] I've also posted some extra practice problems for
[Nov.19] I've just posted the notes from today's lecture. I will update these to include the remaining lectures on hypothesis testing soon.
[Nov.19] I've posted the Rscript and data files from Will's tutorial
[Nov.19] I have made some changes to the notes posted for parameter estimation (last 2.5 lectures). The changes from the last posting are in red.
[Nov.18] The next assignment has been posted. I will post some practice problems and a write-up of the class notes asap.
[Oct.17] Let's see if this works: [Message
[Oct.17] Another potential source of help for those of you that are finding the course difficult is the Statistics Help Center. They don't officially cover 481, but chances are that someone will be able to help you. [Here] is a list of mathematical symbols, thanks to Suresh.
[Oct.8] Solutions to the calculus quiz are posted here. Thanks Will!
[Sept.27] [Here] is a scanned version of the review notes.

Syllabus
NB. This is the updated syllabus; the old one had incorrect e-mails.

Course Schedule
This is where I will keep an updated list of material covered, lecture notes, assigned readings, and quiz/lab locations.

Statistical Thought: one of our textbooks.
R: our software, where to get it and how to use it.
The Skin modifier's angle deformers control creases and wrinkles, as well as adding muscle bulges and deformations. There are three kinds of angle deformers: Joint, Bulge, and Morph. Angle deformers work by telling vertices how to behave when a certain angle is reached between two adjacent bones. For example, you could use a bulge angle deformer to tell vertices in the upper arm to bulge when the elbow is bent greater than 90 degrees. After the Skin modifier is applied, it is not uncommon for some joints to crease unrealistically when the character is animated. Angle deformers can correct these problems. Set up the lesson: Scrub the time slider to play the animation. Around frame 15, as the arm flexes, the inside of the elbow is compressed. This unrealistic deformation can be corrected with a joint angle deformer. You'll also use a bulge angle deformer to add a slight bulge to the upper arm when the elbow is bent. Select the Arm mesh object. On the Modify panel, expand the Skin listing on the stack, and select Envelope. In the Envelope list, click Bone ForeArm to choose that bone. This bone's rotation will control the deformer. Add a joint angle deformer: The process of adding an angle deformer consists of two steps. First, you select the bone envelope, and then select the vertices to be deformed. Once that is accomplished, you can add the gizmo for the deformer. Go to frame 0. In the Parameters rollout > Select group, turn on Vertices. Drag a selection window around the vertices above and below the elbow. (Lower-arm vertices selected.) With the bone and vertices selected, now you can add an angle deformer gizmo. On the Gizmos rollout, make sure Joint Angle Deformer is selected, then click Add Gizmo. The new gizmo appears in the viewport. This gizmo, called a lattice, is used to control the selected vertices. (Deformation lattice displayed around selected vertices.) Set the joint angle deformer: Move the time slider to frame 13.
(Flexed arm with deformation lattice.) Angle deformers use the angle between bones to determine when the deformation will take place. Regular keyframes are not part of the deformation setup, so there is no reason to turn on Auto Key. In the Gizmo Parameters rollout, click Edit Lattice. Move the lattice's control points to pull out the creased vertices on the inside of the elbow. Note: Editing the gizmo works similarly to editing an FFD lattice, where manipulating the control points affects the underlying vertices. (Elbow crease corrected.) Tip: You can hide the bones and deselect the vertices to concentrate on the lattice. Move the time slider from frame 0 to 13 and back again, watching the movement of the lattice. Move the time slider to frame 24. Edit the lattice points again. (Outstretched arm with elbow corrected.) Click Edit Angle Keys Curve to display the Joint Graph. This is not a graph of time; rather, it is based on angle. It goes from 0 to 360 degrees of rotation. Move the time slider back and forth, while watching the joint graph. You can use this joint-angle graph to remove or change lattice edits. You can also adjust the easing by using the Bezier handles on the graph points, much as you do for key timing in the Curve Editor. Close the Joint Graph. Next, you'll add a Bulge Deformer to make the bicep grow as the arm bends. Add a bulge angle deformer: Turn off Edit Lattice, and move the time slider to frame 0. With Bone ForeArm still selected, drag a horizontal selection rectangle to select a few vertices in the middle of the upper arm. (Initial vertex selection.) Click Loop to highlight a loop of vertices around the arm, and then click Grow a few times until all of the upper arm's vertices are selected. (Loop button changes the selection to a loop around the arm.) The Loop and Grow buttons, along with Ring and Shrink, are vertex-selection tools. Ring is like Loop, but selects along the opposite UV axis. Shrink is the complement of Grow: it reduces the size of the selection.
(Grow used to select all upper-arm vertices.) Once you have the correct vertices selected, choose Bulge Angle Deformer from the list on the Gizmos rollout, and click Add Gizmo. The new gizmo is displayed in the viewport. (Bulge Angle lattice displayed over upper arm.) Move the time slider to frame 15, where the arm is bent. On the Deformer Parameters rollout, click Edit Lattice. Select the gizmo control points near the bicep. Move the lattice points to create a bulging bicep. Move points on the front and then on the back of the arm. (Bulge Angle lattice used to create a bulging bicep.) Move the time slider back and forth to see the bulge angle deformer create a bicep with the bending of the arm.
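Conceptually, an angle deformer is just a curve keyed on joint angle instead of time: given the current angle between the two bones, each lattice point's offset is interpolated between the edits made at the keyed angles. A hypothetical Python sketch of that idea (this is not the 3ds Max API, just an illustration of angle-keyed interpolation):

```python
def angle_keyed_offset(keys, angle):
    """Linearly interpolate a lattice-point offset from keys on a
    joint-angle curve, mimicking how an angle deformer drives vertices
    from bone rotation rather than from the timeline.
    `keys` is a sorted list of (angle_in_degrees, offset) pairs."""
    if angle <= keys[0][0]:
        return keys[0][1]
    if angle >= keys[-1][0]:
        return keys[-1][1]
    for (a0, v0), (a1, v1) in zip(keys, keys[1:]):
        if a0 <= angle <= a1:
            t = (angle - a0) / (a1 - a0)   # fraction between the two keys
            return v0 + t * (v1 - v0)
```

Scrubbing the timeline changes the bone angle, and the angle in turn drives the offset, which is why no Auto Key keyframes are needed in the tutorial above.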
This week is the start of the Design Fiction course! I made significant progress on my DIP project and went to Beijing on Thursday! This week I tackled the gesture recognition problem, and it felt like a very interesting topic to think about and work on. On Monday, we had our first Design Fiction course. It was very interesting and provoked me to think about questions such as: “What is Design?”, “What is a good way to tell a story using only pictures?” and “How do I integrate storytelling and design thinking together?”. The first class mostly introduced the course along with the course outline and some mini-activities. In the process of getting us to think about these questions, we had mini-activities such as telling a story with only 2 pictures that we have, as well as drawing an 8-page comic to tell a story on “How to Make a Toast”. These mini-activities were quite fun, and it was interesting how much harder it gets when you cannot add words to your pictures. Brainstorming about the different stories that we could tell is also very fun, and I feel like this is something we should do when we go back to Singapore and pitch our design ideas: to think about the different ways we can tell our story and find the best way to present our idea. On Tuesday and Wednesday, I mostly focused on my DIP project. Currently, the task assigned to me was to be able to draw with your fingers on the Unity interface and then recognize what shape it is that you have drawn. It took me one day to figure out how to get Unity to work together with Leap Motion and draw lines on the interface. However, the task of getting Unity to recognize what the user has drawn is much harder. Firstly, I had to read up on some existing gesture recognition methods, and they range in difficulty from a simple Euclidean distance weightage ranking to using neural networks to determine the exact gesture drawn.
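The simple end of that spectrum, a Euclidean-distance template matcher in the spirit of the $1 recognizer, can be sketched in a few lines. This Python toy resamples a stroke by point index (the real $1 algorithm resamples by arc length and also rotates the stroke into alignment, both omitted here for brevity; names are illustrative, not my Unity code):

```python
import math

def normalize(points, n=32):
    """Resample a stroke to n points (by index, for simplicity), scale
    it into a unit box, and centre it on the origin, so that position
    and size drop out before the distance comparison."""
    step = (len(points) - 1) / (n - 1)
    pts = [points[round(i * step)] for i in range(n)]
    xs, ys = [p[0] for p in pts], [p[1] for p in pts]
    w = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    cx, cy = sum(xs) / n, sum(ys) / n
    return [((x - cx) / w, (y - cy) / w) for x, y in pts]

def recognize(stroke, templates, n=32):
    """Rank the named templates by average point-to-point Euclidean
    distance and return the name of the closest one."""
    s = normalize(stroke, n)
    def dist(points):
        t = normalize(points, n)
        return sum(math.dist(a, b) for a, b in zip(s, t)) / n
    return min(templates, key=lambda name: dist(templates[name]))
```

Because every template goes through the same normalization, a shaky, small, or off-centre stroke still lands near the template with the same overall shape.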
For this project, I believe something simple and light should be the best option, as the worst thing that could happen is for the program to lag or hang on the user in the middle of using it. Thus, I found a method called the $1 method, which applies only to single strokes and uses Euclidean distance as its main measure for ranking how close a gesture is to all the gestures in the database. It is quite smart how their algorithm can quickly fit a stroke to the right scale and rotate it in the right direction before measuring the distance from each point and ranking it. Over the following few days I will be trying to mimic this algorithm and see if it works in Unity accurately and reliably without breaking the program. On Thursday, I went to Beijing together with my friends for a short 4-day tour! Early in the morning, around 6am, we woke up and prepared to go to the airport. However, when we arrived, our flight was delayed from 10am to around 12:40pm. This delay made us all very tired by the time we reached Beijing. However, we quickly found our Airbnb and had a good dinner of Peking duck before going to sleep, preparing for the next day. From Friday till Sunday, we spent our time exploring Beijing’s cultural sights as well as trying out as much local cuisine as possible. We explored Tiananmen Square, the Forbidden City, and the Summer Palace, and walked through some of the more famous alleys in Beijing. Some of the better food that we tried was the Peking duck, zhajiang mian, 京酱肉丝 and craft beers. One of the things that struck me the most is how huge Tiananmen Square is, as well as how huge the Forbidden City is. Especially the Forbidden City, which only the people that the Emperor allowed could enter. I am also especially impressed by how well they were able to maintain the state of the ancient city, where the colours and shapes were very well preserved.
All in all, I am very satisfied with this trip to Beijing, where I got to see more of Chinese culture and try out so many good cuisines. This week passed by quickly, but I felt that I made full use of my time, pursuing my interests as well as exploring China. The task assigned to me for DIP is very interesting and I am having a lot of fun trying to get it to work. This was a good week, and I hope the following weeks will be interesting as well. I am looking forward to learning more.
As Stack Overflow has grown, it has started to have some decidedly big city problems. The one we are most concerned about is an influx of very low quality questions. While we still believe in editing and improving low-quality questions to make them better, there's a fundamental mismatch in scale and effort here -- bad questions, asked in bad faith, have a tendency to overwhelm the good intentions of the average Stack Overflow user. So, we've decided to take some steps to block bad questions before they enter our system, and save everyone some effort. Every new Stack Overflow user with <= 10 reputation is now presented with a mandatory "How To Ask" page that they must click through before asking their first question. The text on this page is a heavily edited subset of Google's excellent Tips for Getting Help. At this point, you're probably wondering -- did Jeff really just tell me that Stack Overflow now requires every new user to agree to a EULA before asking their first question? Why, yes. Yes I did. Do let me explain the apparent madness. Unlike a EULA, our How to Ask page is ...
- short, simple, readable language.
- designed to help you, not lawyers -- by teaching you how to ask a decent question that gets the best possible answers!
- mercifully brief; it's 5 simple rules that fit on a single page with no scrolling.
Now, whether or not new users will actually read this, I cannot say. From my perspective, if at least one in ten new users read it and think, "hey, I should at least try to form a decent question" -- it's a win. If some very poor questions are discarded based on seeing this page -- it's a win. And honestly, when you have 2k+ new questions per day, you can afford to throw a few away in the name of increased overall quality. Furthermore, this page is designed to be shared and reusable. Feel free to share the How to Ask link with any question asker in need of advice on how to improve their question.
Beyond this, we're also starting to actively block questions from IPs and accounts that have historically produced a lot of low-quality questions. The details of this algorithm have to be kept vague, because we don't want people to game it or exploit it. Remember all those question votes you thought were so meaningless? You might want to reconsider that stance. The rationales for voting on questions haven't changed, though:
- if you see a great, thoughtfully asked, well researched question, vote it up -- please! Great questions are an art!
- if you see an egregiously sloppy, no-effort-expended question that you feel was asked in bad faith ... vote it down.
- anything in between that's salvageable, edit it -- or suggest an edit if you lack the 2,000 reputation to edit outright.
We believe asking questions on our site is a privilege, not a right. If, after a few fair attempts, you haven't been able to prove that your contributions to Stack Overflow make it at least ... not-worse ... then we reserve the right to refuse your questions. If we don't do our part to cull the bad questions, then we risk alienating the true experts who provide what really matters: the answers! For now, these measures are (mostly) only enabled on Stack Overflow, as it's the only site large enough to have these big city problems at the moment. But we certainly hope all of our Stack Exchange network sites get large enough to run into this ... what's the cliche, again? "nice problem to have?"
28 July (5 Weeks) 8:00 PM - 11:00 PM (IST) After Discount Price:

Ruby on Rails Course description

About The Course
Conqrity's 'Ruby on Rails' course is an instructor-led online class that will enable learners to build web applications using the powerful Rails framework and the highly dynamic, object-oriented Ruby language. It will cover all the fundamental concepts of OOP and web applications, Ruby scripting, and MVC architecture, through to advanced topics like gemified plugins, application deployment, API conventions, cloud support by Heroku, and front-end and back-end DB collaboration. Participants will also get to implement one project towards the end of the course.

Why Learn Ruby on Rails?
Ruby on Rails training certifies you in in-demand web application technologies to help you grab a top-paying IT job title with web application skills and expertise in full stack. Rails is written in Ruby, which is a language explicitly designed with the goal of increasing programmer happiness. This unbiased and universal view makes Ruby on Rails unique in today's job market as a leader in the web application platform.

Who should go for this course?
The course is designed for professionals who want to learn web application techniques: those just starting off and looking to learn the basics of Ruby on Rails, anyone who is interested in learning to build websites, experienced programmers looking to pick up a new language/technology, and experienced Rubyists looking to advance their skills.

- Introduction to Ruby & Rails
- Ruby Basics Part 1
- Ruby Basics Part 2
- Getting Started with Rails
- Rails Digging Deeper
- Deployment and Testing
- Project

How will I execute the practicals?
For your practical work, we will help you set up Conqrity's Virtual Machine in your system. This will be local access for you. The required installation guide is present in the LMS.

Which case studies will be part of this course?
Towards the end of the course, you will be working on a live project where you will be using front end, back end, MVC and gems.
- Project #1: Web Application
- Industry: E-Commerce
- Data: Online store of any product

What is the refund policy?
You will get a 100% refund if a scheduled class is cancelled. On request, money will be refunded within 24 hours.

What if I miss a class?
If you miss any live class, the complete session recording will be available within 24 hours in your LMS. In case you miss multiple classes, you also have the option to re-attend the course with the next available batch.

What if I require extra assistance?
If you require extra assistance, our 24x7 Service Support Team is always there to help you.

Do I get any assistance after completion of my sessions?
Yes, our Support Team will always be there to resolve your queries and clear your doubts even after the completion of the course.
Add job from new layout with ci-bootstrap on main branch

This job is the same as the staging branch one, but has a different parent defined in the config repo[1], which sets the ci-bootstrap main branch to be used. The job based on the ci-bootstrap repo should replace the code that we use from config repos, and allow us to test it as a role in its own repo. This job also takes a new network definition as input to configure the infrastructure needed to run our tests on zuul. This patch doesn't wire up or replace the current multinode-edpm job, but may do this in a follow-up PR.

[1] https://review.rdoproject.org/r/c/config/+/51257/2/zuul.d/_jobs-crc.yaml

As the pull request owner and reviewers, I checked that:
[X] Appropriate testing is done and actually running
[X] Appropriate documentation exists and/or is up-to-date:
[X] README in the role
[X] Content of the docs/source is reflecting the changes

recheck

lgtm and depends-on merged but will you wire up the job here? or are we going to wait for testproject test on this before merge?

not really, this job is the same as the one wired up here https://github.com/openstack-k8s-operators/ci-framework/pull/1020 now that we moved all ci-bootstrap content from staging branch to main branch. The difference between these 2 is only the branch that we do the checkout for ci-bootstrap

recheck /approve recheck recheck
i don't follow, i mean pull 1020 wires up some new pre-run plays for cifmw-extracted-crc-pre-bootstrap. my question was where are we going to run this new job you are adding here, podified-multinode-edpm-deployment-crc, but I see it is running in this PR there https://review.rdoproject.org/zuul/buildset/1efb03985d8049d591de5ac2c09afc3a However I just checked and it looks like there are runs for this job from yesterday, the 22nd, in other repos (edpm-ansible/install_yamls/dataplane-operator), but I don't see how if we are just adding it here. Did those run with depends-on? https://review.rdoproject.org/zuul/builds?job_name=podified-multinode-edpm-deployment-crc&skip=0

First of all, sorry Marios for not paying attention to your question. You are correct when asking about the name of the job and all the confusion.
I chose by mistake the same name as the current multinode-edpm job, and that's why it was flat-out failing here. Thanks for pointing that out. I changed the name to identify it as the job which uses the ci-bootstrap role to configure the infra. The idea is to replace the current edpm-multinode job, but in future PRs. I will add testproject results as soon as it reports success.

/hold testing in testproject

podified-multinode-edpm-deployment-crc-bootstrap-testproject https://review.rdoproject.org/zuul/build/ceb1ae66061948bda033fc5b320d5b65 : SUCCESS in 1h 20m 47s

/test pre-commit

Ok, thanks for the update @viroel, but the question still stands - where are we going to wire it up to run (on pull requests? periodic?)

This is going to be wired up in PRs; it is focused on CRC-based jobs running in an OCP env. The difference with the current job is that it uses the ci-bootstrap role and is based on Pablo's new network definition. But the job for now is a mix of the new layout + the old deployment, so it needs more updates.

/hold cancel
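For illustration, a job along the lines described in this thread might look roughly like this in a zuul.d layout. This is a hypothetical sketch only: the job name is inferred from the testproject naming above, and the parent job and variable names are assumptions (the real parent lives in the config repo, see [1] in the description):

```yaml
# Hypothetical sketch - not the merged definition. The real parent job is
# defined in the config repo and pins ci-bootstrap to its main branch.
- job:
    name: podified-multinode-edpm-deployment-crc-bootstrap
    parent: podified-multinode-edpm-deployment-crc-main  # assumed parent name
    description: |
      Multinode EDPM job that configures the infra with the ci-bootstrap
      role (checked out from main) and the new network definition.
    vars:
      cifmw_bootstrap_branch: main  # assumed variable name
```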
GITHUB_ARCHIVE
namespace EveData
{
    using System.Collections.Generic;

    using Tools;

    public sealed class EveDataSourceCached : EveDataSource
    {
        // External Dependencies.
        private readonly ISystemLogger logger;

        // Internal Caches (Populated On-Demand By The Accessors Below).
        private static IDictionary<int, string> cachedValues_GetSolarSystemName = new Dictionary<int, string>();
        private static IDictionary<int, int> cachedValues_GetStationSolarSystemId = new Dictionary<int, int>();
        private static IDictionary<int, string> cachedValues_GetStationName = new Dictionary<int, string>();
        private static IDictionary<string, int> cachedValues_GetTypeId = new Dictionary<string, int>();
        private static IDictionary<int, string> cachedValues_GetTypeName = new Dictionary<int, string>();
        private static IDictionary<int, double> cachedValues_GetTypeVolume = new Dictionary<int, double>();

        /// <summary>
        /// Initialises a new instance of a cached EveDataSource.
        /// </summary>
        public EveDataSourceCached(ISystemLogger logger) : base(logger)
        {
            this.logger = logger;
        }

        public override string GetSolarSystemName(int solarSystemId)
        {
            string cachedValue;

            // If the value is cached, return it; otherwise call the base class and cache the result.
            if (!cachedValues_GetSolarSystemName.TryGetValue(solarSystemId, out cachedValue))
            {
                cachedValue = base.GetSolarSystemName(solarSystemId);
                cachedValues_GetSolarSystemName.Add(solarSystemId, cachedValue);
            }

            return cachedValue;
        }

        public override int GetStationSolarSystemId(int stationId)
        {
            int cachedValue;

            // If the value is cached, return it; otherwise call the base class and cache the result.
            if (!cachedValues_GetStationSolarSystemId.TryGetValue(stationId, out cachedValue))
            {
                cachedValue = base.GetStationSolarSystemId(stationId);
                cachedValues_GetStationSolarSystemId.Add(stationId, cachedValue);
            }

            return cachedValue;
        }

        public override string GetStationName(int stationId)
        {
            string cachedValue;

            // If the value is cached, return it; otherwise call the base class and cache the result.
            if (!cachedValues_GetStationName.TryGetValue(stationId, out cachedValue))
            {
                cachedValue = base.GetStationName(stationId);
                cachedValues_GetStationName.Add(stationId, cachedValue);
            }

            return cachedValue;
        }

        public override int GetTypeId(string typeName)
        {
            int cachedValue;

            // If the value is cached, return it; otherwise call the base class and cache the result.
            if (!cachedValues_GetTypeId.TryGetValue(typeName, out cachedValue))
            {
                cachedValue = base.GetTypeId(typeName);
                cachedValues_GetTypeId.Add(typeName, cachedValue);
            }

            return cachedValue;
        }

        public override string GetTypeName(int typeId)
        {
            string cachedValue;

            // If the value is cached, return it; otherwise call the base class and cache the result.
            if (!cachedValues_GetTypeName.TryGetValue(typeId, out cachedValue))
            {
                cachedValue = base.GetTypeName(typeId);
                cachedValues_GetTypeName.Add(typeId, cachedValue);
            }

            return cachedValue;
        }

        public override double GetTypeVolume(int typeId)
        {
            double cachedValue;

            // If the value is cached, return it; otherwise call the base class and cache the result.
            if (!cachedValues_GetTypeVolume.TryGetValue(typeId, out cachedValue))
            {
                cachedValue = base.GetTypeVolume(typeId);
                cachedValues_GetTypeVolume.Add(typeId, cachedValue);
            }

            return cachedValue;
        }

        public void ClearCache()
        {
            cachedValues_GetSolarSystemName.Clear();
            cachedValues_GetStationName.Clear();
            cachedValues_GetStationSolarSystemId.Clear();
            cachedValues_GetTypeId.Clear();
            cachedValues_GetTypeName.Clear();
            cachedValues_GetTypeVolume.Clear();
        }
    }
}
STACK_EDU
from torch import nn

from . import FF
from .transformers import BaseSublayer


class PositionwiseFF(nn.Module):
    """Position-wise feed-forward layer."""

    def __init__(self, model_dim, ff_dim, activ='gelu', dropout=0.1):
        """
        Creates a PositionwiseFF.

        :param model_dim: The model dimensions.
        :param ff_dim: The feed-forward dimensions.
        :param activ: The activation function. Default: gelu
        :param dropout: The amount of dropout. Default: 0.1
        """
        super().__init__()
        self.model_dim = model_dim
        self.ff_dim = ff_dim
        self.activ = activ

        # Create the layers: expand to ff_dim, apply dropout, project back.
        self.layers = nn.Sequential(
            FF(self.model_dim, self.ff_dim, activ=self.activ),
            nn.Dropout(dropout),
            FF(self.ff_dim, self.model_dim, activ=None),
        )

    def forward(self, x):
        return self.layers(x)


class PositionwiseSublayer(BaseSublayer):
    def __init__(self, model_dim, ff_dim, ff_activ='gelu', dropout=0.1,
                 is_pre_norm=False):
        """
        Creates a PositionwiseSublayer.

        :param model_dim: The model dimensions.
        :param ff_dim: The dimensions of the feed-forward network.
        :param ff_activ: The activation of the feed-forward network.
        :param dropout: The dropout rate.
        :param is_pre_norm: Whether the layer type is pre-norm. Default: False.
        """
        super().__init__(model_dim, dropout, is_pre_norm)
        self.feed_forward = PositionwiseFF(model_dim, ff_dim, ff_activ,
                                           dropout=dropout)

    def forward(self, x, mask=None):
        """
        Performs a forward pass over the PositionwiseSublayer.

        :param x: The input x.
        :param mask: The input mask (unused here; accepted for interface
            compatibility with attention sublayers).
        :return: The output from the forward pass of the PositionwiseSublayer.
        """
        residual = x
        x = self.apply_pre_norm_if_needed(x)
        x = self.feed_forward(x)
        x = self.apply_residual(residual, x)
        x = self.apply_post_norm_if_needed(x)
        return x
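Because `FF` and `BaseSublayer` are relative imports, the module above cannot run on its own. A minimal standalone sketch of the same position-wise pattern, assuming `FF` behaves roughly like a linear layer plus an optional activation:

```python
import torch
from torch import nn


class SimplePositionwiseFF(nn.Module):
    """Standalone sketch of a position-wise feed-forward block.

    Assumption: the repository's FF helper is treated here as
    nn.Linear + optional activation; names below are illustrative.
    """

    def __init__(self, model_dim, ff_dim, dropout=0.1):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(model_dim, ff_dim),  # expand
            nn.GELU(),                     # 'gelu' activation, as in the original
            nn.Dropout(dropout),
            nn.Linear(ff_dim, model_dim),  # project back to model_dim
        )

    def forward(self, x):
        # Applied independently at every position: only the last dim changes.
        return self.layers(x)


x = torch.randn(2, 5, 16)  # (batch, seq_len, model_dim)
ff = SimplePositionwiseFF(model_dim=16, ff_dim=64)
out = ff(x)
print(out.shape)  # torch.Size([2, 5, 16])
```

The output shape matches the input shape, which is what lets the surrounding sublayer add the residual connection directly.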
STACK_EDU
Connection Principles: As mentioned above, connections should not be allowed to traverse from the PCN to the PIN, but should be forced to terminate at a device in the DMZ (One Level, One Jump). The direction of the connection also matters. Devices in the PIN should not be allowed to open connections into the DMZ, and devices in the DMZ should not be allowed to open connections into the PCN. The reverse is preferred―only allow outbound connection requests. Also, where possible, connections should not be left open. Some applications, such as historians, require continuous connections. If this is the case in your environment, then the importance of keeping the devices segregated and hardened with the latest security patches is elevated, and they must be constantly monitored in real time.

Data Transfer: The basic rules for data transfer are the same as those for connections. Data and files should be pushed "up" from the PCN and pulled "down" from the PIN. An anti-virus solution that scans files before they are written to disk is also essential, which typically rules out any database-to-database transfers. The data transfer solution must use ports and services that are unlikely to be vulnerable. Solutions that require NetBIOS, Windows Management Instrumentation (WMI), etc. to be opened across the firewall should be avoided. Ideally, the ports used should be configurable, and a client/server model using account authentication is best.

Interactive Remote Access: Ideally, interactive remote access should be avoided, but in the real world it is likely to be required. If required, the first key principle is to require strong two-factor authentication to a device in the DMZ with a non-shared, unique (and therefore traceable) account. The second key principle is to ensure that the user's local PIN-based machine does not interact in any way with the PCN environment (in violation of the One Level, One Jump rule).
The device establishing the second session from the DMZ to the PCN should enforce this. The third key principle is to leave interactive remote access accounts disabled until needed.

Monitoring: The monitoring solutions implemented in the DMZ should employ real-time monitoring. This does not mean that someone must be constantly watching a dashboard, but that the solution is able to detect anomalous behavior and alert someone who can quickly get to the dashboard to investigate. Monitoring solutions should also be capable of terminating suspicious, anomalous communications. While this may occasionally cause inconvenience, it should not impede productivity, since time-critical process activity is usually not required between the PIN and PCN.

There are some specific principles that NERC CIP standards can require that will greatly improve cyber security. In summary, these are:

Firewall and DMZ
- A DMZ with a firewall should be required between the PIN and the PCN.
- Require different account credentials in the PIN, DMZ and PCN.
- Connections should not go directly from the PCN to the PIN, but should terminate in the DMZ, with a second connection established from the DMZ to the PIN.
- Connection requests should only be allowed outbound from the PCN.
- Connections should not be left open (should not be persistent).

Data and File Transfer
- Data should be pushed up from the PCN and pulled down from the PIN.
- A client/server model with unique, authenticated accounts should be used.
- Avoid using services, such as NetBIOS, which effectively extend the authentication mechanisms and credentials across the perimeter.

Interactive Remote Access
- Require two-factor authentication.
- Isolate the user's local desktop from the PCN.
- Leave interactive remote access accounts disabled until needed.
- Enforce One Level, One Jump.

Monitoring
- Monitor the DMZ in real time.
- Automatically alert on suspicious or anomalous communications in the PIN.
- Automatically terminate suspicious or anomalous communications in the DMZ.

And finally, verification of compliance with the CIP standards should involve more than the existence of documentation. The documentation should be checked for validity―at least on a spot-check basis, with detailed follow-up if required.

Jay Abshier, CISSP, is a security consultant at Sentigy. Phil Marasco, CISSP, is a security consultant at Securicon.
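The outbound-only connection principle described in the article can be sketched as firewall policy. The fragment below is a hypothetical iptables sketch, not from the article: interface names and subnets are assumptions, and a real deployment would add logging and per-service rules. The idea it illustrates is that NEW connections are permitted only from the PCN side, while the DMZ side may send reply traffic only.

```shell
# Hypothetical sketch only; eth0 = PCN-facing, eth1 = DMZ-facing,
# 10.10.0.0/24 = PCN subnet, 10.20.0.0/24 = DMZ subnet (all assumed).

# Default-deny across the PCN/DMZ boundary.
iptables -P FORWARD DROP

# Allow connections only when initiated outbound from the PCN toward the DMZ.
iptables -A FORWARD -i eth0 -o eth1 -s 10.10.0.0/24 -d 10.20.0.0/24 \
         -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT

# From the DMZ back toward the PCN, admit reply traffic only - never NEW.
iptables -A FORWARD -i eth1 -o eth0 \
         -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
```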
OPCFW_CODE
I’m having a hard time trying to deploy a Django app on Render. The app is currently deployed on Heroku on the hobby tier. I first created a Blueprint that linked to my GitHub repo and a PostgreSQL database for the app. The Blueprint’s status says “sync succeeded” and the database’s status “available.” I then followed the guide here and tried to migrate the app, but it kept failing for reasons that I no longer recall. I next followed the guide here (only the “Update Your App for Render” part; I don’t have Poetry and just tried to deploy the existing app) and tried to deploy the app afresh. After repeated attempts and modifications (including changing the Django version from 4.1.3 to 3.2.16 when prompted by the “no django version found to match 4.1.3” error message), the status of the app finally says “Deploy succeeded” on the Dashboard of my account. But when I tried to access the app at the URL given by Render, it displays only “Server Error (500),” as shown in the 1st image in the attached file. I can see that the page loads the favicon.ico, indicating that it has access to RENDER_EXTERNAL_URL/static/, but somehow can’t access other static resources or the index.html. The Logs on my Dashboard indicate a “GET /login?next=/ HTTP/1.1” 500 145, as shown in the 2nd image in the attached file. I don’t understand the “145” that follows the “500.” My settings.py file has relevant settings as shown in the remaining images in the attached file. Please kindly advise where it goes wrong.

Hi, Alan. Thank you for assisting and the suggestions. Following your advice I changed DEBUG = 'RENDER' not in os.environ to DEBUG = True (settings.py line 31) and if not DEBUG: to if 'RENDER' in os.environ: (line 128) (see screenshots attached below). And the app deploys and works normally now. So the only determinant change is whether DEBUG is turned on; however, it obviously is not desirable to have it on in deployment. Is there any way around this? What should I do?
I’m not a Python expert. I suggested changing DEBUG to True to see if you got more verbose logging on your original issue. If that fixed the problem, then it feels that DEBUG = 'RENDER' not in os.environ may not have worked as you expected. I agree, debug mode is not desirable for production, so have you tried changing DEBUG to False rather than a conditional? If that still works, then it appears DEBUG = 'RENDER' not in os.environ was the issue. Unless you’ve also made other changes.

Hi, Alan. Thanks again for your time. I’ve done further investigation into possible solutions; however, nothing other than setting DEBUG = True worked. I first tried, as you suggested, to set DEBUG = False, and the app returned to giving Server Error (500). It appears the env variable RENDER is being passed in wherever it’s invoked―so 'RENDER' not in os.environ evaluates to False, which causes the issue here. I also tried to hardcode the host names into the ALLOWED_HOSTS list (e.g., ['localhost', 'jtc-bridgeapp.onrender.com']) to ensure the hosts are known to the app, because the ALLOWED_HOSTS list needs to be specified when DEBUG = False. Still the app deployed normally only to produce Server Error (500). I’m exhausted at this point and would appreciate any helping hands.

This isn’t a Render issue, but a code issue. Debugging code issues is beyond the scope of our support, as every app is different. However, in this case, to confirm that this definitely isn’t a Render issue, I dug a little deeper. As I’ve said before, I’m not a Python expert, but searching Google for “django 500 debug false” brings up this Stack Overflow article as the top result, which includes some suggestions, including adding additional logging. The linked example writes logs to a file, but if you’re using a free plan, you wouldn’t have Shell access to see that file. So you can refer to the Django docs to make those logs output to the console (example).
With that in place, you should be able to see what the underlying error is when running the app with DEBUG=False. As the repo is public, I spun up my own copy with the logging config added. The error was shown in the logs: ValueError: Missing staticfiles manifest entry for '/images/bryan-profile.jpg' Again, not being familiar with Python/Django, and Google being our friend, the top result for a “missing staticfiles manifest entry” search was another Stack Overflow article, with a suggested solution being incorrect leading slashes (/) on static declarations.

Thank you very much, Alan, for the time you spent in assisting me in this matter. The suggested links helped me find the issue, and it was indeed misplaced '/'s in the static resource links in my template files. My inexperience in looking for a solution had also compounded the problem, it seemed. So Cory’s answer to the Stack Overflow question here was dead on the issue I encountered. I’m truly grateful for your help.
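The settings changes discussed in this thread can be sketched as below. This is a hypothetical settings.py fragment, not the poster's actual file: it keeps DEBUG off when the RENDER environment variable is present (Render sets it on its platform, along with RENDER_EXTERNAL_HOSTNAME) and routes Django's logs to the console so errors like the staticfiles ValueError surface in Render's log stream even with DEBUG=False.

```python
# Hypothetical settings.py fragment based on the thread.
import os

# True locally, False on Render (which sets the RENDER env var).
DEBUG = 'RENDER' not in os.environ

# ALLOWED_HOSTS must be set when DEBUG is False; Render exposes the
# app's hostname via RENDER_EXTERNAL_HOSTNAME.
ALLOWED_HOSTS = ['localhost']
RENDER_EXTERNAL_HOSTNAME = os.environ.get('RENDER_EXTERNAL_HOSTNAME')
if RENDER_EXTERNAL_HOSTNAME:
    ALLOWED_HOSTS.append(RENDER_EXTERNAL_HOSTNAME)

# Console logging, so tracebacks appear in the platform log stream
# instead of a file you may not have shell access to read.
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {'console': {'class': 'logging.StreamHandler'}},
    'root': {'handlers': ['console'], 'level': 'WARNING'},
}
```

Separately, the 500 itself came from static references with a leading slash: with manifest static files storage, `{% static '/images/bryan-profile.jpg' %}` fails the manifest lookup, while `{% static 'images/bryan-profile.jpg' %}` works.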
OPCFW_CODE