Why would humans fight a planet full of monsters? So in my world a portal suddenly opened on a field. The portal leads to a planet full of monsters. Why would the people in my world attack and defeat the evil planet? What resources would there have to be for there to be strong motivation? This is in a semi-modern world, for example the 1800s or 1700s. If it matters, the portal was opened by evil magicians. If you want to merge the new account with the old one you mention you had but lost, I think the moderators can help you with that. Ok, how do I reach the moderators? https://worldbuilding.stackexchange.com/help/merging-accounts Ok, thanks a lot. I think that your question is too broad and does not have enough context to give you a proper answer. You provide so little information that any motivation for conquest would work. Why do you say the planet is evil? What specifically makes the inhabitants monsters? Why humans would invade is easy: free natural resources. In the 18th-19th century, the Doctrine of Discovery was still in force in Europe. The same old reasons. 1. Take their loot and bring it home. Put it up in your city to impress the folks. Or melt it down and make coins out of it, with your face on them. Maybe a monster on the flip side so people can chuckle because they got their loot taken. 2. Take their land and grow stuff on it. Sell that stuff, or eat it. Or sell it to hungry monsters who used to get their food there until you showed up. Use their payments for reason #1. 3. Take the monstery people and make them work for you. They can build a triumphal arch in your city depicting you showing up and taking their stuff. Once that is done you can have them do other jobs you need doing. 4. Convert their monstery people to your religion. This is compatible with points 1 thru 3. 5. Kill their monstery people because they came thru the portal to your world with #1, 2, 3 or 4 in mind. Nice and funny, I like it. Perhaps for ownership of land. 
During those times land represented money, in a way, and I can easily imagine the more ambitious people embarking on that quest to defeat the monsters on the planet. Also, the planet might be useful for research purposes in astronomy. I used to have a few thousand reputation but lost the account, so now I am a new contributor like you. Welcome to the site, I hope you enjoy it! This is a great answer, but generally 1-2 line answers don't get many votes, so you may want to add other reasons or perhaps add more of your reasoning for a clearer, more thorough explanation. Hope it helps! In the 1700s, the land itself (particularly frontier land) was hardly valuable in the New World. However, there were people who just couldn't bear it in the Old World, and they were willing to take the risk and move to the other continent. When responding to questions it's best to try and keep opinion out of it and fully answer the question. And while responses don't have to be lengthy, it's good to put in examples. If the monsters happen to resemble abominations from any of the major religious texts, then the portal might be mistaken for a gateway to the darker half of the afterlife. With such a mistaken conviction, the living heirs of less-than-saintly loved ones might invade hell in hopes of liberating their recently damned deceased. Monsters who happen to die during that liberation would just be collateral damage.
STACK_EXCHANGE
In this post I will try to discuss some inner details of OpenSmalltalk-VM immediate floats. Immediate floats are present only in the 64-bit VM, hence I won't talk about the 32-bit VM in this blog post. In addition, OpenSmalltalk-VM supports only double-precision IEEE floating point, hence I won't discuss single-precision IEEE floating point. OpenSmalltalk-VM uses an immediate object scheme to represent object-oriented pointers (oops) in memory. Basically, due to 64-bit alignment, the last 3 bits of all pointers to objects are 000. This is abused to encode specific objects directly in the oop itself: in our context, SmallIntegers, Characters and ImmediateFloats. This optimization saves memory and improves performance by avoiding boxing allocations for common arithmetic operations. The last 3 bits of an oop are called a tag. The immediate float tag is 100 (4 in decimal). Objects encoded directly in the oop are, in our terminology, called immediate objects. OpenSmalltalk-VM and its clients use the double-precision IEEE format, supported by most modern hardware, to represent floats. The key idea of the immediate float design is to use an immediate representation of double-precision floats to avoid boxing and save memory, while still being 100% compatible with the IEEE double-precision format (a customer requirement). Therefore, in 64 bits, OpenSmalltalk-VM uses two implementations for floats. The most common floats are represented as immediate floats, where 3 bits of the exponent are abused to encode the tag. The rest of the floats are represented as boxed floats. By design, immediate floats occupy just less than the middle 1/8th of the double range. They overlap the normal single-precision floats, which also have 8-bit exponents, but exclude the single-precision denormals (exponent -127) and the single-precision NaNs (exponent +127). +/- zero is just a pair of values with both exponent and mantissa 0. 
So the non-zero immediate doubles range from +/- 0x3800,0000,0000,0001 / 5.8774717541114d-39 to +/- 0x47ff,ffff,ffff,ffff / 6.8056473384188d+38.

Encoding and decoding

The encoded tagged form has the sign bit moved to the least significant bit, which allows for faster encode/decode because offsetting the exponent can't overflow into the sign bit, and because testing for +/- 0 is an unsigned compare for <= 0xf. So, given that the tag is 4, the tagged non-zero bit patterns are as shown in the figure above, and +/- 0d is 0x0000,0000,0000,000[4|c] (tag 4, plus 8 when the sign bit is set). Decoding and encoding of non-zero values in machine code are performed as shown in the x86_64 listings at the end of the post. Reading floats in general is fairly easy: the VM checks the class index; if the class index of immediate float is present, the float is decoded from the oop; if the boxed float class index is present, the float is read from the boxed object. Each primitive operation (arithmetic, comparison, etc.) now has to be implemented twice, once in each class, where the first operand is expected to be an instance of the class where it is installed. In Smalltalk, float primitive operations succeed if the second operand is one of the 2 float classes or a SmallInteger. They fail for large integers and arbitrary objects, in which case the VM takes a slow path to perform the operation correctly. At the end of arithmetic operations, the resulting float has to be converted back from the unboxed format to either an immediate float or a boxed float. To do so, the VM checks the exponent of the float against the smallFloatExponentOffset, 896. 896 is 1023 - 127, where 1023 is the mid-point of the 11-bit double-precision exponent range, and 127 is the mid-point of the 8-bit SmallDouble exponent range. If the exponent is in range, the float can be converted to an immediate float. If not, one needs to check whether the float is +/- 0, in which case it can still be converted to an immediate float; otherwise it has to be converted to a boxed float. 
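To make the scheme concrete before looking at the Slang and assembly versions, here is a small Python sketch of my own (not VM code) of the encode/decode path described above. The constant 0x7000000000000000 is the exponent offset 896 shifted into the rotated exponent position (896 << 53):

```python
import struct

MASK64 = (1 << 64) - 1
TAG = 4                         # SmallFloat tag bits: 100
EXP_SHIFT = 0x7000000000000000  # 896 << 53, applied after the sign rotation

def encode(value):
    """Encode a double as an immediate-float oop.
    Assumes value is +/-0 or within the SmallFloat exponent range."""
    bits = struct.unpack('<Q', struct.pack('<d', value))[0]
    rot = ((bits << 1) | (bits >> 63)) & MASK64   # rotate sign bit to the LSB
    if rot > 1:                                   # +/-0 skips the exponent offset
        rot = (rot - EXP_SHIFT) & MASK64
    return ((rot << 3) & MASK64) | TAG            # shift left 3, add tag 4

def decode(oop):
    """Decode an immediate-float oop back to a double."""
    rot = oop >> 3                                # drop the tag
    if rot > 1:                                   # +/-0 skips the exponent offset
        rot = (rot + EXP_SHIFT) & MASK64
    bits = ((rot >> 1) | ((rot & 1) << 63)) & MASK64  # rotate sign back to the MSB
    return struct.unpack('<d', struct.pack('<Q', bits))[0]
```

Note how +0.0 encodes to 0x4 (just the tag) and -0.0 to 0xC (tag plus the rotated sign bit), matching the +/- 0d bit patterns above.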
The code looks like this in Slang:

^exponent > self smallFloatExponentOffset
	ifTrue: [exponent <= (255 + self smallFloatExponentOffset)]
	ifFalse: [(rawFloat bitAnd: (1 << self smallFloatMantissaBits - 1)) = 0
		ifTrue: [exponent = 0]
		ifFalse: [exponent = self smallFloatExponentOffset]]

To conclude the post, here are the instructions generated on x86_64 to encode immediate floats. I include the instructions so that you can see how to encode efficiently using the theoretical design from the figures above, plus the quick checks for +/- 0.

000020da: rolq $1, %r9 : 49 D1 C1
000020dd: cmpq $0x1, %r9 : 49 83 F9 01
000020e1: jbe .+0xD (0x20f0=+@F0) : 76 0D
000020e3: movq $0x7000000000000000, %r8 : 4D B8 00 00 00 00 00 00 00 70
000020ed: subq %r8, %r9 : 4D 2B C8
000020f0: shlq $0x03, %r9 : 49 C1 E1 03
000020f4: addq $0x4, %r9 : 49 83 C1 04

Decoding is easier:

00002047: movq %rdx, %rax : 48 89 D0
0000204a: shrq $0x03, %rax : 48 C1 E8 03
0000204e: cmpq $0x1, %rax : 48 83 F8 01
00002052: jle .+0xD (0x2061=+@61) : 7E 0D
00002054: movq $0x7000000000000000, %r8 : 4D B8 00 00 00 00 00 00 00 70
0000205e: addq %r8, %rax : 49 03 C0
00002061: rorq $1, %rax : 48 D1 C8
00002064: movq %rax, %xmm0 : 66 48 0F 6E C0

Let me know if you have any questions or if you want me to expand this post with something else. Note: part of the blog post was extracted from the SpurMemoryManager class comment on immediate floats; I thank Eliot Miranda and the other OpenSmalltalk-VM contributors for writing it.
OPCFW_CODE
I run YouTuber workshops for kids. We are running a summer camp and we are looking for sponsorship, so we want to have a sponsorship proposal. Looking for a PHP expert in order to complete our landing page development. I am buying a newly built home and there have been structural concerns. I would like to hire a 3rd-party inspector to verify everything is structurally sound. I'm currently doing my final year project for my degree course. The topic of my project was to develop a database application for a specific company for them to store their data. I plan to use Python as the programming language and Tkinter as the GUI for the application. We are looking for an experienced proposal writer for government RFPs to assist our company put together the Technical Vol. of a proposal for a Dept. of the Navy proposal. We would like to work with a proposal writer: 1) who can meet tight deadlines 2) who is experienced in following the rigid Gov't RFP rules re. formatting and other details 3) who is willing to communicate regarding ... Fixing the add to cart button feature on the current python3 script. We need to convert C language drivers from dsPIC33EP512GP506 to dsPIC33FJ128GP306 due to the current unavailability of the former and our current old stock of PCBs with dsPIC33FJ128GP306 which we need to now use. They are very similar and essentially it is the pin usage that has to be corrected. For a guru on Microchip PICs this will be quite easy to do. Very importantly, after ... Looking for a Google Ads Tag expert to troubleshoot a multiple-event-tag issue on a Squarespace site. I have 2 event tags on a page, specifically on 2 separate buttons. The problem is, when any button is clicked, attribution only triggers the last event snippet and does not get sent to the appropriate conversion tag. 
I need code in Arduino. Basically I am building a CAN bus with an Arduino and an MCP2515, so I need Arduino C++ code. We are looking for someone who has experience in running Facebook lead ads. This is not a part-time position and we require someone who is available during the hours of 9am Sydney - 5pm Sydney. To let us know you have read this properly, please write the colour "Blue" and the answer to 7 + 2 in the subject line or we will not respond to you for this position. You will be required to ...
OPCFW_CODE
Improved bind-parameter declaration

Right now, drivers are required to infer parameter types from the binding value or the type Class via Statement.bindNull(…). In most cases, type inference just works. There are some scenarios in which type inference does not work or requires augmentation with wrapper types or casting in the actual statement:

- JSON usage (there's no JSON wrapper available in R2DBC. Postgres JSON requires a specific type as it cannot be used without further casting)
- VARCHAR vs. NVARCHAR disambiguation
- SQL typecasting (CAST($1 AS …)) before the value can be used in the SQL statement
- Future use-case: stored procedure OUT/IN-OUT parameters

Therefore, we should introduce a mechanism to specify more details about the parameters. Here's a proposal. Introduce the following types:

- Parameter interface to encapsulate a parameter declaration (in/out, fixed type, inferred type)
- Parameters utility to create Parameter instances (serves also to hold package-private implementations)
- Type interface as extensible type information
- R2dbcTypes enum to declare data types from the spec

```java
public interface Parameter {

    /**
     * Returns the parameter type.
     *
     * @return the parameter type.
     */
    Type getType();

    /**
     * Returns the value.
     *
     * @return the value for this parameter. Value can be {@code null}.
     */
    @Nullable
    Object getValue();

    /**
     * Marker interface to classify a parameter as input parameter. Parameters that do not
     * implement {@link Out} default to in parameters.
     */
    interface In {}

    /**
     * Marker interface to classify a parameter as output parameter. Parameters can implement
     * both, {@code In} and {@code Out} interfaces to be classified as in-out parameters.
     */
    interface Out {}
}

/**
 * Utility to create {@link Parameter} objects.
 */
public final class Parameters {

    /**
     * Create a {@code NULL IN} parameter using the given {@link Type}.
     *
     * @param type
     * @return
     */
    public static Parameter in(Type type) {
        notNull(type, "Type must not be null");
        return in(type, null);
    }

    /**
     * Create a {@code NULL IN} parameter using type inference and the given {@link Class type}
     * hint. The actual {@link Type} is inferred during statement execution.
     *
     * @param type
     * @return
     */
    public static Parameter in(Class<?> type) {
        notNull(type, "Type must not be null");
        return in(new DefaultInferredType(type), null);
    }

    /**
     * Create a {@code IN} parameter using the given {@code value}. The actual {@link Type} is
     * inferred during statement execution.
     *
     * @param value
     * @return
     */
    public static Parameter in(Object value) {
        notNull(value, "Value must not be null");
        return in(new DefaultInferredType(value.getClass()), value);
    }

    // implementation omitted
}

public interface Type {

    /**
     * @return default Java type.
     */
    Class<?> getJavaType();

    /**
     * @return type name.
     */
    String getName();
}

/**
 * Definition of generic SQL types.
 */
public enum R2dbcTypes implements Type {

    /**
     * Identifies the generic SQL type {@code VARCHAR}.
     */
    VARCHAR(String.class),

    /**
     * Identifies the generic SQL type {@code NVARCHAR}.
     */
    NVARCHAR(String.class),

    /**
     * Identifies the generic SQL type {@code BOOLEAN}.
     */
    BOOLEAN(Boolean.class),

    /**
     * Identifies the generic SQL type {@code BINARY}.
     */
    BINARY(ByteBuffer.class),

    // …
}
```

- Drivers can ship their own Type implementations (either static or via lookup such as Postgres with well-known/extension types [JSON, HSTORE])
- Parameter.type used to specify the actual data type to resolve ambiguity (e.g. VARCHAR vs. NVARCHAR). It's also used for the disambiguation of overloaded stored procedures.
- Parameter accepted as part of Statement.bind(…) for improved binding and type coercion.
- Type could be exposed through ColumnMetadata. 
Usage example:

```java
statement.bind("result", Parameters.out(String.class))
    .bind("input-1", Parameters.in("a-value"))
    .bind("input-2", Parameters.inOut(String.class))
    .bind("description", Parameters.in(R2dbcTypes.NVARCHAR, "some-text"));
```

Does bindNull essentially need to migrate toward something like Parameters.null()? We could deprecate bindNull (as in bindNull(<index|name>, String.class)) in favor of using bind with Parameters.in(…) (as in bind(<index|name>, Parameters.in(String.class))) or switch it to a default method. One can also use bind(<index|name>, Parameters.in(nullableValue, String.class)) to use nullableValue's state to indicate whether the in-parameter is null or whether it's a true value. JDBC does this with Statement.setString(…, value). (We should not follow JDBC in introducing typed methods, as that would clutter the API.) Since bind already accepts Object, we can leniently use either the value or evaluate the Parameter object. While we could switch to Parameter entirely, I have the feeling that we would do this for the sake of the API and make the usage more complex. Thoughts? I certainly favor deprecating bindNull. I guess an in(null) and out(null) would then signal both a null value as well as the "direction" of the parameter, so that makes the most sense (compared to null()). Could we also somehow abstract the parameter binding (the placeholder)? Because, for example: mssql-r2dbc uses "@", mysql-r2dbc uses "$", postgres-r2dbc uses "$", h2-r2dbc uses "$", mariadb-r2dbc uses "?". Proposal: either a convention or an interface that gives back the binding placeholder. Thanks for raising the issue. To make sure of a correct understanding: you want a driver to report the parameter bind marker scheme that it uses (similar to Spring Data R2DBC's BindMarkersFactory)? Yeah, exactly. The problem I'm facing currently is that the drivers have different markers, and if I hard-code them it could potentially break in the future. 
Currently, I have to do the exact same thing Spring does, and I would say that this makes it kind of inconvenient for everybody doing something on top of R2DBC (I am currently adding R2DBC to Querydsl). In Spring Data, we're aware of anonymous (?), indexed (<prefix><index> -> $1), and named (<prefix><name> -> @P01) bind marker schemes. Depending on the bind marker scheme, various restrictions apply, and we're still not sure whether the list of variants mentioned above is exhaustive or not. The description API exposes a pretty significant API surface of which some options are mutually exclusive or come with certain constraints. Especially, anonymous markers require ordered binding, while indexed/named markers can be bound using the index/name. For the time being, I'd advise reconstructing what Spring Data is doing in Querydsl. Alternatively, a community-driven effort to establish such a registry would be beneficial until we've gathered more knowledge that allows us to decide whether such a facility could be provided by the spec. Exactly to the point. While Querydsl only relies on indexed params (and the effort to resolve the issue is not huge per se), the inconsistency between drivers is adding additional effort/delays to other projects, and while a redesign of the interface is planned, this could be a nice addition.
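The bind-marker schemes discussed above (anonymous, indexed, named) can be sketched as a small registry. This is a hypothetical illustration in Python, not the R2DBC API or any real driver metadata; the per-driver prefixes are taken from the comments in this thread, and the mssql prefix/offset is an assumption for illustration:

```python
# Hypothetical sketch of a bind-marker registry: each driver advertises how
# it renders placeholders, so code built on top of R2DBC need not hard-code
# "$1" vs "@P0" vs "?".

def indexed(prefix, start=1):
    """Indexed scheme, e.g. Postgres/H2: $1, $2, ..."""
    return lambda i: f"{prefix}{start + i}"

def anonymous(symbol):
    """Anonymous scheme, e.g. MariaDB: '?'; binding must stay ordered."""
    return lambda i: symbol

MARKER_SCHEMES = {
    "postgresql": indexed("$"),
    "h2": indexed("$"),
    "mysql": indexed("$"),
    "mariadb": anonymous("?"),
    "mssql": indexed("@P", start=0),  # assumed prefix/offset, for illustration
}

def render_placeholders(driver, count):
    """Render `count` placeholders in the driver's own scheme."""
    scheme = MARKER_SCHEMES[driver]
    return [scheme(i) for i in range(count)]
```

A query builder would then ask the registry for markers instead of assuming one dialect, which is essentially what Spring Data's BindMarkersFactory does.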
GITHUB_ARCHIVE
>The previous version of the table and the ranking on Maple seems to be made in a rush doubting the validity of other entries as well. I am not well versed with the other platforms to accept or deny your conclusions.

Maple is the one I am not very well versed in because I haven't had a license in years, so I checked through the documentation to see what changed, but it's clear I missed some things. I am sorry about that, but I think it all got fixed up. In the end, my opinion of Maple's suite has changed a lot. For example, looking at the delay diffeq suite, it looks like it's designed in a similar way to what we did in DelayDiffEq.jl, where the idea is to get a stiff solver by extending a Rosenbrock method. I am sure that the Maple devs saw the same reuse tricks that can be done in this case. The accuracy shown in your extended post on the state-delay problem, along with the docs example, points to the fact that there must be some form of discontinuity tracking as well, probably silently on top of their events system if they also followed the DKLAG paper, since otherwise those examples wouldn't get that tight of errors. So in the end I think choices like this look quite similar to choices we made, so it's nice to see some precedent. I now think that Maple's delay equation setup is very good, I just don't get why it's documented so poorly. I still think that the choices for the explicit RK tableaus are odd and suboptimal, and I hope that the Maple devs update those to more modern tableaus. But those are the kinds of "sugar" things. In general it seems that for "most people's problems at standard tolerances" some explicit RK method or some higher order Rosenbrock method seems to do well, and we have all of Hairer's benchmarks and DiffEqBenchmarks.jl to go off of for that, so it's pretty clear that Maple is hitting that area pretty solidly. 
Still, it's missing some of the "sugar" like fully-implicit RK methods (radau), which our and Hairer's benchmarks say are important for high-accuracy solving of stiff ODEs. It's also missing SDIRK methods, which our benchmarks say trade with Rosenbrock methods as being more efficient in some cases. The mentioned case was a semilinear stiff quorum sensing model. I'll see if we can get that added to DiffEqBenchmarks.jl, but basically what's shown is that we agree with Hairer's benchmarks that the methods from his SDIRK4 were uncompetitive, but TRBDF2 and the newer Kvaerno and Kennedy & Carpenter methods (which are the basis of ARKODE) are much better than the SDIRK methods benchmarked in Hairer. All of the cases are essentially cases with semilinearity, where the fact that Jacobians are required to be re-calculated every step in a Rosenbrock method, whereas Jacobians are just for line searches in SDIRK implicit steps, actually made a big difference, since the standard Hairer implementation of SDIRK then allows Jacobian calculations and re-factorizations to be skipped. But again, these are edge cases looking for just a little more efficiency on some problems, while explicit RK + Rosenbrock + LSODE covers quite a bit of ground. That, plus the fact that Maple lets you compile the functions, bumped up my opinion of Maple's set of solvers to very good but not excellent. I am sure that down the line we can write a Julia-to-Maple bridge to do more extensive benchmarking (Julia links to Sundials/LSODA/etc., so then there's a direct way to do quite a bit of comparisons), but since I have found that implementations don't differ much these days from what Hairer described, I'll assume the Maple devs know what they're doing and would get similar efficiency in writing a compiled solver as anyone else does. But I will say that Maple should definitely document not just how to use their stuff, but what they are doing. It's really hard to know what Maple is doing in its solvers sometimes. 
Most of the other suites have some form of publication that details exactly what methods they implemented and why. Maple's docs don't even seem to hit that, and I only found out some of those details in forum links here. What I can find on Maple are Shampine's old PSE papers, but it seems a lot has changed since then.

>Regarding your comment on 10,000 or more ODEs, are you talking about well defined system with well defined pattern for Jacobian or matrix or arbitrarily working with a random set of matrix? I agree that Maple is weak for this, but MATLAB switches to sparse solvers for large systems.

Yes. A quintessential example is a method-of-lines discretization of a reaction-diffusion equation. The Brusselator is a standard example, but I like to use a system of 8 reactants or something like that. These PDE discretizations usually have a natural banded structure that things like Sundials have built-in choices of banded linear solvers for handling, or suites give the ability to pass in user-defined linear solvers. It definitely depends on the audience, but "using ODE solvers to write PDE solvers" is definitely a large group of users from what I've found, and so handling this well matters to those who are building scientific software on top of the ODE suites.

>I have had bad experience with shooting methods (in particular single shooting) for BVPs. It will be trivial to break any such code for DAE BVPs.

Yes, and honestly the BVP solvers were a tough call between "Fair" and "Good" here. I put Julia's as "Good" here because of flexibility. The impetus for finishing and releasing them was that we were talking about them in our chat channel and someone wanted to use them for the boundary constraint that the maximum of the velocity was 1 over the interval. Since our setup involves writing constraints using (a possibly continuous extension of) the solution, this was possible. 
And then we had some people use it for multipoint BVPs without having to do any transformations of them. So that, plus the fact that our MIRK-based method can do (stiff) multipoint BVPs and has a specialized form for banded Jacobian handling when it's a two-point BVP, is why I ended up giving it a "Good". But that doesn't mean it's close to complete at all. The shooting methods are nice, but many problems are too sensitive to the initial condition to use them, so we really need to get a continuous extension, mass matrices, singularity handling, and adaptivity to complete our MIRK-based method. But since from what's missing it's clear that someone's problem can't be solved well by this setup, one could also put a "Fair" on this, but I don't think you can justify a "Poor" because it does handle so many unique things. MATLAB actually does quite well in this area, with bvp4c solving two-point BVPs with singularities and stiffness just fine, but it's only a "Good" because it doesn't go any further. Maple's documentation explicitly states that it shouldn't be used for stiff BVPs, and it's unclear to me if it can do more than two-point BVPs, so I think that justifies the "Fair" rating. I did miss that COLSYS was so flexible in the table, along with COLDAE. If anything deserves an excellent, Netlib would be it. And the sensitivity analysis of DASPK is missing. That will get updated. Of course, simple tables with rankings can only be ballparks because it's all about the details, so I try to flesh out the details as much as possible in the text and hope to get as close as possible or as reasonable as possible in the table. I really dropped the ball in the first version of the table for Maple and I'm sorry about that. But as to:

>I am not well versed with the other platforms to accept or deny your conclusions.

let me just share the other main concerns that have been brought up: 1. Someone on Hacker News wondered why I omitted Intel's ODE solvers. 
They were fine but discontinued in 2011 and are just vanilla solvers without any event handling or other special things. 2. Someone emailed me about Mathematica having GPU usage (and now it looks like someone else posted a comment about it on my site), but that was for in general and their GPU stuff doesn't apply to ODEs. 3. Someone on Twitter mentioned that SciPy does do event handling. I can't find it in the docs at all so I am waiting for confirmation. But from what I can find, there are just sites showing how to hack in event handling and these don't even make use of dense output (and I cannot find a method in the docs, and the dev channels seem to show nobody has picked up the implementation as a project yet). So unless I'm missing something big it seems like this won't change. Edit: Looks like we are in agreement here now. So it seems Maple is the only area that really had some big valid objections that have since been corrected. With the amount of press this somehow got (man, these were just mental notes I was sharing while comparing what's available to find out what to do next haha), I think there's some confidence to be gained that more devs in other languages haven't really voiced objections. Though I'm sure that there are corrections that can and will be made as other issues are brought up.
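The "natural banded structure" point about method-of-lines discretizations earlier in the thread can be made concrete with a tiny sketch (pure Python, my own illustration; real reaction-diffusion systems with several reactants give block-banded rather than tridiagonal Jacobians):

```python
# Method-of-lines discretization of the 1-D diffusion equation
# u_t = D * u_xx on N interior points. The right-hand side only couples
# neighbouring points, so the Jacobian is tridiagonal -- the structure
# that banded linear solvers (e.g. in Sundials) exploit.

def mol_rhs(u, d=1.0, dx=0.1):
    """du/dt from the second-order central difference stencil."""
    n = len(u)
    rhs = [0.0] * n
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0       # zero Dirichlet boundaries
        right = u[i + 1] if i < n - 1 else 0.0
        rhs[i] = d * (left - 2 * u[i] + right) / dx**2
    return rhs

def jacobian(f, u, eps=1e-7):
    """Dense finite-difference Jacobian, just to inspect its sparsity."""
    n = len(u)
    f0 = f(u)
    jac = [[0.0] * n for _ in range(n)]
    for k in range(n):
        up = list(u)
        up[k] += eps
        fk = f(up)
        for i in range(n):
            jac[i][k] = (fk[i] - f0[i]) / eps
    return jac

J = jacobian(mol_rhs, [0.0] * 8)
bandwidth = max(abs(i - k) for i in range(8) for k in range(8)
                if abs(J[i][k]) > 1e-3)
```

Here the bandwidth comes out as 1 (only the main diagonal and the first off-diagonals are nonzero), so a banded solver can factorize in O(N) instead of O(N^3), which is exactly why handling 10,000+ ODEs well requires more than a dense solver.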
OPCFW_CODE
I've read the other posts on this topic, and so far, no one has had an answer.

2021-07-09T13:49:49Z E! [agent] Error writing to outputs.influxdb_v2: Post "https://davidgs.com:8086/api/v2/write?bucket=telegraf&org=influxdata": context deadline exceeded (Client.Timeout exceeded while awaiting headers)

And indeed, I have no data being written to InfluxDB. Suspecting a token issue, I made a new all-access token, yet the error persists. Earlier in the log:

2021-07-09T13:52:01Z I! Loaded inputs: cpu disk diskio dovecot http_listener http_listener_v2 (2x) kernel mem mongodb mqtt_consumer mysql net netstat processes swap syslog system webhooks
2021-07-09T13:52:01Z I! Loaded aggregators:
2021-07-09T13:52:01Z I! Loaded processors:
2021-07-09T13:52:01Z I! Loaded outputs: file influxdb_v2
2021-07-09T13:52:01Z I! Tags enabled: host=whm.davidgs.com
2021-07-09T13:52:01Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"whm.davidgs.com", Flush Interval:10s
2021-07-09T13:52:01Z D! [agent] Initializing plugins
2021-07-09T13:52:01Z D! [agent] Connecting outputs
2021-07-09T13:52:01Z D! [agent] Attempting connection to [outputs.influxdb_v2]
2021-07-09T13:52:01Z D! [agent] Successfully connected to outputs.influxdb_v2
2021-07-09T13:52:01Z D! [agent] Attempting connection to [outputs.file]
2021-07-09T13:52:01Z D! [agent] Successfully connected to outputs.file
2021-07-09T13:52:01Z D! [agent] Starting service inputs

So it can connect, it just won't write to the database. Other processes can write directly to InfluxDB on that host (I have, as you know, tons of sensors around that are all writing data directly to InfluxDB v2 without Telegraf). This problem seems unique to Telegraf.
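For anyone debugging the same symptom: the "Client.Timeout exceeded" in the error is the HTTP timeout of the influxdb_v2 output, which is configurable in telegraf.conf. A sketch of the relevant section follows; the values are illustrative of this setup, and raising the timeout is a diagnostic step rather than a confirmed fix for this particular case:

```toml
[[outputs.influxdb_v2]]
  urls = ["https://davidgs.com:8086"]
  token = "$INFLUX_TOKEN"
  organization = "influxdata"
  bucket = "telegraf"
  ## HTTP timeout for write requests; the default of 5s is what trips
  ## "context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
  timeout = "30s"
```

If writes still time out with a generous timeout, the problem is more likely network-path or TLS related than a token issue, which would be consistent with other processes writing to the same InfluxDB directly.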
OPCFW_CODE
MATLAB function that returns complete solution to Ax=b

I have a homework assignment that tasks me with writing a MATLAB function, and I'm worried that I've missed something in my current answer. The function returns the complete solution to a linear equation of the form Ax=b, where A is a square matrix and b is a vector of the appropriate dimension. The first line of the function is

function [Bs, Ns] = a(A, b)

where Bs is the basic solution (a vector), and Ns is the null solution - a matrix whose columns are a basis of the null space of A. There are also a few considerations in terms of the code used: Code will be marked based on a set of test cases. Built-in functions may be used in the code but it must be my original work. Code that produces an error or warning, such as for a singular matrix, will be assigned a failing mark. It can be assumed the test set will contain matrices that are all zero, non-singular, and otherwise rank deficient (a complete solution exists, but MATLAB will produce an error or warning). The code I've written is below.

function [Bs, Ns] = a(A, b)
    ncols = size(A, 2);
    x = pinv(A)*b;
    Bs = x;
    if ncols == rank(A)
        Ns = zeros(ncols,1);
    else
        Ns = null(A);
    end
end

The simplicity of my function has me worried that I've missed something (the assignment is worth 4% of the final grade) - either in my interpretation of the listed considerations, or that there are test cases which will cause errors/warnings. Any input would be appreciated. Looks like it should work? Is this a programming class or a linear algebra class? My two cents: give some meaningful name to your function. @MatthewGunn Thanks for having a look. It's a computer science class focused on linear algebra - "scientific computing". @brainkz I agree that would be an improvement, but we're instructed to use that function name. Only thing I'd wonder is if, when they said "built-in" functions, they wanted something a bit more primitive. 
You can do a singular value decomposition on A and use that to calculate the null space and the pseudoinverse. Your current code actually calculates the SVD 3 times :P Once for 'pinv', once for 'rank', and once for 'null'. Instead of pinv, consider also A\b (note: not the division A/b). @AnderBiguri That's what OP is trying to code. @percusse The number of different algorithms to solve Ax=b is huge. If A\b cannot be used, the best way is to go to the docs and read how mldivide does the job. However, without knowing which case the A matrix falls into, it's hard to choose a good solver. @AnderBiguri That's why this is a homework. Note that A\b will produce warnings if the matrix is singular or nearly singular. If you use that, then you'll need to ensure that you check the rank and possibly the condition number (cond) before doing A\b. @AnderBiguri He should take a singular value decomposition anyway to determine rank and null space. Once you have the SVD, calculating the pseudoinverse is basically two matrix multiplies. But yes, in the general case, you don't want to calculate inverses and multiply, you want to solve linear systems more directly.
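The "compute the SVD once" suggestion can be sketched as follows (in Python/NumPy here rather than MATLAB; the ideas map one-to-one to MATLAB's svd, and the function/tolerance names are mine, for illustration):

```python
import numpy as np

def basic_and_null(A, b, tol=1e-10):
    """One SVD yields the rank, the pseudoinverse (hence the minimum-norm
    'basic' solution), and a null-space basis -- so pinv/rank/null need
    not each recompute it."""
    U, s, Vt = np.linalg.svd(A)
    r = int(np.sum(s > tol))                 # numerical rank
    s_inv = np.zeros_like(s)
    s_inv[:r] = 1.0 / s[:r]                  # invert only significant singular values
    A_pinv = Vt.T @ np.diag(s_inv) @ U.T     # pseudoinverse from the same SVD
    x = A_pinv @ b                           # minimum-norm (basic) solution
    N = Vt[r:].T                             # columns span the null space of A
    return x, N
```

Note this never raises a warning for singular or all-zero matrices (for an all-zero A the rank is 0 and the null-space basis is the full identity-sized block of Vt), which matches the assignment's "no errors or warnings" constraint.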
On the discord we’ve had some discussion around potential alternative renderers. To my knowledge, right now GD5 uses PixiJS. However, some conversations have come up about the potential benefits of moving over to something like Three.js or LiteScene.js. While both of these are full WebGL renderers (meaning they include 3D functionality), having something capable of 3D could add benefits even if we’re not going to push polygons/do 3D support in the engine. Stuff like: Splitscreen/multiple viewports becomes much easier to implement. Full depth/layer distortion (like in a Monkey Island adventure game, when moving into the background/foreground). As far as the renderers go, Three.js is one of the most well known/used WebGL renderers out there (https://threejs.org/) and has a lot of active support. I think the engine isn’t ready for a change to another renderer. As you know, we currently use Pixi 4.8.6. In the next weeks/months the version will go up to 5.2.4, the latest version of Pixi. Looking at other solutions is interesting, but applying one is a very big decision and would indeed require a lot of investment. Adding a new renderer could be interesting, but I would like to see a native multi-platform renderer for PC apps, mobile and web. Maybe something that would allow devs to avoid shipping an app with a browser included (i.e. Electron/NW.js). That kind of app is limited by the rules in place in browsers. I'm thinking of sound and video: these are restricted, which is why you need to click before any sound/video can play. Oh, I thought Electron patched Chromium to remove pretty much all those limitations for a more native feeling. I don’t think it is about changing, it is about providing an alternative, like cocos2d. The engine is made with multiple renderers in mind, and it wouldn’t hurt to try potentially more powerful and efficient ones.
We don’t need to convert every object for this renderer directly either, so I think it wouldn’t be that difficult to provide a PoC of a new renderer (you only need to make a renderer for runtimeGame, runtimeScene, layers and SpriteRuntimeObject to have a working one). I’d also prefer to see a native renderer for desktop and mobile. The original idea behind Cocos2D was to replace both Pixi and SFML, so we would need to maintain only 1 renderer and 1 code base for both HTML5 and desktop, and maybe even add mobile support, but I don’t know why 4ian decided to focus on Pixi only and dropped SFML; Cocos2D never got into focus. Contributors did implement stuff for Cocos, but 4ian doesn’t really care about anything but Pixi. So yeah, +1 for a native renderer, but if 4ian prefers to maintain a single code base, targeting a transcompiler like Haxe or Cerberus would do. Both are custom programming languages that get compiled down to native JS, C++ or Java code, and both also have a cross-platform renderer that gets compiled natively for each target: no Cordova or Emscripten or other bull*** that makes your hair fall out, but pure native code on all platforms with a single code base. Of course I know it is never going to happen because 4ian is a JS guy, but I like to unload my thoughts on forums and then I can sleep better. I don’t know anything about it, but I think maybe we could also use GraalVM (https://www.graalvm.org/docs/reference-manual/languages/js/); it could potentially be more performant, as it is designed to run really big programs, unlike V8, which is designed to run smaller programs. Then again, I don’t know much about it, I may be wrong, and I don’t know if it’s even really possible to put it in Electron. Haxe would also be interesting, because this could mean the alternative renderer could be Heaps.io, which can build natively to JS, Windows/Mac/iOS, AND all current consoles.
It’s also a battle-tested renderer used for numerous commercial games (Dead Cells probably being the most well known): About - Haxe Game Engine - Heaps.io Game Engine As mentioned, any of this is obviously a huge undertaking, but it sounds like the Pixi > Haxe code above might make it much easier. But did he seem interested in the implementation himself? He is the only one who knows GD in and out, so he needs to be actively participating and providing some guidelines to contributors on how to go about it; “if interested” is not going to be enough imo. Haxe is not the current subject, renderers are. Haxe would be a different platform. I am 99% sure I was the one who brought it up. I would prefer to stick with the current platform for the same reason the cpp one got abandoned: one platform which can export everywhere is enough, and we don’t have the time and enough contributors to keep multiple platforms. The JS platform can export everywhere and is pretty complete. Rewriting a whole new platform would take massive amounts of time porting everything, and we would probably lose contributors, as it is possible some only know JS or don’t want to learn Haxe. This change is not necessary, potentially dangerous for the number of contributors, and time consuming. I see. However, you need to go through the pain only once, and it certainly has advantages. From my experience, with most native C/C++ graphics libs compiled for the web you get a 90% performance drop due to poor optimization, so if something runs at 60 FPS on desktop it runs at 6 FPS in the browser. At the moment I can’t find any native solution that is both easy to code and also fast in the browser. Most engines put the HTML5 logo on their website (“hey, we support that too, yay”) but the engine is not actually optimised for the web; that is the problem. Atm, Sokol seems to be the most performant solution.
Of course we can always pick a JS lib, but I'll leave that discussion to you and others because I am not so interested in that and there are so many. Sure, the first thing everyone would mention is 3D, but I remember blurymind mentioning that the dev of Pixi is looking into bringing some level of 3D support to Pixi, so maybe it is better to wait for that instead of rushing to add a second 3D renderer. Not sure if the Pixi team has started that development already or not, but maybe it's worth checking before starting any work on bringing a 3D renderer into GD without having any plans for a dedicated 3D level editor too. The funny thing is, however, if we really want to target desktop, mobile and web natively with a single code base, GD already supports Cocos2D, but I don’t know why it never received serious attention from 4ian beyond getting it up and running. Not sure what happened there; 4ian and Victor worked really hard to bring Cocos2D to us with the intention to target all platforms natively with a single code base, but the moment they got it up and running in the browser, Victor disappeared and 4ian chose to focus on Pixi and JS instead, and now he seems to be more interested in WebAssembly, but as of today I don’t know of any easy-to-code solution that is also fast in the browser. I’ve managed to get a more or less running platformer example. I struggled really hard, but I managed to have the scene background color working and the stage too, but not the sprite rendering. Like, when I jumped I could hear the jump sound, and when I moved while jumping up to the enemies I could hear the enemy stomping sound, but the sprites were not visible.
Performance issue with xpath in SQL Server 2008 I have a table with lots of large XML documents. When I run XPath expressions to select data from those documents I run into a peculiar performance issue. My query is

SELECT p.n.value('.', 'int') AS PurchaseOrderID
      ,x.ProductID
FROM XmlLoadData x
CROSS APPLY x.PayLoad.nodes('declare namespace NS="http://schemas.datacontract.org/2004/07/XmlDbPerfTest"; /NS:ProductAndRelated[1]/NS:Product[1]/NS:PurchaseOrderDetails[1]/NS:PurchaseOrderDetail/NS:PurchaseOrderID[1]') p(n)

The query takes 2 minutes and 8 seconds. When I remove the [1] parts of the single-occurrence nodes like this:

SELECT p.n.value('.', 'int') AS PurchaseOrderID
      ,x.ProductID
FROM XmlLoadData x
CROSS APPLY x.PayLoad.nodes('declare namespace NS="http://schemas.datacontract.org/2004/07/XmlDbPerfTest"; /NS:ProductAndRelated/NS:Product/NS:PurchaseOrderDetails/NS:PurchaseOrderDetail/NS:PurchaseOrderID') p(n)

the execution time drops to just 18 seconds. Since the [1] nodes occur just once in each parent node in the documents, the results are the same except for ordering. Actual execution plans for the first (slow) query and the second (faster) query: Query 1 full screen, Query 2 full screen. As far as I can see, the query with [1] does the same execution as the query without, but with the addition of some extra calculation steps to find the first item. My question is why the second query is faster. I would have expected the execution of the query with [1] to break early when a match was found and thus reduce the execution time, instead of the opposite. Are there any reasons why the execution does not break early with [1] and thus reduce the execution time?
This is my table:

CREATE TABLE [dbo].[XmlLoadData](
    [ProductID] [int] NOT NULL,
    [PayLoad] [xml] NOT NULL,
    [Size] AS (len(CONVERT([nvarchar](max),[PayLoad],0))),
    CONSTRAINT [PK_XmlLoadData] PRIMARY KEY CLUSTERED ( [ProductID] ASC )
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

Edit: Performance numbers from SQL Profiler:

Query 1: CPU 126251, Reads 1224892, Writes 0, Duration 129797
Query 2: CPU 50124, Reads 612499, Writes 0, Duration 16307

Please don't post a picture of the execution plan, but the plan itself, meaning the XML file. There is a lot of information in those XML files that is not displayed in the picture of the plan. The second query uses parallelism. That is, it was expensive enough for the optimizer to shut its eyes to the additional overhead. I'd guess the second query tells the optimizer to "dump everything", which is performed with a parallel scan. SQL Server likes to "dump everything" in this way when asked, whereas the first query asks it to "analyze and then give some". The optimizer has no way of knowing there's only one node anyway, so the execution plan it ends up picking is very different. I'd say it's similar to a situation where one table scan is cheaper than many index seeks. You are right in that the second query uses parallelism, but I can't figure out how that could account for such a big difference on my quad-core machine. I edited the question with numbers from the SQL Profiler; the total CPU time spent is more than 50% lower for the second query anyway. Is the lowered CPU consumption an effect of the parallel query plan happening to be more efficient too, in addition to being parallel? Have a look at Performance Optimizations for the XML Data Type in SQL Server 2005 and the section about "Moving Ordinals to the End of Paths". Ordinals used in path expressions for static type correctness are good candidates for placement at the end of path expressions.
The path expression /book[1]/title[1] is equivalent to (/book/title)[1] if every <book> element has <title> children. The latter can be evaluated faster for both the XML indexed case and the XML blob case by determining the first <title> element under a <book> element in document order. Similarly, the path expression (/book/@ISBN)[1] yields faster execution than /book[1]/@ISBN. Querying XML is a beast. Adding XML indexes to your table will make the queries a LOT faster. I'm planning to run those queries once to build relational tables containing the data I want to search. The reason I need speed is that we want to minimize the disturbance to our production database when we extract the data. Building an XML index takes a lot of time too.
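Applying that "ordinals at the end" guidance to the original query, and assuming (as the question states) that each of those ancestor elements occurs exactly once per document, the leading ordinals could be collapsed into a single parenthesized one. This is only a sketch, not tested against the OP's data:

SELECT p.n.value('.', 'int') AS PurchaseOrderID
      ,x.ProductID
FROM XmlLoadData x
CROSS APPLY x.PayLoad.nodes('
    declare namespace NS="http://schemas.datacontract.org/2004/07/XmlDbPerfTest";
    (/NS:ProductAndRelated/NS:Product/NS:PurchaseOrderDetails)[1]
        /NS:PurchaseOrderDetail/NS:PurchaseOrderID[1]') p(n)

The trailing [1] on PurchaseOrderID stays, since it already sits at the end of its path segment; only the leading per-step ordinals are moved.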
How long does it take for cancer to be detectable if it grows from a single cell? Assuming cancer has its origins in a single malfunctioning cell, how long would it take for that cell to grow from the point at which it is malfunctioning enough that it can be detected by the immune system, to the point at which it can be detected by:

- Routine modern medical tests
- A cancer-specific blood test
- The host noticing symptoms

This will probably be hard to tell - and depends strongly on the type of cell. Some cancers grow very slowly, while others are rather fast. I know some cancers grow fast, but I doubt there is any cancer type where the single cell becomes a symptomatic cancer overnight. Or within a month. I mean, the single cell has some barriers to overcome, like the ability to grow blood vessels. I am not familiar with all the barriers, however. If one is aware of them, it should be possible to get an average, and min/max estimates for the time. The barriers you mention are outlined well in the classic Hallmarks of Cancer paper: http://www.cell.com/abstract/S0092-8674(11)00127-9 The main problem with finding a general answer to this question is the large diversity in cancers, and the resulting problem in defining what hallmarks are sufficient to call an individual cell cancerous. There are cancers that can't be detected by the immune system or blood tests and will hide for a very long time symptom-wise, and non-cancerous tumours that can be detected in various ways very early on. Let's say for your timing reference you simply take the point of neoplasm diagnosis (for neoplasms that turn out to be cancers after testing). At this point, figuring out how much time this cancer has spent being cancerous is very difficult, and to my knowledge has never been done.
It could be possible to determine:

- The number of genetic or epigenetic malfunctions that made the difference between a benign tumour and cancer for this particular neoplasm (once again, hard to define)
- An approximate rate of acquiring those malfunctions. Conceptually, one could culture the cancer ex vivo and monitor the rate of acquisition of further mutations in cancer-related loci or epigenetic markers.

Depending on how that rate looks, it may be possible to reverse-extrapolate the rate of acquisition before diagnosis, and thereby estimate an approximate timepoint of acquisition of the first properly cancerous malfunction. Besides that, "it probably took more than a few days" is the only statement I can confidently make, although as far as biological theory goes, it's perfectly possible (though highly unlikely) for a cell to acquire a set of mutations sufficient to become cancerous in a single round of replication.
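To give a sense of scale for the min/max estimates asked about above, a common back-of-envelope model (my addition, not from the answers, and it ignores the barriers discussed: immune clearance, angiogenesis limits, cell death) assumes pure exponential growth. A clinically detectable tumour of roughly 1 cm^3 contains on the order of 10^9 cells, so a single cell needs about log2(10^9) ~ 30 doublings:

```python
import math

# Assumption: a ~1 cm^3 tumour contains on the order of 1e9 cells.
DETECTABLE_CELLS = 1e9

def time_to_detectable(doubling_time_days):
    """Days for one cell to reach ~1e9 cells under pure exponential growth.
    Ignores immune clearance, angiogenesis limits, and cell death, so this
    is a loose lower bound on real timescales."""
    doublings = math.log2(DETECTABLE_CELLS)   # ~29.9 doublings
    return doublings * doubling_time_days

# Illustrative doubling times only; real values vary enormously by cancer type.
for label, dt in [("fast (~30 d)", 30), ("moderate (~100 d)", 100), ("slow (~300 d)", 300)]:
    print(f"{label}: ~{time_to_detectable(dt) / 365:.1f} years")
```

Even with a fast 30-day doubling time, this crude model puts the single cell years away from a detectable mass, which is consistent with the "more than a few days" statement above.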
Over the past 3 months, I’ve been taking Harvard College’s online Intro to Computer Science course, CS50x. It’s being offered through edX, a relatively new non-profit addition in the world of MOOCs (Massive Open Online Courses), an initiative that’s been gaining traction in the last year or so in education. In fact, edX (founded by MIT and Harvard University) proclaims itself as “the future of online education.” The courses through edX are free, and they are essentially mirror images of their respective University undergraduate courses, loaded with lecture videos, resources, and in many ways most importantly – an active online community. Tons of resources, a helpful online community, and free of charge? Sign me up. I’m going to fast forward to my last 3 weeks of the course, whereupon I spent my time coding a zombie-themed computer game as my final project submission for the class. Before I get to the details I present to you, “Sanitation: Z” (excuse the choppy video quality, I promise it’s not actually this laggy): The object-oriented game was programmed using Python and Pygame. Here’s the gist:

- You roam around finding a way out of the maze while avoiding contact with zombies.
- You collect supplies like extra ammo and a flashlight which will aid you along the way.
- All zombies spawn with randomized health, speed, and sight range to add some dynamism into each game.
- The maze itself is formed by reading from an external text file and constructing the level based on information on where obstacles should be placed.

My biggest challenge was detecting collision between two different coordinate planes: the viewable screen and the world. As you will notice from watching the video clip, the rectangular camera stays centered on the player and is a fixed size, but the area of the actual world is much larger.
Essentially both had their own “surfaces”, and so figuring out the formula for translating the coordinate (x,y) position of the screen relative to that of the game world was the trickiest part. Oftentimes while programming, there’s a level of intimacy with the project that other people just won’t share. The challenges and bugs that took me days to figure out how to fix won’t be immediately obvious to anyone else when looking at the end product. There’s a tremendous sense of satisfaction for me though, knowing all that I had to troubleshoot and debug my way through in order to get to the finish line. So there it is. There’s definitely room for improvement and expansion, (new weapon types, environmental power ups, mini maps, boss encounters…) but I’m quite proud of what I was able to accomplish in the final weeks of the course. (See my overall thoughts on the CS50x class here) P.S. And just for fun, here’s the Scratch game I made at the very start of the course for comparison:
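The screen-to-world translation described above usually boils down to subtracting a camera offset. The sketch below is my own reconstruction of that general technique, not the project's actual code (the function names and numbers are made up): the camera rectangle is positioned so the player sits at its centre, clamped to the world bounds, and every sprite is drawn at its world position minus that offset.

```python
# Minimal world<->screen translation for a fixed-size camera that follows
# the player. Reconstruction of the general technique, not the game's code.

def camera_offset(player_x, player_y, screen_w, screen_h, world_w, world_h):
    """Top-left corner of the camera rectangle in world coordinates,
    clamped so the view never leaves the world."""
    cam_x = player_x - screen_w // 2
    cam_y = player_y - screen_h // 2
    cam_x = max(0, min(cam_x, world_w - screen_w))
    cam_y = max(0, min(cam_y, world_h - screen_h))
    return cam_x, cam_y

def world_to_screen(wx, wy, cam_x, cam_y):
    # Everything is drawn at its world position minus the camera offset.
    return wx - cam_x, wy - cam_y

cam = camera_offset(player_x=500, player_y=400, screen_w=640, screen_h=480,
                    world_w=2000, world_h=1500)
print(world_to_screen(500, 400, *cam))  # player lands at the screen centre: (320, 240)
```

In Pygame terms, you would blit each sprite at `world_to_screen(...)` each frame; collision checks stay entirely in world coordinates, which keeps the two planes from getting tangled.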
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace HumaneSociety
{
    public class Record
    {
        private string name;
        public string Name { get { return name; } }
        private bool validName = false;

        private string type;
        public string Type { get { return type; } }
        private bool validType = false;

        private double price;
        public double Price { get { return price; } }
        private bool validPrice = false;

        private DateTime arrivalDate;
        public DateTime ArrivalDate { get { return arrivalDate; } }
        private bool validArrivalDate = false;

        private int age;
        public int Age { get { return age; } }
        private bool validAge = false;

        private DateTime? shots;
        public DateTime? Shots { get { return shots; } }
        private bool validShots = false;

        private string food;
        public string Food { get { return food; } }
        private bool validFood = false;

        private int consumption;
        public int Consumption { get { return consumption; } }
        private bool validConsumption = false;

        private string color;
        public string Color { get { return color; } }
        private bool validColor = false;

        private int height;
        public int Height { get { return height; } }
        private bool validHeight = false;

        private int weight;
        public int Weight { get { return weight; } }
        private bool validWeight = false;

        private int dishSize;
        public int DishSize { get { return dishSize; } }
        private bool validDishSize = false;

        private int activityLevel;
        public int ActivityLevel { get { return activityLevel; } }
        private bool validActivityLevel = false;

        private int spaceNeeds;
        public int SpaceNeeds { get { return spaceNeeds; } }
        private bool validSpaceNeeds = false;

        public bool ValidAddInput(string[] row)
        {
            CheckFromFields(row);
            // Note: validShots and validColor are not part of this check;
            // an empty shots field is treated as valid below.
            return validName && validType && validPrice && validAge && validArrivalDate
                && validFood && validDishSize && validConsumption && validSpaceNeeds
                && validActivityLevel && validHeight && validWeight;
        }

        private void CheckFromFields(string[] row)
        {
            // Expected column layout: 0=name, 1=type, 2=price, 3=color, 4=weight,
            // 5=height, 6=activity level, 7=shots, 8=food, 9=dish size,
            // 10=consumption, 11=space needs, 12=age.
            CheckNameField(row.ElementAt(0));
            CheckTypeField(row.ElementAt(1));
            CheckPriceField(row.ElementAt(2));
            CheckArrivalDateField();
            CheckShotsField(row.ElementAt(7));
            CheckFoodField(row.ElementAt(8));
            CheckColorField(row.ElementAt(3));
            CheckAgeField(row.ElementAt(12));
            CheckDishSizeField(row.ElementAt(9));
            CheckConsumptionField(row.ElementAt(10));
            CheckSpaceNeededField(row.ElementAt(11));
            CheckActivityLevelField(row.ElementAt(6));
            CheckHeightField(row.ElementAt(5));
            CheckWeightField(row.ElementAt(4));
        }

        private void CheckNameField(string aName)
        {
            if (aName != null && aName != "") { name = aName; validName = true; }
            else validName = false;
        }

        private void CheckTypeField(string aType)
        {
            if (aType != null && aType != "") { type = aType; validType = true; }
            else validType = false;
        }

        private void CheckPriceField(string aPrice)
        {
            price = 0;
            if (aPrice != null && aPrice != "")
            {
                try
                {
                    price = Convert.ToDouble(aPrice);
                    validPrice = price > 0;
                }
                catch { validPrice = false; }
            }
            else validPrice = false;
        }

        private void CheckArrivalDateField()
        {
            // The arrival date is always stamped with the current time,
            // never read from the input row.
            arrivalDate = DateTime.Now;
            validArrivalDate = true;
        }

        private void CheckAgeField(string anAge)
        {
            if (anAge != null && anAge != "")
            {
                try { age = Int32.Parse(anAge); validAge = true; }
                catch { validAge = false; }
            }
            else validAge = false;
        }

        private void CheckShotsField(string aShotDate)
        {
            if (aShotDate != "")
            {
                try { shots = Convert.ToDateTime(aShotDate); validShots = true; }
                catch { validShots = false; }
            }
            else
            {
                // An empty shots field is valid: it means no shots recorded.
                validShots = true;
                shots = null;
            }
        }

        private void CheckFoodField(string aFood)
        {
            if (aFood != null && aFood != "") { food = aFood; validFood = true; }
            else validFood = false;
        }

        private void CheckDishSizeField(string aDish)
        {
            if (aDish != null && aDish != "")
            {
                try { dishSize = Int32.Parse(aDish); validDishSize = true; }
                catch { validDishSize = false; }
            }
            else validDishSize = false;
        }

        private void CheckConsumptionField(string aConsumption)
        {
            if (aConsumption != null && aConsumption != "")
            {
                try { consumption = Int32.Parse(aConsumption); validConsumption = true; }
                catch { validConsumption = false; }
            }
            else validConsumption = false;
        }

        private void CheckSpaceNeededField(string aSpaceNeed)
        {
            if (aSpaceNeed != null && aSpaceNeed != "")
            {
                try { spaceNeeds = Int32.Parse(aSpaceNeed); validSpaceNeeds = true; }
                catch { validSpaceNeeds = false; }
            }
            else validSpaceNeeds = false;
        }

        private void CheckColorField(string aColor)
        {
            if (aColor != null && aColor != "") { color = aColor; validColor = true; }
            else validColor = false;
        }

        private void CheckActivityLevelField(string anActivityLevel)
        {
            if (anActivityLevel != null && anActivityLevel != "")
            {
                try { activityLevel = Int32.Parse(anActivityLevel); validActivityLevel = true; }
                catch { validActivityLevel = false; }
            }
            else validActivityLevel = false;
        }

        private void CheckHeightField(string aHeight)
        {
            if (aHeight != null && aHeight != "")
            {
                try { height = Int32.Parse(aHeight); validHeight = true; }
                catch { validHeight = false; }
            }
            else validHeight = false;
        }

        private void CheckWeightField(string aWeight)
        {
            // Note: unlike the other numeric fields this only guards against null;
            // an empty string falls through to Int32.Parse and is caught below.
            if (aWeight != null)
            {
                try { weight = Int32.Parse(aWeight); validWeight = true; }
                catch { validWeight = false; }
            }
            else validWeight = false;
        }
    }
}
When Victoria Jordan first enrolled in CodePath's Technical Interview Prep Course, she was skeptical of the program. As a first-generation student and Latina, there were numerous hurdles she knew she'd have to navigate if she wanted to start a career in Computer Science. A free 10-week course designed to do just that seemed too good to be true. After just one week, Victoria realized how important CodePath would be in her CS education and career as a Software Engineer. The course showed her how to navigate a technical interview and how to solve the algorithm problems asked by all the top tech companies. 81% of CodePath's students will maintain careers in tech for more than a year after graduation – compared to the 61% national average. Victoria – a student at Texas State University – has seen firsthand how vital her time at CodePath was. "After taking the CodePath course, I feel miles ahead of all my classmates in terms of just knowing about the industry and feeling prepared," she said. "It's really sad to see all these students struggle, and I'm like, 'I swear if you take this CodePath course, it will change your trajectory.'" Victoria Jordan, who also works as a CodePath iOS Tech Fellow, will start an internship at Amazon in Seattle this summer. She fully credits CodePath for helping her secure such a competitive role. Combining creativity and Computer Science I'm a student at Texas State University. Initially, when I transferred, I was an Electrical Engineering major; going through that curriculum, I realized I enjoy software much more. There's a lot more creativity in it. I'm an artist, so Computer Science is a great avenue to combine my current interest in tech with my creative side. I wasn't sure about it [CodePath] because it almost sounded too good to be true. I was like, 'It's free? I just have to sign up?' The time commitment also wasn't exorbitant.
They're not asking you to commit a ton of time, and they don't have unrealistic expectations, so I expected it to be helpful. I honestly didn't expect it to be as helpful and integral to my journey as it was. The content is well-crafted – seeing the curriculum, how it's all presented, and the skills of the teachers leading the courses. It was impressive and well put together. It made me all the more excited once I got started. Jumping hurdles as a first-gen Latina CodePath made success feel possible. Everybody finds challenges in Computer Science. For women, especially, all the barriers and obstacles are higher. And that's also true for minorities, people of color, and first-generation students. Incredibly, CodePath's target demographic is those kinds of students. CodePath gave me the tools to feel confident about getting a job and going into interviews. It made getting a job feel possible. A recurring thing for first-generation students is a need for more guidance. None of my family members were in corporate roles or tech and could tell me: 'This is how the industry works, and this is what is expected of you.' CodePath taught me a lot of those things. CodePath gave the context and guidance for how to get into the industry and succeed in interviews. They did a great job of demystifying how to implement data structures [in the real world]. I liked how they ran through specific examples. I'd see problems on LeetCode.com, and it was cool how somebody at CodePath would walk you through those problems and methodically explain them to help you solve them. Online, getting a straightforward answer is difficult, and there are many ways to solve these problems. CodePath just took the complexity out of it and gave everybody tools and walkthroughs to solve difficult things. Also, their career fair allows you to use these skills after they teach them. Getting mentorship and guidance from real-world engineers My group had a great mentor from Lyft.
CodePath took somebody who's a senior-level engineer and humanized him. They took them off the pedestal, so I could better imagine being in that role or knowing somebody in it. Having them as a weekly mentor to be understanding and positive and walk us through problems made me think, 'Oh my God, I can do this!' Plans for her future I accepted an offer with Amazon this summer as a Software Engineer Intern in Seattle. It was completely thanks to CodePath. I would not have known how to solve the different steps of the interview without them! Interested in CodePath's Technical Interview Prep course? You can learn more here.
Bergson’s view of man as a creator, above the approval of fellow humanity, reads as Nietzschean. In Mind – Energy he wrote ‘the joy he feels is the joy of a god.’1 He equated this person with ‘superman’2 – in Nietzsche’s philosophy the higher state of Übermensch embodies the ‘will to power’ and creation. Another parallel between these two philosophies is that just as creative intuition entails a willed effort to transcend logical patterns of thought, Bergson’s élan vital and Nietzsche’s ‘will to power’ both represent a struggle to gain freedom from the social and material environment. Bergson also distinguished between the artist or poet and ‘the common herd.’3 He wrote that the aim of art is to lay bare the secret and tragic element in our character,4 and that ‘True pity consists not so much in fearing suffering as in desiring it.’5 Bergson wrote that the ‘inward states’ of creative emotion are the most intense as well as the most violent.6 His words ‘for what interests us in the work of the poet is the glimpse we get of certain profound moods or inner struggles’7 are closely echoed in those Picasso used with regard to Cézanne and Van Gogh. ‘It is not what the artist does that counts, but what he is…What forces our interest is Cézanne’s anxiety – that’s Cézanne’s lesson; the torments of Van Gogh – that is the actual drama of the man. The rest is a sham.’8 Bergson held that the object of art is to put to sleep the resistance of the viewer’s personality (a spiritualised hypnosis), to bring the viewer ‘into a state of perfect responsiveness, in which we realise the idea that is suggested to us and sympathise with the feeling that is expressed.’9 To provoke an intuitive response, the elements of the canvas must first arouse the viewer’s emotions and sensitivity to the flow of true duration.10 This can be achieved in a number of ways. Devices include the rhythmical arrangement and effect of line and words.

Juan Gris.
Still Life with Checkered Tablecloth, 1915, Private collection ‘it is the emotion, the original mood, to which they (artists) attain in its undefiled essence. And then, to induce us to make the same effort ourselves they contrive to make us see something of what they have seen: by rhythmical arrangement of words.’11 Bergson also gave the example of letters (of words) which are parts of a poem which one knows, but randomly mixed. Because one knows the poem, one can immediately reconstitute the poem as a whole. This is an example of the reconstitution of the real parts of intuition (and metaphysics), distinct from the partial notations of analysis and the positive sciences, which cannot be reconstituted. It was Bergson’s philosophy that the Cubists drew on in their use not only of material not previously associated with art (sand, wallpaper etc.) but also of part words and lettering. ‘Now beneath all the sketches he has made at Paris the visitor will probably, by way of memento, write the word “Paris”. And as he has really seen Paris, he will be able, with the help of the original intuition he had of the whole, to place his sketches therein, and so join them up together.’12 Negation also affirms and suggests aspects of an object.13 Another device is the conveyance of the notion of passage. 
The technique of passage derives from Cézanne, but its stimulus may well lie in Bergson’s philosophy.14 Not only did Cubism develop on this; a similar treatment can be seen in art contemporary with it that has established connections with Bergson’s philosophy – that of Gleizes, Metzinger, the Futurists and Delaunay.15 Bergson wrote of flexibility, mobility, ‘almost fluid representations, always ready to mould themselves on the fleeting forms of intuition.’16 Evocative of the refined and far more relaxed methods of so-called Synthetic Cubism are Bergson’s words ‘Intuition, bound up to a duration which is growth, perceives in it an uninterrupted continuity of unforeseeable novelty.’17

Pablo Picasso, ‘Ma Jolie’, 1913-14, oil on canvas, Indianapolis Museum of Art, Indianapolis

‘So art, whether it be painting or sculpture, poetry or music, has no other object than to brush aside the utilitarian symbols, the conventional and socially accepted generalities, in short, everything that veils reality from us, in order to bring us face to face with reality itself…realism is in the work when idealism is in the soul and…it is only through ideality that we can resume contact with reality.’18 Bergson’s entire philosophy, and the fundamental problem with it, lies in his distinction between the ‘mind’ (consciousness) and the brain, between subjective reality and objective reality. This is encapsulated in the following: ‘That there is a close connection between a state of consciousness and the brain we do not dispute. But there is also a close connection between a coat and the nail on which it hangs, for if the nail is pulled out, the coat falls to the ground. Shall we say, then, that the shape of the nail gives us the shape of the coat, or in any way corresponds to it?
No more are we entitled to conclude, because the psychical fact is hung onto a cerebral state, that there is any parallelism between the two series psychical and physiological.’19

Georges Braque, Violin and Pitcher, 1910

It is my contention that it was very likely this most fundamental of philosophical issues, rather than a play on illusion, to which the nail in Braque’s Violin and Pitcher, 1909-10, referred. As Bergson and Braque would have been aware – a lot hangs on it. Part thirteen/to be continued…

1. Selections from Bergson, op. cit., 114 ↩
2. Ibid., 101, from Creative Evolution, op. cit. ↩
3. Laughter, op. cit., 151 ↩
4. Ibid., 160 ↩
5. Time and Free Will, op. cit., 19 ↩
6. Laughter, op. cit., 158 ↩
7. Ibid., 166 ↩
8. From an interview with M. de Zayas in Theories of Modern Art, op. cit., 272 ↩
9. Time and Free Will, op. cit., 14 ↩
10. Antliff wrote that for Bergson, the provocation of an intuition depends on the activation of the beholder’s subliminal ‘mind’. ↩
11. Laughter, op. cit., 156 ↩
12. An Introduction to Metaphysics, op. cit., 33 ↩
13. Creative Evolution, op. cit., 288 ↩
14. See G. Hamilton, ‘Cézanne, Bergson and the Image of Time’, Art Journal, xvi, Fall, 1956, 2-12 ↩
15. See Antliff on the use of passage to evoke the apprehension of the dynamism of form. Definition was not sought but suggestion ‘so that the mind of the spectator is the chosen place of their concrete birth.’ Inventing Bergson, op. cit., 52 ↩
16. The Creative Mind, op. cit., 198 ↩
17. Ibid., 39 ↩
18. Laughter, op. cit., 157 ↩
19. Matter and Memory, op. cit., 13 ↩

Image sources: 1st/2nd/3rd
I recently built a fairly expensive and powerful machine, however I have run into nothing but problems with it. My specs are:

Asus Sabertooth Z77
i7 3770K @ 4.4 GHz
G.Skill memory @ 1866
OCZ Vertex 4 - 64 GB (boot drive)
Seagate 7200 RPM 1 TB HDD
Antec High Current Pro 750
EVGA FTW GeForce GTX 670 x1

Initially, my computer would not boot at all (it turned out to be a BIOS issue; flashback fixed this). The second problem I ran into was that the computer would not fully boot (Windows logo, then a crash). This was fixed, somehow, by switching the BIOS option for SATA config to IDE. Now that I could reach the desktop, the machine was blazing fast. The problem, however, is that my video card would not turn on. I figured I hadn't put it in correctly, so I shut everything off, removed it, and tried it again. Still no luck. I removed the PCI-e power connectors and re-inserted them. That didn't work either. Assuming the PCI-e slot was bad, I stuck it in the second slot. Upon turning the computer on, the fans on the video card would still not turn on. I now assumed the video card was bad, replacing it with my Radeon HD 6870 (I'm currently using this on my old rig). My heart sank when this card didn't turn on either, knowing it was functional. I was now looking at a bad PSU or 2 dead PCI-e slots on a very reliable MOBO. Just throwing this in: I had 2 DOA MSI boards prior to the Sabertooth. Needless to say, I was not happy and wanted to say it was the PSU. Extremely frustrated, I took my 650w PSU from my old rig and hooked it up with the 6870. Everything worked! ...except the video card, again. The rig continued to run perfectly fine (running off of integrated graphics) for about two days. Then, suddenly, it shut off, never to turn on again. Neither PSU could power it up; every connector was properly in place. My end assessment is... unfortunately... the MOBO.
I would really appreciate it if anyone could shed some light on my errors (if any) or any actions I may have taken that could have led to this result. The RAM is fine; it functions in both computers. And Ivy Bridge recognizes the 1866... that's the 3rd-gen Intel processors. It has booted at 1333, 1600 and 1866. The problem is it shut off, and will not boot. It won't POST either, and no LEDs on the MOBO turn on. It won't POST. I cleared CMOS and reflashed the BIOS. When I said it won't boot, I meant POST/boot. The fans spin a little, then shut off. The only way to get it to go that far is to shut off power to the MOBO and turn it back on.
More than 45 million U.S. workers could be displaced by automation by 2030 amid advancements in the field of artificial intelligence, according to 2021 estimates from the research firm McKinsey Global Institute. With the emergence of online AI chatbots like ChatGPT, which can convincingly mimic human writing and generate code, could software developers be among them? Are the architects of AI chatbots effectively engineering themselves out of a job? Several experts doubt it. Ever since OpenAI launched ChatGPT late last year, the Internet has been abuzz with debate about whether continually improving AI tools can or should replace people in a variety of jobs. But according to Alan Fern, professor of computer science and executive director of AI research at Oregon State University’s College of Engineering, AI chatbots still primarily work best as tools for programmers rather than as programmers themselves. He believes that when it comes to the more thoughtful design decisions, humans aren’t going anywhere anytime soon. “There is already a ChatGPT-style system for coding called Copilot, and it is basically a GPT model whose training was focused on code, GitHub code. I have heard many very good programmers say that tool has improved their productivity, but it’s just a tool and is good at the mundane things that take programmers time to look up or learn,” he said in an email to Government Technology. “Copilot still will generate faulty code, just like ChatGPT produces incorrect statements, so humans have to stay in the loop. These models don’t really reason at a deep level and there isn’t a clear path to getting them there.
It is for that reason that I think programmers will be employed for a long time, but the efficiency will increase dramatically.” “The kinds of jobs that might become obsolete or much diminished [by AI advancements] could be those that are mostly about eloquence but do not require deep thinking. Some customer service positions are like that,” he added. “The hard thing to predict is what jobs, companies, industries, will be created.” Dakota State University computer science professor Austin O’Brien agreed that while AI has made impressive leaps in its ability to replicate human writing, for instance, it still has a long way to go before it can be trusted to do coding. He said the technology is still prone to making errors like AI hallucinations, which happen when an AI model generates output to an inquiry that makes little to no sense. “ChatGPT was trained on human language with the goal of producing human-like text. It’s clear that code repositories were also used for training, and I have seen some very impressive output when asked to create code related to assignments I have given to students. That said, I have also asked it to create a few things that aren’t possible in code, and it would give its best shot, though it was quite incorrect. This happened when ChatGPT was first released, but trying it again recently, it now lets me know that it’s not possible, so it appears that they are continually updating it with new information to make it better,” he wrote in an email. “Since it is based on natural language models, it’s mimicking what it has seen before in that context and doesn’t have a deeper understanding of the algorithms, data structures, or have general problem-solving skills.
It can’t truly extrapolate new solutions to unknown problems and will likely struggle when new ones are presented.” While using existing AI technology to replace coding professionals may be years and decades down the line, especially for more advanced software development functions, O’Brien expects some positions more generally to become obsolete due to advancements in AI. He said this is already happening in occupations such as data entry and customer support, slowly but surely. “With the loss of these jobs, there is usually an increase in job creation in other areas, often in the technology industry itself, like AI, cybersecurity and data analytics. I don’t, however, really think it’s fair or reasonable to tell someone who might lose their job to simply learn a new technology skill,” he said. “I think it’s important for transition programs to be in place to help these people procure new jobs in the changing market. … History is full of examples where workers have been displaced by new technology and the job market had to adapt. It’s not necessarily a new problem, but one that must be addressed again soon.” Saurabh Bagchi, a professor of electrical and computer engineering and computer science at Purdue University, said ChatGPT-like AI tools appear to be getting better at producing “snippets” of code, but agreed that the technology is still not fully reliable by any means. He added that when ChatGPT puts together a piece of code, there is no way of tracing it back for attribution to see whether it comes from licensed software packages, which could present intellectual property concerns for those using it in its current form for software development.
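The caution the experts describe can be made concrete with a small, hypothetical Python sketch (not from the article; the function names and the bug are invented for illustration): AI-suggested code often looks plausible and even passes a casual spot check, which is exactly why a human reviewer with systematic tests has to stay in the loop.

```python
# Hypothetical illustration: a plausible-looking "AI-suggested" median that
# hides a subtle bug, and the human-reviewed correction that a test catches.

def median_ai_suggested(values):
    """Looks reasonable: pick the middle element after sorting.
    Subtle bug: for even-length input, the two middle values must be averaged."""
    ordered = sorted(values)
    return ordered[len(ordered) // 2]

def median_reviewed(values):
    """Human-reviewed version handling both odd and even lengths."""
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    if n % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

# A casual spot check on odd-length input passes...
assert median_ai_suggested([3, 1, 2]) == 2
# ...but a systematic review exposes the even-length case.
print(median_ai_suggested([1, 2, 3, 4]))  # 3, not the correct 2.5
print(median_reviewed([1, 2, 3, 4]))      # 2.5
```

The point is not that such bugs are hard to fix, but that they are easy to miss when the generated code reads fluently.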
“It’s a quantum leap over where AI code assistants were even two years ago,” he said. “But a lot of the industry colleagues that I work and collaborate with and hear from are a little cagey about using code generated by ChatGPT. It’s not clear how safe or reliable ChatGPT-generated code is. This is under active research in academic labs, including ours, and we hope to get a better idea within six months or so.” While the technology is still not capable of replacing human programmers responsible for updating and maintaining large-scale software involving efficient algorithms, legacy systems and languages, computer science professor Amanda Fernandez of the University of Texas at San Antonio said in an email that AI chatbots could prove useful in jobs like journalism for preliminary topic research, or for helping teachers make lesson plans, for instance. Still, she said, “a human will always need to be in the loop to validate accuracy” with the AI’s output. “There have been many advancements in programming meant to ‘remove the programmer’ from needing to write code over the years. For example, the COBOL programming language was designed to make it easier for anyone to write code [through] a programming language which reads more like English, as opposed to assembly language or binary,” she said. “These tools and concepts have changed the way programmers complete tasks, and systems like ChatGPT will similarly affect these jobs as a helpful resource.” But O’Brien noted that advancements in the field of AI outside of programs using natural language processing could eventually be a different story when it comes to whether humans will be replaced in any given job market, or for more advanced software development roles.
“One day, a new technology that possesses these attributes may come along, but I don’t think ChatGPT is it,” he said.
Looking for online ASP.NET Core coding help with coding problems is in order. Usually one will not build from well-known implementations. Some years ago I used to do the following: build the project, then create a file called project.asp.nodocs with the path to the ASP.NET Core project you are trying to follow. Where are you located? The project.asp.nodocs directory is located in the Apache subdirectory /ProjectRoot and must have the following elements. Your project.asp.nodocs is located in /ProjectRoot. Because the project.asp.nodocs may contain code generated by ASP.NET Core in the site directory, it makes no sense to call the project.asp.nodocs if it does not exist. Make sure you have the project.asp.nodocs directory enabled and you know where you are. Be aware that the site directory (the project file that should show up in your Apache Preferences) is contained within the project.asp.nodocs directory as well. If you have an existing ASP.NET Core project that does not have the project.asp.nodocs directory, you may find that it is located at /ProjectDir/project.asp.nodocs. Adding the project.error.inc to /ProjectDir: there may be a problem with the project.asp.nodocs or project.error.inc files you have included but never encountered in the main project. The Apache Error Log (Elog) shows that the project.asp.nodocs directory does not appear to have any errors generated on it. First, add the project.nodocs to your Apache Preferences (/ProjectDir/project.nodocs) file as a resource in your project.asp.nodocs. Include such a resource in your project. Second, add the project

Looking for online ASP.NET Core coding help articles and books? If this question is accurate, there are many. It is one of the world’s most popular resources for web development: how to develop an ASP.NET MVC application. Find all of the useful resources in one place. Hashing everything onsite is the key to finding tutorials online.
If you are a developer interested in database optimization and have the skills needed to develop and manage models for database consolidation, why don’t you invest in data-driven applications? Data-driven databases now work in a lot of ways: document properties, indexes and fields, model properties, security and usage attributes, data sets, views, view model files and model-to-view-model settings, etc. Additionally, databases that are constantly updating and running are growing over time. Developing and maintaining a database or its objects yourself often involves great stress for the developer, as he would expect that his knowledge could have better value. A few examples: a bad way to add objects in a PostgreSQL database is to add them inline in a view. This is a tedious thing and makes it even more frequent to add properties and fields to it. A better way to manage real-time databases is to configure the database in-line so that the new controls that are added in a view are displayed next to the current view. Another way to maintain a database is to create its own view… You will be using a new controller, something to check for changes of a table.
Script Name: License GUI
Script Author: ImperiumXVII

The current license system has been around since the dark ages, easily coming up on ten years now. Given SA-MP 0.3DL's ability to add custom textures, I never understood why we didn't make the game world more immersive and prettier. So I revamped the /licenses command. This new license GUI also gives fake licenses a new lease on life by allowing the player to falsify how many driver warnings they have and whether or not their license is suspended, making fake licenses useful for more than just showing to police. The License GUI script allows for a graphical user interface when interacting with licenses, i.e. driving license, weapon license, et cetera. The script shows an image of the player's character (or the off-duty equivalent if on-duty) and their name and address, accompanied by what licenses they hold. To prevent a person from blocking a recipient's screen, a non-intrusive prompt is shown on the recipient's screen for the recipient to hit Y to show the graphic or N to simply print the license details to the chat box. A person may purchase fake licenses and use this script by falsifying which licenses they have, and how many driver warnings they have/if their license is suspended. This is an easy system and can be used with the command /fakelicenses. Note that at the top of every license is a 'CN', or Citizen Number. This is the most reliable way to tell if a license is fake, and this is only if you're smart enough to figure out how it works. That's the only info you're gonna get on the CN, so good luck! There are other ways to detect if a license is fake however, but you're going to have to figure that out yourself. With the new license GUI comes a new way for people with more than 1 house to set their primary address, with the addition of the /setaddress command. This changes which address shows up on your license.
You can't, however, change your primary address more than once every 48 hours without losing your property (e.g. through selling). As a player can only own 4 properties, /setaddress will give you a choice of each of your 4 properties, plus 1 rented property, to choose to display on your license.

/licenses [ID] - shows the licenses prompt shown in 'Media' below.
/fakelicenses [ID] [Driving (Y/N)] [Flying (Y/N)] [Weapon (N/PF/GC/CCW/ALL)] [Medical (N/BLS/ILS)] [Driver Warnings (4 = Suspended)] [Name (with _)] [Address] - shows a fake license with the parameters you input.
/setaddress [1-5] - set your primary address to 1 of 4 of your owned houses, or your rented property.
/close - closes the license GUI.
- closes the license GUI and displays the info in the chatbox.
package mirrg.boron.util.suppliterator;

import static mirrg.boron.util.suppliterator.ISuppliterator.*;
import static mirrg.boron.util.suppliterator.SuppliteratorCollectors.*;
import static mirrg.boron.util.suppliterator.SuppliteratorCollectors.cast;
import static org.junit.Assert.*;

import java.util.Optional;
import java.util.stream.Collector;
import java.util.stream.Collectors;

import org.junit.Test;

import mirrg.boron.util.UtilsString;
import mirrg.boron.util.struct.ImmutableArray;
import mirrg.boron.util.struct.Tuple;
import mirrg.boron.util.struct.Tuple1;
import mirrg.boron.util.struct.Tuple3;
import mirrg.boron.util.struct.Tuple4;

public class TestSuppliteratorCollector
{

	@Test
	public void test_1()
	{
		{
			assertEquals('9', (char) characters("739184562")
				.collect(teeing(
					max())).x.get());
		}
		{
			Tuple<Optional<Character>, Optional<Character>> t = characters("739184562")
				.collect(teeing(
					SuppliteratorCollectors.<Character> max(),
					SuppliteratorCollectors.<Character> min()));
			assertEquals('9', (char) t.x.get());
			assertEquals('1', (char) t.y.get());
		}
		{
			Tuple3<Optional<Character>, Optional<Character>, Long> t = characters("739184562")
				.collect(teeing(
					SuppliteratorCollectors.<Character> max(),
					SuppliteratorCollectors.<Character> min(),
					counting()));
			assertEquals('9', (char) t.x.get());
			assertEquals('1', (char) t.y.get());
			assertEquals(9, (long) t.z);
		}
		{
			Tuple4<Optional<Character>, Optional<Character>, Long, Tuple1<Long>> t = characters("739184562")
				.collect(teeing(
					SuppliteratorCollectors.<Character> max(),
					SuppliteratorCollectors.<Character> min(),
					counting(),
					teeing(
						counting())));
			assertEquals('9', (char) t.x.get());
			assertEquals('1', (char) t.y.get());
			assertEquals(9, (long) t.z);
			assertEquals(9, (long) t.w.x);
		}
		{
			Tuple4<Optional<Character>, Optional<Character>, Long, Tuple3<Long, String, String>> t = characters("739184562")
				.collect(teeing(
					SuppliteratorCollectors.<Character> max(),
					SuppliteratorCollectors.<Character> min(),
					counting(),
					teeing(
						counting(),
						joining(),
						joining("|"))));
			assertEquals('9', (char) t.x.get());
			assertEquals('1', (char) t.y.get());
			assertEquals(9, (long) t.z);
			assertEquals(9, (long) t.w.x);
			assertEquals("739184562", t.w.y);
			assertEquals("7|3|9|1|8|4|5|6|2", t.w.z);
		}
		{
			ImmutableArray<Object> t = characters("12345")
				.collect(teeingOf(
					joining(),
					joining("|"),
					joining(","),
					joining("\n"),
					counting(),
					joining(";"),
					joining("-")));
			assertEquals(7, t.length());
			assertEquals("12345", t.get(0));
			assertEquals("1|2|3|4|5", t.get(1));
			assertEquals("1,2,3,4,5", t.get(2));
			assertEquals("1\n2\n3\n4\n5", t.get(3));
			assertEquals(5, (long) t.get(4));
			assertEquals("1;2;3;4;5", t.get(5));
			assertEquals("1-2-3-4-5", t.get(6));
		}
		{
			ImmutableArray<Object> t = characters("12345")
				.collect(teeing(
					ISuppliterator.rangeClosed(0, 4)
						.map(i -> joining(UtilsString.repeat("-", i)))
						.toImmutableArray()));
			assertEquals(5, t.length());
			assertEquals("12345", t.get(0));
			assertEquals("1-2-3-4-5", t.get(1));
			assertEquals("1--2--3--4--5", t.get(2));
			assertEquals("1---2---3---4---5", t.get(3));
			assertEquals("1----2----3----4----5", t.get(4));
		}
	}

	@Test
	public void test_ofCollector()
	{
		Collector<CharSequence, ?, String> a = Collectors.joining(",");
		ICollectorFactory<CharSequence, String> b = ofStreamCollector(a);
		assertEquals("1,2,3,4,5", characters("12345")
			.map(c -> Character.toString(c))
			.collect(b));
	}

	@Test
	public void test_cast()
	{
		ICollectorFactory<Object, String> a = joining(",");
		ICollectorFactory<Object, CharSequence> b = cast(a);
		@SuppressWarnings("unused")
		ICollectorFactory<Character, CharSequence> c = cast(a);
		assertEquals("1,2,3,4,5", characters("12345").collect(a));
		assertEquals("1,2,3,4,5", characters("12345").collect(b));
	}

}
I see he commits some stuff to the X server, but who knows if these commits were part of Canonical's business strategy. I'm sure they have a policy which allows developers to fix bugs when they encounter them. Judging from the size and type of these commits, there is no indication of a real direction. If you value those petty bugfixes as real contributions, then you're right. But look at other distributions: Gentoo developers, for example, work on their own devfs in userspace (eudev), Debian has its own Linux kernel team and Red Hat is the largest contributor to X.org. It is an insult to every one of them when somebody like you tries to bring Ubuntu on par with them. The Ubuntu developers may not "never" contribute upstream, but compared to other distributions, it's a bloody joke. And you know that. BTW: Stop being so cocky. Take this as friendly advice. Sometimes, forking is a good way to contribute to upstream, especially when the forked project goes in a bad direction (Xonotic, Mage+, LibreOffice, ...). The problem with udev is that they are heading towards being systemd-specific. The lack of interest from most distributions derives from the fact that they are using systemd anyway. As I don't consider init-system-specific solutions to be ideal, forking is a valuable contribution to the project itself. Gentoo is specifically interested in eudev because it is one of the few distributions which allow you to use multiple init systems. Please read into the topic before giving unqualified statements. As pointed out, if other distros don't care about it, then it's ultimately not upstream contribution, and it's no different than Mir or Upstart. But look at other distributions: Gentoo developers for example work on their own devfs in userspace (eudev) Likewise... BTW: Stop being so cocky. ...uh... wait... How come, then, that the currently best-selling hardware platform (ARM) isn't even x86 compatible?
If we follow your logic, any computing device should only be an x86_64 variant running some custom Windows build. Well, the situation has changed a lot since the 80s (and early 90s). Back then every single computer platform had a completely different set of innards, *BUT ALSO* each one ran a completely different set of operating systems and software, almost all completely hand-written assembler for the peculiar type of CPU in that machine and optimized for its hardware quirks. Today, thanks to open source (with source available everywhere and most of the software being written cross-platform using common languages), getting Linux running on anything is usually mostly only a compile away. Linux runs on x86 & x86_64 (most popular on desktops & laptops), but also on ARM (most popular on smartphones/tablets/ultra-light netbooks), but also on MIPS (very popular on routers/modems), but also on PowerPC (PlayStation, some servers) and several other platforms (other server CPUs like SPARC, etc.). Not only that, but more and more software doesn't even care what the CPU is: - software compiled into bytecode (Android is built around a Java-like Dalvik). - Even Windows 8 (though not open source): since they started offering ARM platforms as well, they strongly recommend and support cross-platform applications, either in HTML5 or compiled into .NET bytecode. In short, in modern days, the architecture doesn't matter that much. You'll still get your Linux flavour for that one. (You already have x86_64, ARM and MIPS which are *very* widespread). Whereas in the old days, it wasn't only Z80 vs 6502 vs 8088/x86 vs 68000, but also MS-DOS vs. CP/M vs. AMOS vs. STOS vs. C64 BIOS, etc., all with a bunch of different hardware to directly talk to. There's "Leon", a SPARC-based CPU whose VHDL has an LGPL license. Sun themselves have also released a few cores under an open-source license as OpenSPARC. There's also the OpenRISC. There are open cores out there. What is needed is a whole market for them.
(Not much interest beyond academia, for now). You can play stuff one-shot: you can load a small audio file and simply tell the sound chip to play it. (That's what happens when a desktop application plays a small sound effect). You can also do something which looks like double buffering: you fill a buffer with audio, send that buffer to the sound card, then while it is playing, fill another buffer; when the card has finished the previous one, send the current one to play, then proceed to the next buffer, etc. (That's what happens when you play a long audio sequence: a big MP3 file, streaming a web radio, or mixing several sound sources with a software mixer). Note that this kind of buffering requires making a compromise between uninterrupted audio playing and latency. Either you use BIG buffers (with each holding 1s worth of sound) and the chance is very low that the playing will get interrupted (you always have a few 1-sec buffers of headroom before reaching the point when you have nothing left to play), but you have a huge latency (if an app wants to play a sound effect, it will only be added to the next buffer being processed and thus will only be heard in a few seconds, once the previous buffers already in line have finished playing). Or you have the opposite of this (CPU usage is high as it is constantly filling small buffers, and sound glitches have a high risk of happening in case of bad scheduling of threads, but at least since the buffers are small, latency isn't that bad). You can also do something which looks like a ring buffer: audio is continuously looping over the same buffer, and PulseAudio is filling this buffer continuously with a varied amount of "ahead" time. If you're listening to a radio, Pulse will completely fill all the buffer ahead, then put the CPU to sleep, then wake up half a second later, append half a second's worth of audio, and go back to sleep.
At any point in time there's a lot of headroom between which part of the buffer is playing and which part is getting filled. If suddenly an immediate sound is needed, with low latency, Pulse will start re-writing the buffer just a few samples ahead of the "pointer" where sound is read. Pulse is only slightly ahead and finishes writing audio almost just before it gets played. Pulse is almost feeding the audio in real time as it is played. The latency is minimal (though the CPU usage gets higher, but only during this time). Thus, unlike the previous solution, you don't need to make compromises. Pulse is constantly tuning itself by varying how far ahead of the currently played sample it is in the circularly playing buffer. This last one is a perfectly "normal" mode of work. The problem is: Pulse is the only piece of software that works this way under Linux. Every other sound system uses exclusively one of the first 2 methods. So even if this mode is "supposed to work", you might find bugs in the audio driver that Pulse is the only one to hit, because it's the only software functioning that way. You thought that ALSA was functioning perfectly, whereas actually it is not. ALSA is buggy, but it happens that only Pulse finds the bug. Or maybe the driver is technically correct, but your piece of hardware is half broken. It works under Windows because its driver is accordingly twisted to adapt to the quirks of the hardware, but it doesn't under Linux because the workarounds aren't there. Except that the weirdness only happens with Pulse. Probably either the other modes of play were fixed before because people noticed the problem, or the problems only arise when the circular buffer is used and nobody noticed until Pulse. In the end, there are needed fixes that should go into ALSA, but aren't there. Pulse can't do much (no matter what the programmers of Pulse do, they are stuck.
There's nothing you can do if the underlying ALSA stack can't correctly return the "currently playing" pointer). Now the thing is, this feature (low latency when playing real-time sound, or conversely the ability to put the CPU to sleep and save power when it's just predictable audio playing) is actually important. Low latency is really important in several end-user scenarios (mostly for VoIP calls, and for games), and keeping CPU usage low is important (while playing music, especially if the device in question is portable and runs on a battery). What Pulse tries to do is to emulate the functioning of a hardware mixer (mixing sound with very low latency and low power). The problem is that such mixers are getting rarer: the current tendency is to just put in a chip that is basically a multi-channel duplex DAC/ADC and do everything in software. (How many people in this thread have an Audigy sound card with hwmix vs. how many people are just using their on-board "Intel HDA" chip?) So you pretty much can't run away from Pulse; it's the only viable way to have sound in games, Skype and web radios. But... as with any newer technology, it will require testing, fixing broken drivers, circumventing broken hardware, etc. There are good distros doing a decent job of packaging Pulse (my openSUSE seems to be one). There are also very bad distros which tend to think along the lines of "hey, pulse version 0.0.1-prealpha is out! Let's make it an obligatory requirement!". That's the behaviour which is bringing problems to Pulse (which would otherwise be a useful piece of technology). The same thing also happened with KDE 4 (with several distros switching to the "technological preview" without much thinking). And the same will very likely happen in the future with Wayland, with on one side distros taking great pains to make sure they provide a well-integrated preview experience and also provide a decent fallback for users preferring to wait.
And you'll probably have a bunch of distros just throwing in whatever current version is deemed releasable (you'll find a KDE 5 preview running on a Wayland beta and the whole thing crashing like it's Windows 9x). [The patch] is literally incomplete and caused crackling and all sorts of other problems. (Obviously incomplete, even!) The patch has since been reverted and this will presumably be included in the next release - along with a bunch of new regressions and bugs, because they never actually stop changing things for long enough to stabilise their code. More annoyingly, I'm not sure it was even the source of the crashes I was seeing in the resampler.
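The ring-buffer mode described earlier can be sketched in a few lines of Python. This is a simplified toy model, not PulseAudio's actual implementation: a writer stays a variable distance ahead of the hardware's read pointer, predictable audio is written far ahead so the CPU can sleep, and an urgent low-latency sound is injected by rewriting just ahead of the read position.

```python
# Toy model of circular-buffer audio scheduling (NOT PulseAudio's real code):
# the "hardware" reads at read_pos while a mixer fills samples ahead of it.

class RingBuffer:
    def __init__(self, size):
        self.buf = [0] * size
        self.read_pos = 0    # absolute position the hardware is playing
        self.write_pos = 0   # absolute position the mixer has filled up to

    def headroom(self):
        """Samples written but not yet played."""
        return self.write_pos - self.read_pos

    def write_ahead(self, samples):
        """Predictable playback: fill far ahead, then the CPU can sleep."""
        for s in samples:
            self.buf[self.write_pos % len(self.buf)] = s
            self.write_pos += 1

    def inject_low_latency(self, samples, margin=2):
        """Urgent sound: overwrite the buffer just ahead of the read pointer."""
        pos = self.read_pos + margin
        for s in samples:
            self.buf[pos % len(self.buf)] = s
            pos += 1
        self.write_pos = max(self.write_pos, pos)

    def play(self, n):
        """Advance the read pointer, returning what was heard."""
        out = [self.buf[(self.read_pos + i) % len(self.buf)] for i in range(n)]
        self.read_pos += n
        return out

rb = RingBuffer(8)
rb.write_ahead([1, 1, 1, 1, 1, 1])   # music buffered well ahead of playback
rb.inject_low_latency([9, 9])        # notification rewritten just ahead
print(rb.play(6))                    # prints [1, 1, 9, 9, 1, 1]
```

The injected samples are heard after only two samples of delay, while the previously buffered music resumes afterwards; this is the compromise-free behaviour the post describes, and also why it depends on the driver reporting an accurate read pointer.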
There are currently up to twenty categories per game, and one page for each of them. Each of these twenty category pages is the same, only the data found within is different. At the very top, as with all pages, is the date selector, allowing you to refine your results by time period. This can show you how a specific event, such as a patch rollout, content update, marketing campaign, widely shared story in the media, etc. affected your game. The orange dropdown next to it provides several shortcuts to quickly select a date range, or you can instead click on the start and end date to bring up calendars with which you can select a custom date range. The date selector moves onto the left-hand panel as you scroll down the page so that you don't have to scroll back up to the top of the page to change the date range. Below that are the links to the other pages, as well as a simple visual to explain the sentiment ranges. The first data aggregate visualisation on this page is a pair of sentiment diamonds representing the current sentiment of the category and the average sentiment of the category over your selected date range. To the right of those is a line graph displaying how sentiment has changed over your date range compared to how it has across all of the other categories. Hovering over each point will show you the exact average sentiment score on either line. Depending on the game you're reviewing, the next visualisation could be a competitor comparison, showing you how the category sentiment compares to that of your game's competitors. Remember that you can ask us to add one or more competitor comparisons if you don't already have one and would like to see them in your data. There is a bar chart next, detailing the number of new interactions detected regarding the category per month, week or day, represented by each bar. This can be expressed as a percentage of the total, or as the raw numbers with a pair of buttons in the top left of the chart. 
Hovering over each bar with your cursor shows you how the numbers are broken down by channel, including the total interactions represented by the bar, numerically and as a percentage. Below the bar chart, as long as you didn't arrive on this category page via a link from a channel page, is a channel breakdown, detailing the number of interactions per channel, the average sentiment for each, and a comparative visualisation. If you did arrive via a channel page, you will only see the interactions pertaining to the channel that you arrived by, and therefore will not see the channel breakdown. Next is a table containing a list of topics. Each topic is a word or short phrase related to the category, sorted in order of how many times the topic is mentioned. The columns list the number of positive mentions, the number of negative mentions, the current sentiment, and the change in sentiment over your selected date range. There are also two buttons in the final two columns, the first of which sends you to a pre-filtered version of the Interaction Explorer at the bottom of the page (see the Interaction Explorer How To page for more details). The other opens up a new page dedicated entirely to that row's topic. Using the search box in the top right corner, you can search the table, which can stretch to quite a few pages, for topics. This new topic page is almost identical to the category page; its one major difference is that, just above the Interaction Explorer at the bottom of the page, there is a word cloud centered around the topic. With this word cloud, you can see which words are most closely associated with the topic, as well as a Most Recent Mentions list of the topic that Player XP has picked up. Clicking on any of the words in the cloud filters this Most Recent Mentions list to only the mentions that include the selected word. You can also access topic pages by clicking on the Subject Search link in the left-hand panel.
This will open up a search box where you can input a topic that you would like to directly navigate to, as well as filter that topic by category. If Player XP has a topic matching the search term that you typed in, you will be taken straight to it. In the footer of the category and topic pages is a button that links to our email address, as well as a button that will return you to your account page.
from typing import Tuple

import numpy as np
import png
from skimage.transform import resize


def load_world(filename: str, size: Tuple[int, int], resolution: int) -> np.ndarray:
    """Load a preconstructed track to initialize world.

    Args:
        filename: Full path to the track file (png).
        size: Width and height of the map.
        resolution: Resolution of the grid map, i.e. into how many cells
            one meter is divided.

    Returns:
        An initialized gridmap based on the preconstructed track as an
        n x m dimensional numpy array, where n is the width (num cells)
        and m the height (num cells) - (after applying resolution).
    """
    width_in_cells, height_in_cells = np.multiply(size, resolution)
    world = np.array(png_to_ogm(filename, normalized=True, origin='lower'))

    # If the image is already in our desired shape, no need to rescale it.
    if world.shape == (height_in_cells, width_in_cells):
        return world

    # Otherwise, scale the image to our desired size. Note that skimage's
    # resize takes the output shape as (rows, cols), i.e. (height, width),
    # matching the shape check above.
    resized_world = resize(world, (height_in_cells, width_in_cells))
    return resized_world


def png_to_ogm(filename, normalized=False, origin='lower'):
    """Convert a png image to an occupancy grid map.

    Inspired by https://github.com/richardos/occupancy-grid-a-star

    Args:
        filename: Path to the png file.
        normalized: Whether to normalize the data, i.e. to be in the
            value range [0, 1].
        origin: Point of origin (0, 0).

    Returns:
        2D array.
    """
    r = png.Reader(filename)
    img = r.read()
    img_data = list(img[2])

    out_img = []
    bitdepth = img[3]['bitdepth']
    for i in range(len(img_data)):
        out_img_row = []
        for j in range(len(img_data[0])):
            if j % img[3]['planes'] == 0:
                if normalized:
                    out_img_row.append(img_data[i][j] * 1.0 / (2 ** bitdepth))
                else:
                    out_img_row.append(img_data[i][j])
        out_img.append(out_img_row)

    if origin == 'lower':
        out_img.reverse()

    return out_img
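The width/height bookkeeping is the subtle part of load_world: numpy arrays are indexed (row, column), i.e. (height, width). A small self-contained sketch of that convention, with made-up sizes for illustration:

```python
import numpy as np

# Hypothetical map: 4 m wide, 2 m tall, at 10 cells per meter.
size = (4, 2)            # (width, height) in meters
resolution = 10
width_in_cells, height_in_cells = np.multiply(size, resolution)

# A numpy image is indexed (row, col) = (height, width), so a correctly
# shaped world grid has height_in_cells rows and width_in_cells columns.
world = np.zeros((height_in_cells, width_in_cells))
print(world.shape)  # (20, 40)
```

This is why both the shape check and the resize target in load_world should use the (height_in_cells, width_in_cells) ordering.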
Sniper435 wrote: could either Philpoole or koalo fill me in with what changes or driver options need to be supplied to run the Pi as an I2S slave?

With the ASoC driver it is quite easy - you only have to specify the correct format in the machine driver. I can help you to build a machine driver, but first I have to understand your setup.

Sniper435 wrote: From what I understand (and more importantly what my collaborator and hardware wizard understands) we're all set up in terms of supplying the correct clock to the Pi (we are using a DIR9001 with an external crystal to generate and supply the clocks both for the pi and for the DACs which are PCM5141) ideally we want to be able to run at 48KHz and 24bits.

Why are you using a DIR9001 as well as a PCM5141? Are you using the DIR9001 only for clock generation (this seems bloated - even if you want to use an external clock) or is there anything else that this chip does for you?

Code:

git init
git fetch git://github.com/koalo/linux.git rpi-3.8.y-asocdev:refs/remotes/origin/rpi-3.8.y-asocdev
git checkout rpi-3.8.y-asocdev

Code:

sudo modprobe -a snd_soc_bcm2708 snd_soc_bcm2708_i2s bcm2708_dmaengine snd_soc_tda1541a snd_soc_rpi_tda1541a

mhelin wrote: Great someone has took time for implementing the ASoC driver support. Is the RPi kernel version 3.8.x absolutely required? Got to upgrade the kernel first then.

3.8 is not absolutely required; it should also be possible to apply those patches to the 3.6 branch. However, I would like to switch to 3.10 sooner or later, because it will have some new features in the ASoC core. Why are you restricted to the 3.6 branch?

koalo wrote: I have not tested it....

It should have been tested..... I forgot to register the device. I am sorry!
Code:

$ aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: ALSA [bcm2835 ALSA], device 0: bcm2835 ALSA [bcm2835 ALSA]
  Subdevices: 8/8
  Subdevice #0: subdevice #0
  Subdevice #1: subdevice #1
  Subdevice #2: subdevice #2
  Subdevice #3: subdevice #3
  Subdevice #4: subdevice #4
  Subdevice #5: subdevice #5
  Subdevice #6: subdevice #6
  Subdevice #7: subdevice #7
card 1: sndrpitda1541a [snd_rpi_tda1541a], device 0: TDA1541A HiFi tda1541a-hifi-0
  Subdevices: 1/1
  Subdevice #0: subdevice #0

steveha wrote: My intention is to set up raspberry PI to connect my NOS TDA1541A dac via I2S.

I am happy to see more and more people working on this topic!

steveha wrote: When performing modprobe, I come across 'bcm2708_dmaengine' is missing.

Oh, yes, you find the corresponding setting under

steveha wrote: Now, I have the correct 'aplay -l' screen. However, 'card 1: sndrpitda1541a [snd_rpi_tda1541a], device 0: TDA1541A HiFi tda1541a-hifi-0' disappeared after poweroff.

That is normal - you have to repeat the modprobe after each reboot. Later this should be done via modules.conf, but that is another topic...

Wavelength wrote: So basically to get I2S working on the GPIO port we need to get GPIO19 (PCM_FS) out. So what is needed to do that? Thank you

You buy a revision 2 board and solder a pin header at P5. Then you have PCM_FS on pin 4. (Maybe I misunderstood your question...)

steveha wrote: Well done. I would like to inform you that your drivers are really great. I have been suffering from USB packet loss for a long period. Your I2S driver is just another story - smooth and noise free.

It is great that someone - besides me - gets my driver running! Maybe I have to disappoint you a little bit: the TDA1541A only supports 16 bit; the 24 bit material is rounded inside of ALSA. However, 96 kHz should be ok.
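On the "repeat the modprobe after each reboot" point: one way to automate this on Raspbian is to list the modules in /etc/modules, which are loaded at boot. This is only a sketch using the module names from this thread; the exact file location and module names may differ on your image:

```
# /etc/modules — kernel modules to load at boot time, one per line
snd_soc_bcm2708
snd_soc_bcm2708_i2s
bcm2708_dmaengine
snd_soc_tda1541a
snd_soc_rpi_tda1541a
```

After editing the file, the modules should come up automatically on the next reboot, so the manual modprobe is no longer needed.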
steveha wrote: So far, I have no problems on streaming 44/16 and 96/24 materials.

I never tested that, and since there is a little trick for multiples of 8000 Hz, it is great to hear that it seems to work!

steveha wrote: BTW, is there any limitation on the bandwidth (e.g. 176.4)? I will perform some more testing and let you know the results.

I don't know about any limitation. If you can find one, it would be nice to know! (What is this 176.4?) In fact the driver currently runs at 40fs for 16 bit (that is the trick to get better clocks out of the Raspberry Pi). I don't know if that is a problem for some codecs, but let me know if it is. Furthermore, I think you should use an external clock source (or the one from a codec) if you are going for such high quality. I don't think that the Raspberry Pi has such a good clock (but I don't know...).

steveha wrote: The main difference between them is ESS9018 is running on 64fs bit clock I2S and TDA1541A only does 48fs.

steveha wrote: 176.4 khz or 192 khz

I didn't even know that there are devices with such a high sampling rate. I will enable that inside the driver and maybe you have luck.
How to design an interface to call functions with a parameter of varying type polymorphically?

Say we want to be able to call functions run of ImplementationA and ImplementationB polymorphically (dynamically at runtime) through a yet-to-be-designed interface in this example:

struct Input {};

struct MoreInputA {};
struct ImplementationA {
    void run(Input input, MoreInputA more_input);
};

struct MoreInputB {};
struct ImplementationB {
    void run(Input input, MoreInputB more_input);
};

Both take some Input in the same format, but some MoreInput in different formats. ImplementationA and ImplementationB can only do their job on their specific MoreInputA and MoreInputB respectively, i.e. those input formats are fixed. But say MoreInputA/B can be converted from a general MoreInput. Then a simple polymorphic version could look like this:

struct Input {};
struct MoreInput {};

struct ImplementationBase {
    virtual void run(Input input, MoreInput more_input) = 0;
};

struct MoreInputA {};
MoreInputA convertToA(MoreInput);

struct ImplementationA : public ImplementationBase {
    void run(Input input, MoreInputA more_input);

    void run(Input input, MoreInput more_input) override {
        run(input, convertToA(more_input));
    }
};

// same for B

However, now the more_input has to be converted in every call to run. A lot of unnecessary conversions are forced on a user of the polymorphic interface if they want to call run repeatedly with varying input but always the same more_input.
To avoid this, one could store the converted MoreInput inside of the objects:

struct Input {};
struct MoreInput {};

struct ImplementationBase {
    virtual void setMoreInput(MoreInput) = 0;
    virtual void run(Input input) = 0;
};

struct MoreInputA {};
MoreInputA convertToA(MoreInput);

struct ImplementationA : public ImplementationBase {
    void run(Input input, MoreInputA more_input);

    MoreInputA more_input_a;

    void setMoreInput(MoreInput more_input) override {
        more_input_a = convertToA(more_input);
    }

    void run(Input input) override {
        run(input, more_input_a);
    }
};

// same for B

Now it is possible to do the conversion only when the user actually has new MoreInput. But on the other hand, the interface is arguably more difficult to use now. MoreInput is not a simple input parameter of the function anymore, but has become some sort of hidden state of the objects, which the user has to be aware of. Is there a better solution that allows avoiding conversion when possible but also keeps the interface simple?

What do you actually mean when you say "be able to call functions run of ImplementationA and ImplementationB polymorphically"? When the user needs to know whether to provide either MoreInputA or MoreInputB then it isn't polymorphic. What would the calling code look like?

@463035818_is_not_a_number An example of some calling code for the first suggested option could be

Input input = /*...*/;
MoreInput more_input = /*...*/;
ImplementationBase& impl = /*...*/;
impl.run(input, more_input);

where impl could be a reference to an object of type ImplementationA or ImplementationB.

@463035818_is_not_a_number I am aware that there is nothing to be called polymorphically in the first code snippet. I edited the first sentence to try to clarify that. The question is indeed how to best add a polymorphic interface on top of the example in the first snippet. Two possible options for that are given in the 2nd and 3rd snippets.
Since you want to hide the complexity behind your interface and differentiate through polymorphism, I think this is a nice candidate for the bridge design pattern.

If you want to call a method that exists only on A, then cast it to A - why not? And IMO it makes no sense that MoreInput can be converted to MoreInputA and MoreInputB without any additional information when they're three different classes...
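One possible middle ground, sketched below, keeps the simple run(Input, MoreInput) interface from the second snippet but caches the converted input inside each implementation, re-converting only when the MoreInput actually changes. The version counter on MoreInput is an invented detail for this sketch (any cheap change-detection mechanism, such as comparing the data itself, would do):

```cpp
#include <cassert>

struct Input {};

struct MoreInput {
    int version = 0;  // hypothetical: bumped by the caller when data changes
    int payload = 0;
};

struct MoreInputA {
    int payload = 0;
};

static int conversions = 0;  // instrumentation for this sketch only

MoreInputA convertToA(const MoreInput& m) {
    ++conversions;
    MoreInputA a;
    a.payload = m.payload;
    return a;
}

struct ImplementationBase {
    virtual ~ImplementationBase() = default;
    virtual void run(Input input, const MoreInput& more_input) = 0;
};

struct ImplementationA : public ImplementationBase {
    // The type-specific entry point, as in the question.
    void run(Input /*input*/, const MoreInputA& /*more_input*/) {
        // ... real work on MoreInputA ...
    }

    // The polymorphic entry point: convert only when the input changed.
    void run(Input input, const MoreInput& more_input) override {
        if (!has_cache_ || cached_version_ != more_input.version) {
            cache_ = convertToA(more_input);
            cached_version_ = more_input.version;
            has_cache_ = true;
        }
        run(input, cache_);
    }

private:
    bool has_cache_ = false;
    int cached_version_ = 0;
    MoreInputA cache_;
};
```

The trade-off versus the setter-based design is that the caller still passes MoreInput explicitly on every call (no hidden state in the interface), while the caching stays an implementation detail of each derived class.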
If you are in the market for the best programming languages, you want the best-paying ones, to recoup your money and set the pace for the future. A great programming language is one that is in demand and will remain valuable even as things shift. One way to decide what to study is to look at the market and what is in demand, and then use that to forecast the future. Languages that can be extended into several other uses are always in demand, as experts will be using them as the foundation for future programs. Learning them will give you the edge you need in an ever-changing tech world. Here are the top 10 highest-paying programming languages on the market today.

Erlang is one of the highest-paying languages, with an average annual gross income of $138,000. It is used to develop scalable real-time systems applications, used primarily by telecoms, banks, and eCommerce. The term is used interchangeably with Open Telecom Platform (OTP), and you can learn it online, as it is offered for a fee. If you already have a background in the other programming languages we discuss here, it takes about a week to start writing nontrivial code in Erlang. You will need a few more months of constant practice to internalize it further and write more scalable code. You may have to complete a few written assignments along the way, with services such as Edusson.com making it possible to finish the course on time. The time it takes to fully learn this language depends on an individual's learning curve and their previous experience and knowledge.

It is estimated that over 80% of developers use Python, showing just how popular and versatile it is on the market. It pays well too, and several organizations use it for various activities, all of which makes it highly sought-after by those in the industry. You can learn Python online, as several accredited sites offer it from the basics up to expert level.
This unique name for one of the most popular programming languages to learn came from the BBC comedy series Monty Python's Flying Circus, and today most programmers recommend it highly to anyone who wants to climb the career ladder. It is used in website design, machine learning, software testing, and several other areas.

When JavaScript came onto the market in 1995, it was known as LiveScript and was often called the younger brother of Java, with which it is sometimes confused. Though the two languages are similar in some ways, they are distinct. You can earn an average of $112,152 in annual income with this language.

C# is an object-oriented language mainly used to organize objects around a design. Most users like its usability and stability, and it is easier to learn than C and C++. You want to advance from C and C++ to C# to improve your coding skills and design development, especially in the current world where design has taken center stage. Its compatibility with Windows and Linux makes it the ideal language for those interested in GUI-based desktop applications. Learning this language is much easier when you have the others as predecessors and foundation, and these days several accredited sites and online schools offer it at reasonable cost.

This next language has been used to develop mobile and enterprise software for Android and iOS apps. The gaming industry uses it regularly too, making it an in-demand skill at all times.

As the primary language of WordPress, PHP is used by 78% of websites, making it a must-learn for back-end developers. Some career options for coders with this and other skills include web and app development, both continually sought-after and profitable now and in the future. Being an open-source product, its support community is large and well-informed, so anyone who needs the help of their peers gets it in record time.
As one of the oldest programming languages in the industry, it gives you lots of material to learn from if you are getting started. Which Programming Language Should You Learn? Start with the simplest to form a foundation for the more complex ones. It helps not to settle for only one language, as the market is constantly growing, with new languages being introduced.
How to do multithreading or multiprocessing while calling functions in Python

I have the sample dictionary below, and I need to find the count of each attribute. I need to implement this using multithreading. total(todos) is the main function; inside it, three other functions are called, which I need to implement using multithreading. Right now it runs sequentially: first userid_count(todos), followed by title_count(todos) and complete_count(todos). Since each of these functions is independent of the others, I want to use multithreading/multiprocessing.

todos = [{'userId': 1, 'id': 1, 'title': 'A', 'completed': False},
         {'userId': 1, 'id': 2, 'title': 'B ', 'completed': False},
         {'userId': 1, 'id': 1, 'title': 'C', 'completed': False},
         {'userId': 1, 'id': 2, 'title': 'A', 'completed': True},
         {'userId': 2, 'id': 1, 'title': 'B', 'completed': False}]

def total(todos):
    user_count = userid_count(todos)  # Multithreading needs to be implemented here
    title = title_count(todos)
    completed = complete_count(todos)
    search_count_all = {**user_count, **title, **completed}
    return search_count_all

def userid_count(todos):
    # return userid count; there are 2 user ids: 1, 2
    # {"userid": 2}
    pass

def title_count(todos):
    # return title count; there are 3 titles: A, B, C
    # {"title": 3}
    pass

def complete_count(todos):
    # return completed count of True and False
    # {"True": 1, "False": 3}
    pass

total(todos)

Expected output is [{"userid": 2}, {"title": 3}, {"True": 1, "False": 3}]

Is this a homework assignment? It's OK if it is, we just need to know, because there's really no point in doing multithreading or multiprocessing for this problem; it's almost certainly going to make your program more complicated and slower.

@Blckknght It's not a homework problem; I made it smaller. My actual scenario works with a high volume of data, so each function takes some time, and I want to try multithreading.

Because of the GIL, multithreaded code in Python doesn't ever run in parallel.
At best it can let you do multiple IO-limited tasks interleaved (like reading data from several network connections into memory). For a CPU-limited task (like counting something in memory), it's just overhead. Multiprocessing can run things in parallel on different CPUs on your system. But there's a lot of overhead, since all the data you need the code to process needs to be copied between the processes. Often that overhead is greater than what you save by parallel processing.

@Blckknght, do you mean to say that there is no way to run each function in parallel even if they are independent of each other? Sorry for asking questions; I'm not good at parallel processing.

You can do this using multithreading.

import threading, queue

user_count_result = queue.Queue()
title_count_result = queue.Queue()
complete_count_result = queue.Queue()

def total(todos):
    user_countThread = threading.Thread(target=userid_count, args=(todos, user_count_result,))
    titleThread = threading.Thread(target=title_count, args=(todos, title_count_result,))
    completedThread = threading.Thread(target=complete_count, args=(todos, complete_count_result,))
    user_countThread.start()
    titleThread.start()
    completedThread.start()
    user_countThread.join()
    titleThread.join()
    completedThread.join()
    search_count_all = {**user_count_result.get(), **title_count_result.get(), **complete_count_result.get()}
    return search_count_all

def userid_count(todos, user_count_result):
    # Your logic here
    user_count_result.put({"userid": 2})

def title_count(todos, title_count_result):
    # Your logic here
    title_count_result.put({"title": 3})

def complete_count(todos, complete_count_result):
    # Your logic here
    complete_count_result.put({"True": 1, "False": 3})

There are three functions - user_count, title, completed are the function names - so what are user_count_result, title_count_result, complete_count_result on lines 3, 4, 5?

I have updated the answer; you can check it.

What is user_count_result.put({"userid": 2}), and on line 6 there is user_count_result - what does it mean?

Whatever result you want to return from that function, you can pass it there.

Line number 2 and line number 6 - can you please explain "user_count_result"?

My user_count_result is None, so it did not execute successfully.
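For completeness, the same pattern can be written more compactly with concurrent.futures, which returns each function's result directly instead of passing queues around. This is a sketch with the counting logic filled in from the sample data (note that the sample todos actually contain four False entries, and that 'B ' carries a trailing space, which is stripped here):

```python
from concurrent.futures import ThreadPoolExecutor

todos = [{'userId': 1, 'id': 1, 'title': 'A', 'completed': False},
         {'userId': 1, 'id': 2, 'title': 'B ', 'completed': False},
         {'userId': 1, 'id': 1, 'title': 'C', 'completed': False},
         {'userId': 1, 'id': 2, 'title': 'A', 'completed': True},
         {'userId': 2, 'id': 1, 'title': 'B', 'completed': False}]

def userid_count(todos):
    return {"userid": len({t['userId'] for t in todos})}

def title_count(todos):
    return {"title": len({t['title'].strip() for t in todos})}

def complete_count(todos):
    done = sum(1 for t in todos if t['completed'])
    return {"True": done, "False": len(todos) - done}

def total(todos):
    # Run the three independent counters concurrently and merge the results.
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = [pool.submit(f, todos)
                   for f in (userid_count, title_count, complete_count)]
        merged = {}
        for future in futures:
            merged.update(future.result())
    return merged

print(total(todos))  # {'userid': 2, 'title': 3, 'True': 1, 'False': 4}
```

As noted in the comments above, the GIL means this will not speed up CPU-bound counting; swapping ThreadPoolExecutor for ProcessPoolExecutor runs the functions on separate CPUs, at the cost of pickling the data between processes.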
Not getting values shown in Results window when using Open With Excel feature Description When I run a query and get results, I use the Open With Excel feature. When I view the results in Excel, I am getting different values than what was in the Results window. And if I use the Open With Excel feature again on the same exact table in the Results window, I still get incorrect values, but a different set of incorrect values. This is obviously very concerning. DBeaver Version Community 22.3.4 (current version) Operating System Windows 10 Enterprise Database and driver No response Steps to reproduce I connect to a database I run a query that I created I right-click on the results and click Open With Excel I compare the results to what's in Excel and they don't match Additional context Since I am using the Community edition, I have to install the Office extension every time there is an update to DBeaver. Hello @ginevra12 It is not clear - what is the difference between representations? Please, provide examples. Let's say my output was one column of numbers and they were 1, 2, 3, 4. When I right-clicked on the output and used Open With Excel, and the Excel spreadsheet opened with the column of values, I would not always get 1, 2, 3, 4. I would get random output. Here is one more thing we just discovered after I posted the issue that may focus what's going on, or muddy the waters :)... If I include an ORDER BY statement in my query and then use the Open With Excel feature, the results in Excel will match the results shown in DBeaver. So again, if I don't use the ORDER BY statement, I was sometimes getting wrong results in Excel. If I do use the ORDER BY statement, I was getting correct results in Excel. My apologies, in my original post, in the steps-to-recreate section, #4 was incomplete. I updated it to say the following... "4. I compare the results to what's in Excel and they don't match" Ok. Were you able to use the Export to XLSX file?
Does the result also differ from the table? Yes. After more investigating, it appears that the issue may be isolated to the following: It happens on columnstore tables that have significant numbers of obs (e.g., > 50M) AND an "Order by" statement is not used. I did NOT get different results (between what was shown in the Results window and what was put into Excel via the Open With Excel feature) when I did the following… Ran the query against the innodb version of the same large tables. Ran a similar query against a columnstore table that had < 200K obs. Here's a made-up example… Table name: sample_data Field in table: names Number of obs in table: 80M Table exists in columnstore and innodb If I run the query shown below against the columnstore version of the table, and put the results into Excel via Open With Excel, I get different values than what's shown in the Results window in the IDE. If I run the query shown below against the innodb version of the table, and put the results into Excel via Open With Excel, I get the same values as what's shown in the Results window in the IDE. SELECT names FROM sample_data LIMIT 1000 ; If I run the query shown below against the columnstore version of the table, and put the results into Excel via Open With Excel, I get the same results as what's shown in the Results window in the IDE (notice that the difference between this query and the one above is the addition of the "Order by" statement): SELECT names FROM sample_data Order by names LIMIT 1000 ; My expectation is that whatever I see in the Results window in the IDE should show up in Excel with the same values and in the same order. Thank you for your help. The results shown in Excel are based not on the results shown in DBeaver, but are taken from the results of running the query again. It seems that your table doesn't have an index, so the results may vary in every query. Would you get the same result if you use a SORT BY clause? My apologies for not responding sooner.
So to be clear, in certain circumstances, the values shown in the DBeaver results window are not matching what's in Excel when using the Open With Excel feature (a feature which I really like and hope y'all enhance). When you asked whether I would get the same results if I used a "sort by" clause, I wasn't aware MySQL had a "sort by" clause. I looked it up on the internet and couldn't find a "sort by" clause. If by "sort by" you mean "order by", then yes, I get the same results in Excel as in the DBeaver results window. When I used an "order by" clause, the values shown in the Results window do indeed match the results in Excel shown when I use the Open With Excel feature (as mentioned in my last entry above, using the example with the order by statement). Why would the values put into Excel ever not match what's in the DBeaver results window? Unless I'm missing something, I would expect the values shown in Excel when using the Open With Excel feature to always match the values shown in the DBeaver results window. I guess I'm thinking the Open With Excel feature is akin to copying and pasting the results shown in the Results window into an Excel file, but doing it in a more succinct way (by creating a new Excel file and ensuring the values are put into the worksheet correctly... e.g., values with leading zeroes will always have the leading zeroes shown). If the above isn't clear, I'd be willing to discuss this via a phone call. Thanks! We rerun the query if not all data was fetched. So, it works as expected. Thanks Elizabeth. I appreciate the response/clarification. Have a great day! :)
Real Time Clock On 20×4 I2C LCD Display with Arduino Sometimes it may be necessary to use a display while making a hardware project, but the size and the type of the display may vary according to the application. In a previous project, we used a 0.96″ I2C OLED display, and in this project we will have an I2C 20×4 character display. This tutorial will describe how to use a 20×4 LCD display with Arduino to print a real-time clock and date. This liquid crystal display has 4 lines, 20 characters in each line, and cannot be used to display graphics. The main feature of this display is that it uses an I2C interface, which means that you will need only two wires to connect it with Arduino. At the back side of the screen there is a small PCB soldered to the display; this circuit is a serial LCD 20×4 module, and it also has a small trimpot to adjust the contrast of the LCD. The display's backlight is blue and the text is white. It is fully compatible with Arduino and has a 5V input voltage. Its I2C address could be 0x27 or 0x3F. You can get it for about $7 from the Banggood store. The DS3231 is a low-cost, accurate I2C real-time clock (RTC), with an integrated temperature-compensated crystal oscillator (TCXO) and crystal. The device incorporates a battery input, so that if power is disconnected it maintains accurate time. The RTC maintains seconds, minutes, hours, day, date, month, and year information. For months with fewer than 31 days, the end date is automatically adjusted, including corrections for leap years. The clock operates in either a 24-hour format or a 12-hour format with an AM/PM indicator. It provides two configurable alarms and a programmable square-wave output. Address and data are transferred serially through a bidirectional I2C bus. This RTC module operates at an input voltage range between 3.3V and 5.5V, so it can be connected to 3.3V or 5V pins. It is available on the Banggood store for about $2.
Connecting the LCD with Arduino UNO At first we will connect the LCD with Arduino to display some text and to learn how it works. Connect the GND with Arduino GND, VCC with the 5V pin on Arduino, SDA with the A4 pin, and finally SCL with the A5 pin. First we need to download the library for the display, which includes all required functions to configure and write on the display. You can find it here. Unzip the library and add it to the Arduino libraries folder, then run the Arduino IDE and copy the following code. The first two lines are to include both the I2C and LCD libraries. lcd.setCursor(3,0) will set the cursor of the LCD to the specified location, the first argument for the column and the second for the row, starting from 0. lcd.print(" ") will print the given text at the current cursor position; be careful, as overflowing characters will be discarded. Printing Date & Time on The LCD Now we will use the RTC module with the LCD to print the current date and time, each of them in a line with a dashed border around them. Here we will use a small breadboard to connect the RTC module and display with the Arduino's I2C pins (A4 and A5). The SCL pins are connected to analog pin 5 and the SDA pins to analog pin 4. The top rail of the breadboard is used as the I2C bus and the bottom one is the power bus. Connect both the display and the RTC module to the 5V and GND pins, and now the circuit is ready. Before we start we have to download the RTC library and set its time. The required library is available at github. Download it and extract it into the Arduino libraries folder, then open the Arduino IDE and from the examples choose 'setTime' from the DS1307 library. Finally, upload it while the RTC module is connected to the Arduino, and it will set its time to the computer time. In addition to the setup and loop functions, we will create four other functions to organize the code. As the corners and vertical lines of the frame are special characters, we have to create them manually.
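Going back to the first LCD test described above, a minimal sketch could look like the following. This is only a sketch, assuming the widely used LiquidCrystal_I2C library and the 0x27 address (yours may be 0x3F, as noted earlier):

```cpp
#include <Wire.h>
#include <LiquidCrystal_I2C.h>

// Address, columns, rows — adjust 0x27 to 0x3F if your module uses it.
LiquidCrystal_I2C lcd(0x27, 20, 4);

void setup() {
  lcd.init();           // initialize the display over I2C
  lcd.backlight();      // turn on the blue backlight
  lcd.setCursor(3, 0);  // column 3, row 0 (both start from 0)
  lcd.print("Hello, world!");
}

void loop() {}
```

If nothing appears, try the other I2C address and adjust the contrast trimpot on the back of the module.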
So we will use a function to create them and another one to print them on the LCD. Inside the loop function the time will be read from the real-time clock module and then printed to the LCD using a custom function for each of time and date. Now, let's describe each part of the code: At first, we have to include the three libraries - I2C, LCD, and RTC - and set the LCD address. Inside the setup function the display is initialized, then we call the createCustomCharacters() function and print them. Each character can be 5 pixels wide and 8 pixels tall. So to create a custom character we need to create a new byte array. We need 5 characters: the vertical line and the four corners. The yellow pattern shows you how the character will be displayed on the LCD. Inside the createCustomCharacters() function, we called the lcd.createChar(#, byte array) function. The LCD supports up to 8 custom characters numbered from 0 to 7. It will assign the index in the first argument to the character given by the byte array. To print this character we can use the lcd.write(byte(#)) function. Now, after preparing our characters, we can print the frame. This function is very simple; it uses lcd.setCursor(#,#) to move the cursor and lcd.print("") to print the given string. The function will print the top and bottom horizontal lines, then print the other custom characters. As we discussed earlier, the loop function will get the current time and date every second and refresh them on the display. First we defined a time element "tm" which holds the current time data; then, if the time is correct and the RTC module is working fine, the time and date will be printed. We can add some instructions so that, if the DS1307 is stopped or there is a circuit error, we can light a LED to indicate the problem. The loop will wait for 1 second before starting the next iteration. The printTime function uses three arguments: the column and line where it will print the time, and the time element.
lcd.print(tm.Hour) prints the hour; then, if the minutes or seconds are less than 10, we add a leading 0. The same method is used to print the date. Now everything is ready: upload the code to your Arduino and enjoy watching your new clock. You can find the full Arduino sketches and libraries in the attachment below. This tutorial was made by the educ8s.tv channel, and you can find the tutorial video below:
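To make two of the tricks above concrete (the 5x8 custom-character byte arrays and the leading-zero padding), here is a plain C++ sketch. The function and array names are illustrative, not taken from the tutorial's code, and the pattern shown is just one plausible vertical-line character:

```cpp
#include <cstdint>
#include <string>

// Hypothetical 5x8 pattern for the frame's vertical-line custom character:
// each byte is one row; the HD44780 uses only the low 5 bits of each byte.
const uint8_t kVerticalLine[8] = {
    0b10000, 0b10000, 0b10000, 0b10000,
    0b10000, 0b10000, 0b10000, 0b10000,
};

// Render a pattern as ASCII art ('#' = lit pixel) to preview a character
// before registering it on the LCD with lcd.createChar(index, pattern).
std::string renderPattern(const uint8_t pattern[8]) {
    std::string out;
    for (int row = 0; row < 8; ++row) {
        for (int col = 4; col >= 0; --col)
            out += ((pattern[row] >> col) & 1) ? '#' : '.';
        out += '\n';
    }
    return out;
}

// Zero-pad minutes/seconds below 10, as the sketch does before lcd.print().
std::string twoDigits(int value) {
    return (value < 10 ? "0" : "") + std::to_string(value);
}

// Assemble an H:MM:SS string the way the clock prints it, line by line.
std::string formatTime(int hour, int minute, int second) {
    return std::to_string(hour) + ":" + twoDigits(minute) + ":" + twoDigits(second);
}
```

On the Arduino itself, such a pattern would be registered with lcd.createChar(0, kVerticalLine) and drawn with lcd.write(byte(0)), as described above.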
Traffic Graphs: Cannot get data about interface

On our pfSense 2.3.2-RELEASE on an APU2, the Traffic Graphs widget on the Dashboard loses graphing after several hours, instead displaying the error "Cannot get data about interface" followed by the interface name, e.g. igb0. In the system log we see nothing out of the ordinary. This seems to be a recurrence of the problem discussed in [https://forum.pfsense.org/index.php?topic=57113.0](https://forum.pfsense.org/index.php?topic=57113.0), although that was declared solved in 2013 with v2.0.3. Any suggestions on what to do or where to look further? Some specific log or filter expression?

Clarification: we used Firefox v47.0.1 and will test with IE v11.0.9600.18500 on Win8.1pro. Mostly, graphing in FF stopped after running overnight. Going through the measures taken in the old thread, we got these results: Reloading the Dashboard page, i.e. logging in to it again, resolved the issue for the time being. /tmp/php_errors.txt does not contain any data. The package manager reports no additional packages installed. In other news, apparently nginx is now the main web server; a possibly relevant error appears in /var/log/nginx-error.log:

```
2016/10/04 09:10:26 [error] 16350#100143: *13129 upstream timed out (60: Operation timed out) while reading response header from upstream, client: 192.168.111.100, server: , request: "GET /ifstats.php?if=igb0 HTTP/1.1", upstream: "fastcgi://unix:/var/run/php-fpm.socket", host: "192.168.111.1:80", referrer: "https://192.168.111.1:80/graph.php?ifnum=wan&ifname=WAN&timeint=10&initdelay=2"
```

Well, IE v11 didn't fare any better than Firefox; after running over the weekend, both complained of that same error. Will still check with the Chrome-based Vivaldi as well. Meanwhile, on reload all browsers showed:

```
502 Bad Gateway
nginx
```

Under /var/log, neither system.log nor nginx-error.log had entries pertaining to that. A reboot fixed this, so we're now looking at Vivaldi for the time being.
Vivaldi, and thus by implication Chrome, didn't fare any better. Same old soup, "Cannot get data about interface" on any interface graph we'd care to look at. It seems browser-independent now. That timeout accessing fastcgi://unix:/var/run/php-fpm.socket (in the error log from Oct. 28) looks weird though, now doesn't it?
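The 60-second upstream timeout in that log matches nginx's default fastcgi_read_timeout, so one diagnostic avenue is raising it for the PHP handler. This is only a sketch for experimentation: pfSense generates its nginx configuration itself, so a manual edit like the following may be overwritten, and the location/socket shown are taken from the log above rather than from the actual pfSense config:

```nginx
# Sketch only: raise the FastCGI read timeout above the 60 s default
# that matches the "upstream timed out (60: Operation timed out)" error.
location ~ \.php$ {
    fastcgi_pass unix:/var/run/php-fpm.socket;
    fastcgi_read_timeout 300;   # allow slow ifstats.php responses
}
```

If the graphs then survive longer before failing, that would point at slow ifstats.php responses from php-fpm rather than at the browsers.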
Key and tempo of Whatever Happened to My Part. Diva's 1997-09-19 · Directed by Peter Cattaneo. With Robert Carlyle, Tom Wilkinson, Mark Addy, Wim Snape. Six unemployed steel workers form a male striptease act. The women cheer them on to go for "the full monty" - total nudity. With music and lyrics by David Yazbek, The Full Monty has crowd-pleasing tunes like 'Life with Harold' and 'Breeze off the River'. May 9, 2019: A slew of stage stars recently took part in an industry presentation of the 2001 Tony-nominated musical The Full Monty, producer Tom Kirdahy. Feb 16, 2020: Tony-nominated musical 'The Full Monty' opens month-long run in San Francisco. SAN FRANCISCO, Calif. (KRON) – The Full Monty is baring it ... Nominated for 9 Tony Awards including Best Musical, Best Original Score, and Best ... Based on the cult hit film of the same name, The Full Monty follows six ... The Full Monty (musical). 160 likes. The Full Monty is a musical with a book by Terrence McNally and a score by David Yazbek. "A one-of-a-kind Broadway musical. I loved it!" Clive Barnes, The New York (Applause Libretto Library). Musical Mondays welcomes the cast of The Full Monty (playing now through March 15th at The Victoria Theater!) Performances start at 10pm with 2-4-1 drinks. Musical The Full Monty. Public. The musical of the same name likewise became a worldwide hit. Portfolio - Theatre - Light & Decor. tmp.org (253) 565-6867. Parking in the Park and Ride across the street. FULL MONTY SYNOPSIS: Georgie Bukatinsky bounds onto the stage of Tony Giordano's club and welcomes us to Girls' Night Out. While her husband is at home doing the dishes, she introduces us to the featured attraction of the evening - Buddy "Keno" Walsh - the personification of male physical perfection in an expensive business suit, though not for long.
Annie Golden - Georgie Bukatinsky; Patrick Wilson - Jerry Lukowski; John Ellison Conlee - Dave Bukatinsky; Marcus Neville - Harold Nichols; Emily Skinner - Vicki Nichols; Jason Danieley - Malcolm MacGregor; Romain Fruge - Ethan Girard; Lisa Datz - Pam Lukowski; Kathleen Freeman - Jeanette Burmeister; Andre De Shields - Noah. The Full Monty by Simon Beaufoy. Productions: 2000 San Diego, 2000 Broadway, 2002 West End, 2003 Copenhagen, 2004 Melbourne, 2005 Liberec, 2006-07 Seoul, 2008 South Africa Tour, 2009 West End revival, 2009 Millburn, 2009-10 Netherlands Tour, 2013 Italian Tour, 2013 Makati, 2013 Paris, 2014 Tokyo, 2017 Melbourne revival. Music from the Motion Picture Soundtrack 'The Full Monty'. RCA is a registered trademark of General Electric Company, USA. Published and distributed by BMG Music Spain S.A., a BMG Entertainment company. Sunflower Productions: The Full Monty – The Broadway Musical. Dates: from Wednesday 26 February 2020 until Saturday 29 February 2020. See below for available dates and times. SHARE ON … The Full Monty is a musical with a book by Terrence McNally and a score by David Yazbek. A British film about six out-of-work Sheffield steelworkers with nothing to lose took the world by storm! Based on his smash hit film and adapted for the sta... The Full Monty (1997) Soundtracks on IMDb: memorable quotes and exchanges from movies, TV series and more. Berry is unemployed. The Full Monty MusikalNet: The Full Monty is no longer available · Tips on similar products · -10% · Top 10 Musicals · Similar categories.
Dag Joakim Tedson Nätterqvist is a Swedish actor, theatre director, musical artist, and singer ... Dostoyevsky's The Idiot, Stockholm City Theatre · Mio, My Son, Stockholm City Theatre · The Full Monty (musical), Stockholm City Theatre. 6 Jan 2021 — Monty Python Musical 'Spamalot' Acquired By Paramount From Fox. Eric Idle, who wrote the music and lyrics for the original Broadway musical ... Musical gigs. Written on 15 November 2015. Good times playing in the musical The Full Monty with great musicians. Full Monty band. Action · Adventure · Animation · Biography · Comedy · Crime · Documentary · Drama · Family · Fantasy · History · Horror · Musical · Reality · Romantic · Sci-Fi. The Full Monty Broadway @ Eugene O'Neill Theatre - Tickets and Discounts | Playbill. Six unemployed steelworkers try to make some quick cash by taking off ... 2 Jan.
At times, students need online quiz help from experts. In such circumstances, connecting with Request Assignment Help is the best option, as we have a panel of experts who consistently perform well providing online exam help, online quiz help, and online examination services. Candidates who want to test internationally are required to pay an international scheduling fee of $150 plus Value Added Tax (VAT) where applicable. Prepare for the exam by first reviewing the CRC® Test Specifications. The 4-hour, 200-question multiple-choice exam covers the domains of practice and knowledge required to perform the tasks listed in the CRC® Test Specifications, and these should be well understood before taking the exam. With an online examination, students can take the exam online, in their own time and with their own device, regardless of where they live. You only need a browser and an Internet connection. This will boost their morale and self-confidence for all future exams and jobs. We have a team of experts equipped with modern means to assist students with all their online exam and quiz related queries. Please check your spam and/or junk folder before contacting us at email@example.com for support. Essential details: all Internet tests can be taken only once per purchase, i.e. a written exam where students are tested in real time as they enter their responses in an online environment. Similarly, the multiple-choice pattern asks you to select the correct response from among several options in an online environment. It is therefore to ease a student's inner fear that a help program has been developed at Assignment Consultancy to help you secure the best grades in your exam. An online examination is a test conducted online to measure participants' knowledge of a given topic.
In the old days, everyone had to gather in a classroom at the same time to take an exam. "Believe me, they have some of the best people to help with your maths, finance or statistics online exam. I trust them and got very good grades." The level of performance is decided by the student, who can retake the examination as many times as one wishes, until satisfied. After analysing every detail of the exam provided by the student, our specialists begin to design a teaching plan. Our Online Exam Help is a one-of-a-kind service which helps students find better ways of scoring good marks and become accustomed to this system. Before mailing in the application, make sure to complete the form by attaching all of the requested information, such as a copy of your completion certificate, a copy of your active CPR card, and so on. The exam assistance service at Tutorial Assignments is low priced and of top quality; 100 percent original answers are provided to students, and your academic success is our best achievement. We believe in providing the best exam help service to secure good marks. All our specialists are highly proficient and have wide experience in helping students with their online exams.
We developed NeuroManager, an object-oriented simulation management software engine for computational neuroscience ... by multiple people. Overall, NeuroManager provides the infrastructure needed to improve workflow, manage multiple simultaneous simulations, and maintain provenance of the potentially huge amounts of data produced during an extensive research study. NeuroManager uses the Simulator base class to handle most aspects of Simulator operation, with sub-classes to make the Simulator specific to a given SimCore and further sub-classes to make the Simulator specific to a user's Model and research goals. An advantage of this approach is that, as research demands guide the researcher through various simulator/simulation configurations, the nuances are captured in the Simulator object tree, so the researcher can make use of inheritance to simplify the development of a new configuration; see Figure 3. Figure 3: Simulators are defined hierarchically. Left, the general class hierarchy to implement a Simulator. Right, an example implementing three different Simulator types. Arrows indicate the super-class. The implementation of a Simulator can be divided into Core, ... The class hierarchy provides isolation and inheritance of each component of the heterogeneity of machines and job distribution resources that comprise the Machine Set. An excerpt of the Class Hierarchy can be seen in Figure 4; the full tree is in the User Guide. There are three ancestral lines which combine to form a SimMachine with specific and complete functionality. The first ancestral line supplies the infrastructure to run jobs on a particular resource, in this case an SGE cluster.
This line is divided in two sections: the first section deals with building a generic machine, using the class, and then provides the functionality to transfer files between host and remotes, as well as the settings necessary to compile MATLAB code. The second part of the line adds the basic and specific job submission methods, culminating in an object that can submit jobs to an SGE cluster (MachineSGECluster). The MATLAB compilation ability, through the class, is implemented as a subclass because MATLAB compilation requires the ability to transfer files from the host to the remote. The second ancestral line is the line that provides all the elements to deal with Simulators and interactions with Simulations. The third is the line that provides information about the simulation engine to be used on the specific machine, in this case NEURON. The three lines are combined using multiple inheritance to form machine classes that can host NEURON-based Simulators on a specific SGE cluster. Instantiations of this class are Machines used by NeuroManager. Figure 4: The SimMachine Class Hierarchy. A SimMachine is the combination of three inheritance lines of objects. In this example a first ancestral line (1) comprises classes providing basic communications; file transfers between host and remote; ability ...

NeuroManager properties

Provenance

Provenance is the documentation of the processes and data that have produced a digital object (Simmhan et al., 2005; Moreau et al., 2011). For simulations, provenance requires recording the data, processes, and conditions under which a given simulation's output products were obtained, together with those products, with the intent of proving correctness and reproducibility and of providing test cases and conditions for software validation (Miles et al., 2008; Gewaltig and Cannon, 2014).
In order to record the provenance of neuroscience simulations, NeuroManager puts all program output and simulation results into a time/date-stamped directory and keeps a detailed ...
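NeuroManager itself is written in MATLAB, but the way its three ancestral lines combine through multiple inheritance into one SimMachine can be sketched in C++. All class names and method bodies below are illustrative stand-ins rather than NeuroManager's actual API (only MachineSGECluster is named in the text above):

```cpp
#include <string>

// First ancestral line: job submission on a specific resource type.
struct MachineSGECluster {
    // Illustrative: SGE clusters submit jobs via a qsub-style command.
    std::string submitJob(const std::string& script) {
        return "qsub " + script;
    }
};

// Second ancestral line: dealing with Simulators and Simulations.
struct SimulatorHost {
    std::string prepareSimulation(const std::string& model) {
        return "staged " + model;
    }
};

// Third ancestral line: simulation-engine specifics (here NEURON).
struct NeuronEngine {
    std::string engineCommand() const { return "nrniv"; }
};

// Multiple inheritance combines the three lines into a machine class
// that can host NEURON-based Simulators on an SGE cluster.
struct SGENeuronMachine : MachineSGECluster, SimulatorHost, NeuronEngine {
    std::string run(const std::string& model) {
        return submitJob(engineCommand() + " " + prepareSimulation(model));
    }
};
```

The design point is that each concern (job submission, simulator handling, engine specifics) evolves in its own hierarchy and is mixed together only at the leaf classes that NeuroManager instantiates as Machines.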
...with the potential for long-term work in the future if things align. I have a few mockups (in Figma) and a few low-fidelity sketches that I need help with. We will start with a web application, preferably using React and a Node.js and Python back end (Flask/Django). We'll be creating a fully functional rich website - some primary features include - creating ...warehouse stocking many thousands of SKUs (both items for sale and materials), but without the means to manage and control it. We have developed an internal backend service, with a web services API described in YAML, which already provides all of the necessary management functions, including managed Stock Take events. We also have an administrator's dashboard. Automobile Vehicle Service Scheduling. Scope of website service: customer data management/monitoring/analysis/MIS. Add customer and vehicle details. Auto-generate scheduled service dates for all input data with a predefined scheduling formula. Generate reports of services due based on given criteria. Book service data for given dates. Update and mark open/completed after each due date or on aft... Requirements for conference: I need the current antmedia conference interface to be adjusted to look like what's in the attached mockups. The user should have different presentation options as seen in the attached image; if they click on multiple persons then we will have two on the view at the same time. The user should have the ability to add and remove panelists from the main view without removing... I already have the software that I would be using, although I would not mind making it a bit more user-friendly. I would like to create a wonderful push-to-start Start Screen or Welcome screen along with a short animation, an option screen on payment, and an end screen with a push button for printing. I really like the quality of animation in the newer casino slot machine games where the characte...
We need a logo s...CRM ([login to view URL]) - it is a travel agency management tool. Thanks all for your designs! We are looking for a clean, well designed logo. NO stock logos - we need a custom design; all images will be double-checked for uniqueness. I have uploaded a screenshot of the app to give an idea of where it will sit and the style of the app. ...Theme & Beaver Builder to match the attached mockup. You will be provided with all staging server, theme and plugin files and all graphics etc. Your work will be overseen by a web developer. All work must be responsive. We have access to BB templates from Astra that you may use as the base template. This site is only one page. CSS, HTML, WordPress. ...Code/Other (Nintex, K2). Creating views and audience with XHTML/Silverlight. MVC .NET will be a plus: the candidate must have strong experience in ASP.NET MVC, C#, SQL and Web API. The candidate must have good knowledge of Bootstrap and be able to create good-looking UIs. The candidate must have good problem-solving skills and a good understanding. Specifically, I need to add a name field for the subscription banner integrated with Mailchimp. It's already set up but just asking for an email address. Looking for overall help with improving the look and feel of the site in line with the brand. I want to improve the content of my app (on the server side) by scraping websites according to simple queries from results on Google (YouTube videos + web sites), with storage in a database table. A process is already done but the current result is not good enough (bad accuracy, display, quantity etc.). 1 - improve the current stock. 2 - live process every time ...frontend developer who is expert in UI design to fix all the design issues and make the web app mobile-responsive and a PWA. Need to add service workers and work on push notifications for the PWA.
Already 60% of the pages are mobile-responsive; just a few of the pages need to be aligned, along with fixing a few issues around the design, such as the Inbox layout in the examples: www,[login to view URL] A website for cryptocurrency is to be developed for me. 1. The data displayed on this website has to be taken from the Busscan API. 2. Trust Wallet integration is also to be done. 3. API cost will be extra. Hi. We are a web design company looking for someone to assist us with overflow projects. We quote the jobs at fixed prices and will split them 50/50 with what we can get from the customers. I have a small job at the moment; my budget is $250 for a single homepage only, with panels done in Porto. Need an experienced WordPress designer. HI, I want to make a website to sell my medical courses ...it will have an option to make payment via PayPal. 1. How will it work if I want to add products in the future? 2. I do not have any domain name yet; we will discuss that. 3. These are two sites which can be used for reference: [login to view URL] [login to view URL] 4. I want a facility to post a sample video too, below the selling c... Am looking for an experienced web designer who can work on the Wowonder theme and design based on Adobe XD. Client login, partner login, employees login, franchise login. I already have all the designs. I am looking for an SEO expert to help me optimize some of the pages on my site; some of the tags are done automatically, so I need to understand which tags are not being implemented properly. [login to view URL] Thanks. MVA Connect is looking for a new logo design. MVA Connect provides services to motor vehicle accident patients by connecting them with lawyers and healthcare providers. We are basically a directory service. We are looking for a professionally designed logo with the words "MVA Connect" in it.
The logo we are looking for should have a professional look. PROFESSIONAL DEVELOPER NEEDED TO BUILD AND DELIVER A PROPRIETARY SCRAPER TECHNOLOGY WITHIN THE TIME NEEDED. THIS JOB WILL NOT HAVE ANY MILESTONES; PAYMENT WILL BE IN FULL ONCE THE JOB HAS BEEN UPLOADED AND TESTED. MORE WORK WILL BE OFFERED ONCE A RELATIONSHIP BETWEEN FREELANCER AND COMPANY HAS BEEN FORMED AND TRUST HAS BEEN ESTABLISHED. PLEASE READ THIS AD AND MAKE SURE THAT THE PRICE IS CLEAR, W... Need help with my work project, as I have been behind and need to catch up to avoid getting fired! I need help designing and creating interactive mock-ups for a patient portal, and subsequently writing user stories and acceptance criteria. I would really appreciate your help and would like to keep in contact for regular help! My sister and I are starting a new bakery business and are in need of a website designer to help us get it off the ground. We have secured the domain and have branding elements ready to go. Looking for a simple website to start, but we want to think ahead so that adding e-commerce capabilities later is not an issue. Also, we want to make sure the site is optimized for search and includes forms for email ... Logo for an occupational therapy company that subcontracts other health professionals. It's located in Kelowna, BC, Canada and will serve the Okanagan Valley. The business name is Valley Integrative Health. Ideally the logo will have a simple valley/nature theme. ...we will have another 100 to do later on if this goes well. We estimate about 1 hour per VM, so we are setting our budget at about 25-30 hours. We found these steps on the web to complete this: 1. create a new VPS with KVM with the exact disk/RAM (maybe 1 GB above, because space is not always exact) 2. shut down the new VPS 3. lvsnapshot the existing LVM (not necessary... Below is a description of the requirements of the project. 1.
There will be a mobile application that could work for both the Android and iOS platfor...data can be stored reliably and systematically. 22. The development framework does not matter as long as it supports both the Android and iOS platforms. Cross-platform tools and web-based frameworks are okay. Looking for a Django expert to develop our website (main site, admin panel). We are looking for a full-stack Django developer. Our website will aggregate research publications and allow authors to create profiles, upload and manage their publications. The application will be hosted on Heroku. Key Features: - Account creation/membership management - Search function based on database fields (simple ... ...not possible, we will ask for a corresponding new design, based on Drupal TaraPro, that would be responsive on our e107 homepage. Our system runs on the latest PHP 7.4 version. We cannot change our CMS, because there is no support for the forum/plugins etc. anymore! Who can help make our design responsive and modern by using the Drupal theme? I have an old dating web app that needs a redesign. Let's say that I want to keep some of the style, but want to change some views and the way the user navigates. I'm thinking about 7 or 8 views. Something like: 1. Chat view 2. User chat list 3 and 4. Profile editing screens (photo uploads and personal info) 5. Main photo view (with some user info) 6. Settings. I have started to make meditation music and soundscapes. I need a logo to use as my "brand" or "moniker" when releasing this music. I would like the logo to include a Buddha, and I would like the image to give a sense of sleep, calm, and relaxation.
Encoded output file audio/video plays too fast in ffplay

So I've been experimenting with making a screen-casting application and have run into another obstacle. When I play my output file in ffplay.exe, my video and audio are correct, but extremely fast (like they're in fast-forward). When I play the same file in VLC, WMPClassic, or open it in Sony Vegas, they detect/play no sound at all and the video is the proper speed, but they detect the file as being 28 seconds long when it's really only 20. I'm assuming this has to do with the frame->pts and dts, but it seems like I've tried just about every combination of av_rescale_q() before and after encoding both video and audio, manually tweaking pts/dts values, changing the frame rate, etc., but to no avail. In ffplay (the only player that seems to get the audio along with the video) it at least seems as if the audio and video are synced, but then again it's hard to tell since everything seems to be at 3-4x speed. I started working from the muxing.c example, so most of my code is the same (except for how I'm generating the audio and video, and the settings to make it encode in the baseline profile). So with all that, I'm really just stumped as to how I could have gone so wrong with so few changes to the documentation example. I'm not sure how much of my code I should post here, but just about all of my changes were made in the write_video_frame/write_audio_frame functions; please let me know if more information is needed. FFMPEG Build: Zeranoe's ffmpeg-20130428-git-0fb64da. Video Container: MP4 (baseline profile). Video Codec: H264. Audio Codec: AAC.

Re: Encoded output file audio/video plays too fast in ffplay

I've faced a similar problem before. I was just doing a stream copy, i.e. changing the container format to AVI. I think you are correct: your pts and dts may not be right. I too ended up playing with various combinations. Try the following; it worked for me.
This is just for stream copy, which involves no decoding:

```
outpkt.pts = av_rescale_q(inputpkt.pts, input_video_strm->time_base, output_video_strm->time_base);
```

Similar rescaling is needed for dts, and you will need to do it for the audio stream too. One thing to note is that if you are encoding a decoded packet, then you can forget about pts and dts: the encoder automatically does that for you.
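For reference, av_rescale_q(a, bq, cq) conceptually computes a × bq ÷ cq with the time bases treated as rational numbers, which is why it converts a timestamp from one stream's time base to another's. The simplified sketch below shows the arithmetic only; the real FFmpeg function additionally uses 128-bit intermediates and configurable rounding, so treat this as an illustration rather than a drop-in replacement:

```cpp
#include <cstdint>

// Minimal stand-in for FFmpeg's AVRational.
struct Rational { int num; int den; };

// Simplified av_rescale_q: convert timestamp a from time base bq to cq,
// i.e. a * (bq.num/bq.den) / (cq.num/cq.den), folded into one division.
int64_t rescaleQ(int64_t a, Rational bq, Rational cq) {
    int64_t b = static_cast<int64_t>(bq.num) * cq.den;
    int64_t c = static_cast<int64_t>(cq.num) * bq.den;
    return a * b / c;
}
```

For example, a pts of 900 in a 1/90000 time base becomes 10 in a 1/1000 (millisecond) time base: the same instant, expressed in coarser ticks.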
Feature Request: add (generic) private git repository

Description: First of all, great product. I really like the work you're doing! Instead of just adding a public git repository, it would be nice to have the ability to add a generic git repository authenticated through https/ssh. In addition, it would be nice if the git repo were regularly checked for updates, with an update triggering an auto-deployment of the stack, or if there were a REST endpoint to manually trigger the update. This would make it possible to host a swarm definition on a private git server (like Bitbucket, formerly Stash) and deploy from that source using Portainer. The overall goal is better support for continuous deployment by Portainer. I would offer to prepare a pull request, but I sadly don't speak any Go. I can get involved if it's any JavaScript stuff.

First step on this is available via portainer/portainer:pr1722. It gives the ability to enable authentication when creating a stack from a git repository and allows the user to specify a username/password combo. @chrisandchris @olilfly @dcharbonnier @gopi-para-mpf @zeenlym any of you keen to try this image and give us some feedback?

I will take a look at it next week.

I just tested the image. It pulled the compose file from a private git repository, with username and password passed into the authentication toggle, and deployed a stack. It works. Best, Gopi
If you have an active subscription license to the FLAMES® Development Suite and are having a problem developing your FLAMES-based software, Ternion's technical support staff may be able to help. In order to solve the problem, we must be able to reproduce it quickly and consistently using our computers and our copy of the Development Suite. Therefore, we ask that you take the following steps before you contact us: - Make sure the problem is with the FLAMES application programming interface (API). In other words, make sure you can demonstrate that the problem you are having is the result of a problem within one of the subroutines documented in the FLAMES Developers Manuals. Problems in FLAMES bundled component source code, in subroutines supplied by other vendors, or in source code that you write, including compiling and linking errors, are not covered by Technical Support. - Create the simplest sample source code possible that exhibits the problem you are having. Ideally, you should reproduce the problem within one of the FLAMES bundled models by making only a slight modification to the model source code. The objective here is to create something you can send us that will allow us to quickly and easily reproduce your problem. If your problem occurs in code that you are not able to send us, or if you must send us more than just a few dozen lines of code, we will not be able to quickly and easily reproduce the problem, and hence we will not be able to help you. - Create the simplest scenario possible that will exhibit the problem you are having. Ideally, base your scenario on one of the tutorial scenarios described in your product documentation. Again, the objective is to create something you can describe to us or send us that will allow us to quickly and easily reproduce your problem.
Sometimes, after following the steps described above, customers find that the problem was not really with the FLAMES API and are hence able to solve the problem quickly themselves. If, after following these steps, your simple example source code and scenario clearly demonstrate a problem with the FLAMES API, we want to know about it and we want to help. Please send us the information listed below.
- Your product license number.
- Your name, phone number, e-mail address, and mailing address.
- Your computer’s Host Name and Host ID. (This information is displayed by the product installation program. It can also usually be found in your product license file.)
- Your computer’s make, model, and serial number.
- The name and version of your computer’s operating system.
- The name and version (32 or 64-bit) of the product(s) you are using.
- A description of the simple scenario you are using to demonstrate the problem (if applicable), including a list of the datasets in the scenario. Most, if not all, of these datasets should be datasets supplied with the simulation.
- An exact copy of any input data required to reproduce the problem (if you had to build any of your own datasets).
- An exact copy of the source code of the routines that illustrate the problem as described above. Large files and binary files may be uploaded to Ternion’s FTP site. Please contact Ternion® Customer Support for instructions.
- A complete description of the problem you are having, including the full, exact text of any error messages that are generated and a step-by-step description of how to reproduce the problem.
- The severity of the request (High, Medium, Low).
- The latest date by which you need a solution to the problem.
After you have gathered all the information that we require, you may provide it to us using one of the methods listed under How to Get Support.
OPCFW_CODE
The DHCP protocol also provides a mechanism whereby a client can learn important details about the network to which it is attached, such as the location of a default router, the location of a name server, and so on.
OPCFW_CODE
ArangoDB Networking HTTP Layer Benchmarking | ArangoDB 2012 …or: The Great Server Shootout
ArangoDB is a database server that talks HTTP with its clients: clients send HTTP requests to ArangoDB over TCP/IP, ArangoDB will process them and send back the results to the client, packaged as HTTP over TCP/IP. ArangoDB’s communication layer is thus the foundation for almost all database operations, and it needs to be fast so that it does not become a bottleneck, blocking further database operations. To assess the actual performance of the communication layer, we did some benchmarks that compare ArangoDB’s network and HTTP processing capabilities with those of other popular servers. ArangoDB is a database server and thus offers a lot more than just HTTP processing, but in this post we’ll be concentrating on just the networking/HTTP parts of the product. It is likely that we’ll publish results of further benchmarks that involve other parts of the system in the near future. Though we disclose the methodologies and results of this benchmark, the usual disclaimers apply here as well:
- The benchmark results may vary depending on the system you compile and measure on; furthermore, results will differ depending on the product configurations/settings.
- Just a specific part of the products (networking/HTTP handling) was measured, not the complete functionality of each product.
- We measured just specific aspects of performance (total time, requests per second), although there are obviously more aspects one could measure (CPU utilisation, memory usage, etc.).
- The test cases reflect just a few things of what one can do with a server. They might or might not be realistic depending on your workload/usage patterns.
- We have compared general-purpose products to their stripped-down counterparts. The use cases for the products tested are not fully identical.
- Don’t trust any results without questioning them. If in doubt, rerun the benchmarks and measure yourself!
We were interested in how fast ArangoDB could handle HTTP requests. And what kind of tools could be better at handling HTTP requests than…, well, web servers? So for this benchmark, we have conducted load tests with the following products: we have picked well-established general-purpose web servers such as Apache httpd as well as their more stripped-down and more specific counterparts such as Nginx and Gatling. Apache httpd 2 has several different multi-processing modules (mpms). As we weren’t sure which one we should use as a baseline, we have used the following Apache mpms in the comparison: event, worker, and prefork. We only picked open-source tools for our tests so we could compile everything ourselves with the same settings. No pre-optimised binaries have been used. And as we were interested in measuring the HTTP layer, we only picked tools that speak HTTP out of the box. For all of the above server products, we have measured the total time it took a client to send 100,000 (100K) identical HTTP GET requests to the server and get the servers’ responses back. The total time it took the server to answer all requests was also translated into “requests per second”. For each product tested, the number of concurrent client connections was increased from 1 to 512 to also assess the servers’ scalability. For each concurrency level, 3 test runs were conducted and the average results of the 3 runs were used as the overall result for that concurrency level. Two different test setups have been used:
- In one scenario (“local” scenario), the client was located on the same physical host as the server.
- In the other scenario (“network” scenario), the client was located on a different physical host and the requests went over the network. Client and server were located in the same network and used the same switch.
The communication between client and server was HTTP over TCP/IP in all cases. HTTPS/SSL has not been tested.
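The per-level aggregation described above (total time for 100K requests, three runs averaged, then translated into requests per second) can be sketched as follows. This is an illustrative reconstruction of the arithmetic, not code from the benchmark itself; the function names are ours:

```python
# Sketch of the benchmark's result aggregation: each ab run yields a
# total wall time for 100,000 requests; throughput is requests/second,
# and the three runs per concurrency level are averaged.
N_REQUESTS = 100_000

def requests_per_second(total_time_s: float) -> float:
    """Translate one run's total time into throughput."""
    return N_REQUESTS / total_time_s

def level_result(run_times_s: list[float]) -> float:
    """Average throughput over the runs at one concurrency level."""
    return sum(requests_per_second(t) for t in run_times_s) / len(run_times_s)

# Example: three runs of 10 s, 11 s and 12 s at one concurrency level
print(round(level_result([10.0, 11.0, 12.0]), 1))  # → 9141.4
```

Note that averaging the per-run throughputs (as above) and dividing the request count by the average run time give slightly different numbers; either convention works as long as it is applied consistently across all products.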
HTTP Keep-Alive has been used for all client requests. In the “local” scenario the client and the server parts were running on the same physical host, so they might have competed for the same resources. This should not be a problem because it will not change the relative results, only the absolute values (which we are not too interested in). All server products were installed on the same physical server. To get comparable results, no prefab server binaries have been used; all server products were downloaded and compiled from source on the target environment. Compilation was done with gcc/g++ 4.5.1 and -O2 optimisation level for all servers tested. As all products were installed on the same physical host, the server configuration was identical for all products:
- Linux Kernel 184.108.40.206-0.11, cfq scheduler
- 8x Intel(R) Core(TM) i7 CPU, 2.67 GHz
- 12 GB total RAM
- 1000Mb/s, full-duplex network connection
- SATA II hard drive (7,200 RPM, 32 MB cache)
As we were not interested in disk performance, all logging facilities offered by the server products (e.g. access logs, error logs) were turned off. All products were run with the same normal-privileged user account so the operating system imposed the same limits on them. CPU stepping was turned off during the tests so all server CPUs ran at their maximum frequency. Recurring jobs (cron etc.) were disabled on the server for the duration of the tests. The test client used in all cases was ApacheBench 2.3. It is known that ApacheBench is single-threaded and can itself become a bottleneck for high-throughput tests. Furthermore, the aggregation of test results within ApacheBench at the end of each run might also skew the results slightly. However, we found these two issues not to be a problem in our case because, with the very small HTTP GET requests sent, ApacheBench produced sufficient load and did not become a major bottleneck.
Slightly skewing the results in the aggregation phase was also unproblematic because we knew it would be the same in all test runs and would not affect the relative results at all. So we decided to use ApacheBench because of its ease of use and widespread availability (so others can reproduce the tests if they want to). The command used for the tests was: ./ab -k -c $CONCURRENCY -n 100000 $URL
The results consist of the following data series:
- mongrel2: Mongrel2-1.7.5
- nginx: Nginx 1.2.1
- apache_event: Apache httpd 2.4.2 event mpm
- apache_worker: Apache httpd 2.4.2 worker mpm
- apache_prefork: Apache httpd 2.4.2 prefork mpm
- gatling: Gatling 0.12
- arangod-file: ArangoDB 1.0-alpha2
The actions performed for each series were identical and comparable: the same static file was requested by the client, read by the server, and its contents returned to the client. Please note that this benchmark’s goal was not to set any world records, so the absolute result values aren’t very interesting. They will likely vary with the hardware used anyway, and using better test servers will likely produce better results. What was more interesting to us was to see the relative performance of ArangoDB compared to the performance of the other products, and its ability to scale under increasing load, again compared to the scalability capabilities of the other products. The tests were not conducted to disrespect any of the other products at all. We all know and believe that the other products are well-established and that there are plenty of use cases for them. They were just used as a reference. Full results can be found in the document accompanying this post: Performance test results
Results of the “local” scenario
ArangoDB in file server mode (series arangod-file) was able to handle more requests than any of the tested Apache2 variants for all tested concurrency levels. It could handle about the same amount of requests as Nginx with 1, 2, and 4 concurrent connections.
Nginx was better at 8 concurrent connections, but after that, ArangoDB was able to handle significantly more throughput. ArangoDB’s throughput increased (though not by much) up to 128 concurrent connections.
Results of the “network” scenario
In the “network” scenario, Nginx outperformed all competitors for up to and including 8 concurrent connections. Its performance in that low concurrency segment was undisputed. From that point on, the competitors were catching up. ArangoDB started outperforming the others from 32 concurrent connections. Throughput in ArangoDB increased up to 512 concurrent connections. Overall, it seems that ArangoDB’s network and HTTP layer can generally keep up with those of other HTTP-based products in terms of throughput. For low concurrency situations, some highly optimised products such as Nginx performed better. With increased concurrency, ArangoDB caught up and outperformed the other products. The tests showed that throughput in ArangoDB could be increased up to 128 concurrent connections (local scenario) and 512 connections (network scenario). At these concurrency levels, the other products showed stagnation or a decline in throughput. Overall it seems that the networking and HTTP stack in ArangoDB is not likely to be a major bottleneck for regular operations. However, as some of the other products showed better throughput at low concurrency, it seems that there is still room for optimisation in ArangoDB’s request handling. Especially, it would be interesting to know what Nginx does to achieve that superior performance in low concurrency situations. If anyone knows, please leave a comment. As mentioned before, we haven’t looked at memory consumption in these tests. Neither did we check CPU utilisation and other factors. These would be other interesting things to look at when performing additional tests.
OPCFW_CODE
His research interest focuses on urban thermal environment, climate change, ecosystem services and sustainability, using geospatial technologies (GIS, RS, geostatistics, geovisualization, etc.) for sustainable development at multiple scales. Since graduating from the Graduate University of the Chinese Academy of Sciences (CAS) in July 2006, he has been working as a professor at Jiaying University, teaching several courses such as Principles of GIS, Remote Sensing, Application of GIS Software-ArcGIS, English for GIS, Environment and Sustainable Development, and so on. He earned his Ph.D. in human geography from the Graduate University of CAS in July 2007 and earned a postdoctoral certificate in environmental science and engineering from the Research Center for Eco-environmental Sciences, CAS in December 2011. He has been a trustee of the Guangdong Society for Remote Sensing and Geographic Information Systems and a member of the American Geophysical Union (AGU) since 2011, a member of the Geographical Society of China (GSC) since 2006, and a member of the Guangdong Association for Sustainable Development since 2004. He was a visiting scholar at the University of Michigan, Ann Arbor, from May 2013 to Aug. 2014, and is currently a visiting scholar at the University of Hawaii at Manoa from Sep. 2018 to Sep. 2019.
As a principal investigator (PI), he completed one sub-project of the Knowledge Innovation Program of the Chinese Academy of Sciences (Grant No. KZCX2-YW-BR-03) in 2009, one project of the Science and Technology Program of Guangdong Province in 2016, one project of the Open Fund of the State Key Laboratory of Loess and Quaternary Geology in 2015, one collaborative sub-project of a high-level science and technology innovation platform ("985" Program) of the Institute of Global Environmental Change, Xi'an Jiaotong University in 2015, one teaching reform project of Guangdong Provincial Colleges (CS100-GDCS041) in 2012, and one key project of the New Century Education and Teaching Reform Program of Jiaying University in 2011. As a main team member, he participated in carrying out two projects of the Natural Science Foundation of China, one project of the Natural Science Foundation of Guangdong Province, and three projects of the Science and Technology Program of Guangdong Province. At present, his ongoing project funded by the Natural Science Foundation of Guangdong Province focuses on investigating the thermal environmental effects of landscape pattern evolution in the Pearl River Delta Urban Agglomeration. Since 2000, he has published more than 60 peer-reviewed papers in scientific journals and conferences. He has directed several undergraduate teams in the Contest of University GIS across China, winning a second place prize, an excellent prize, a consolation prize, and an outstanding instructor award, among others. He has also won honorary titles such as excellent CPC member from the Guangzhou Institute of Geochemistry, CAS, and excellent individual in scientific research from Jiaying University. In addition, he was selected for the “Colleges Qian-Bai-Shi Talents Training Program” of Guangdong Province and was among the first cohort of the Young and Middle-aged Backbone Teachers Key Training Program of Jiaying University.
OPCFW_CODE
Bellawatt builds transformative and user-friendly software products for some of the leading energy companies including PG&E, Sunrun, and the DOE. The energy industry is undergoing massive changes and our clients are increasingly using modern software to help solve their biggest challenges and unlock new opportunities. We are growing and looking to add a Senior Product Designer to our small team of energy veterans and software experts. You will advocate for and design the best possible products for our variety of energy industry clients ranging from utilities to novel software & hardware companies. On a day-to-day basis, you will be conducting full-cycle design work including participating in user research, strategic UI/UX and visual design, and delivering high-fidelity prototypes. Your role will involve deeply understanding stakeholder needs and you’ll work directly with our product managers and developers, and they hope to learn from you as much as you’ll learn from them. For more about our team and how we work see our work philosophy: https://bellawatt.com/work. Please note that we are async by default and strive to avoid unnecessary meetings. Instead, we hope the extra time in your day will be spent on creating truly thoughtful designs specific to the unique problem spaces you will encounter. In case it hasn’t come across explicitly, you’ll have a lot of autonomy and we’ll be relying on your sense of product and strategy to make decisions on behalf of all of us. As such, we are requiring 2+ years of experience designing software, and hope you would describe yourself as a: - Fantastic Communicator. You know how to articulate your reasons for or against critical decisions, and are capable of disagreeing and committing. You equally enjoy giving feedback and receiving advice for both soft and hard skills. Your ability to communicate is excellent across all mediums: spoken (meetings), written (reports), and presented (presentations). - Detail-Oriented, Self-starter. 
You are detail-oriented even when writing an internal message. In addition, you’re able to handle ambiguity, anticipate problems, and don’t need to be handed the whole picture to get started. You may have even been a freelancer, consultant, or entrepreneur previously. - Non-Dogmatic. You understand that most decisions aren’t binary and require an understanding of the trade-offs. Similarly, you do your best to try to see things from other people’s point of view. In addition to your general skills, you have well-established experience fulfilling key Senior Product Designer responsibilities, including: - Full Product Cycle UI/UX Design Experience. You have experience going through a full product lifecycle: performing and synthesizing user research and customer feedback into product requirements, working with product management & engineering to prioritize and work through constraints, and delivering from both low-fidelity to high-fidelity, production-ready designs. - Deep Design Process & Tool Knowledge. You’re able to iteratively translate PRDs into User Journeys, Information Architecture, Light Wireframes, to complete polished High-Fidelity Wireframes and Prototypes. You’re also fluent in working efficiently with collaboration and prototyping tools (e.g., Figma, Miro, Adobe CS). - Visual Design chops. You have a firm grasp of Visual Design, including typography, desktop or mobile user interface, color, layout, and iconography and aren’t shy about completing UI designs with visual design polish or occasionally working on non-UI/UX work, such as marketing material. - Complex Stakeholder Management & Vision Setting. You have experience being the voice of design on a team with a variety of stakeholders ranging from upper-level management, to various teams throughout an organization; in addition, you have a keen ability to translate cross-functional needs into clear direction for your team that ties into a cohesive, motivating vision.
OPCFW_CODE
- Configuration and Testing
- Component Configurations
- Process Message Based on Property
- Validate Input
- Error handling configuration
- Connection Configuration
- Event Configuration
- Add Metadata
- HTTP Authorization Token
- Channel Identifier
- Batch Events
- SSL Configurations
- Threadpool Configuration
- Functional Demonstration
The Splunk Event Collector microservice sends application events to a Splunk deployment using HTTP or HTTPS (Secure HTTP) protocols. It generates tokens for authentication, enabling the HTTP client to send data to the SplunkEventCollector in a specific format, thereby eliminating the need for an intermediate microservice to send application events.
Configuration and Testing
The following attributes can be configured in the Component Configuration panel as shown below.
Figure 1: Component Configuration properties
Process Message Based on Property
This property lets the component skip certain messages from processing.
Validate Input
If this attribute is enabled, the service tries to validate the input received. If disabled, the service will not validate the input. For more details, refer to the Validate Input section under Interaction Configurations in the Common Configurations page.
Error handling configuration
The remedial actions to be taken when a particular error occurs can be configured using this attribute. Click the ellipsis button against this property to configure Error Handling properties for different types of errors. By default, the options Log to error logs, Stop service and Send to error port are enabled. Refer to the Error Handling section in Common Configurations for detailed information.
Figure 2: Connection Configuration
The name or address of the machine on which the Splunk server runs. The port on which the above server runs. Click the Event Configuration ellipsis button to provide Event Configuration values.
Figure 3: Event Configuration
This returns a list of sources, source types, or hosts from a specified index or distributed search peer. Enable this option to configure the following properties that appear. This identifies the index in which the event is located. The source of an event is the name of the file, stream, or other input from which the event originates. The source type of an event is the format of the data input from which it originates; the source type determines how your data is to be formatted. An event host value is typically the hostname, IP address, or fully qualified domain name of the network host from which the event originated.
HTTP Authorization Token
The Event Collector token. Creating an HTTP Token
To send all events received by the component as raw events. Send requests in batched events. Number of events in a batch. Click the SSL Configurations ellipsis button to launch the editor to set SSL configurations. Refer to the SSL Security section for more information. This property is used when there is a need to process messages in parallel within the component while still maintaining the sequence from an external perspective. Click the Threadpool Configuration ellipsis button to configure the Threadpool Configuration properties.
Figure 4: Threadpool Configuration
Enable Thread Pool
Enable this option to configure the properties that appear as below. Number of requests to be processed in parallel within the component; default value is '1'.
Batch Eviction Interval (in ms)
Time in milliseconds after which the threads are evicted in case of inactivity. New threads are created in place of evicted threads when new requests are received. Default value is '1000'.
Functional Demonstration
To send an application event to the SplunkEventCollector microservice, configure SplunkEventCollector as described in the Configuration and Testing section above and use the Feeder microservice and Display microservice to send a sample input and check the response, respectively.
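As a rough illustration of what such an event submission looks like on the wire, the sketch below builds (but does not send) an HTTP request in the shape Splunk's HTTP Event Collector expects: a POST to the /services/collector endpoint carrying an `Authorization: Splunk <token>` header and a JSON payload with the event plus optional index/source/sourcetype metadata. The host, port, and token values are placeholders, and this is not the microservice's actual implementation:

```python
# Sketch of a Splunk HTTP Event Collector (HEC) request. Values such as
# the host name and token are placeholders for illustration only.
import json
from urllib.request import Request

def build_hec_request(host, port, token, event, index=None,
                      source=None, sourcetype=None, use_ssl=True):
    scheme = "https" if use_ssl else "http"
    payload = {"event": event}
    # Optional metadata, matching the Event Configuration fields above
    for key, value in (("index", index), ("source", source),
                       ("sourcetype", sourcetype)):
        if value is not None:
            payload[key] = value
    return Request(
        f"{scheme}://{host}:{port}/services/collector",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Splunk {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_hec_request("splunk.example.com", 8088, "TOKEN-1234",
                        {"message": "app started"}, index="main")
print(req.get_header("Authorization"))  # → Splunk TOKEN-1234
```

Sending the request (e.g. with `urllib.request.urlopen`) would require a reachable Splunk deployment with a valid Event Collector token; the batching and SSL options described above would map onto how many events share one payload and whether the `https` scheme is used.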
Figure 5: Demonstrating a scenario with sample input and output Figure 6: Input message sent using feeder for S3Upload Figure 7: Output demonstrating the success
OPCFW_CODE
Hello, today we will talk about detection methods for the new version of BlackEnergy (4.0?) using the QualysGuard Policy Compliance module. According to the results of our research, as well as some other reports that will be published later, we found some common signs of attack, known as IOCs (Indicators of Compromise), and we managed to test them using the abovementioned module. The IOCs were based on the analysis of the infected systems’ behavior and the fact that the malware was configured for each attack individually. The IOCs were divided into 5 groups according to the following criteria: We can state that the presence of two or more controls from Groups 3, 4, and 5 is proof of system compromise. We checked which files and system settings were changed, and based on this analysis we created User Defined Controls (UDC) for Qualys Policy Compliance. The controls can be divided into 5 groups: The acpipmi.sys signature looks like this: While there is usually no signature for the reference file. The other reference files have signatures like this: And this is a typical driver with the BE backdoor and a self-signed certificate: Also pay attention to the file details description for any inconsistencies or mismatches; for example, on the left you can see a screenshot of the infected file and on the right a reference: And here is another comparison; the infected file is on the left and the reference is on the right. In order to analyze the suspected systems (and these can be any Windows machines) you need to download the controls listed at the end of the article, create a policy, and run a scan with QualysGuard Policy Compliance. Here is a step-by-step guide: We will be grateful to receive your feedback, good hunting! Take care of yourself. P.S. You can download all the mentioned UDCs in the zip archive below. A trial QualysGuard Policy Compliance account can be requested here. SPID_0004_QUALYS_adpu320_Correct.xml control checks the standard hash value.
SPID_0003_QUALYS_acpipmi_Correct.xml control checks the standard hash value.
SPID_0007_QUALYS_aliide_Correct.xml control checks the standard hash value.
SPID_0010_QUALYS_amdide_Correct.xml control checks the standard hash value.
SPID_0006_QUALYS_aliide_Compromised.xml control checks the hash of the file against two hashes of known malware samples, and also checks the file size.
SPID_0009_QUALYS_amdide_Compromised.xml control checks the hash of the file against two hashes of known malware samples, and also checks the file size.
SPID_0004_QUALYS_adpu320_Compromised.xml control checks the hash of the file against two hashes of known malware samples, and also checks the file size.
SPID_0002_QUALYS_acpipmi_Compromised.xml control checks the hash of the file against two hashes of known malware samples, and also checks the file size.
SPID_0012_QUALYS_svchost_Location.xml control looks for the svchost.exe file in the wrong locations.
SPID_0001_QUALYS_Registry_IOC_MicrosoftSecurity.xml control checks for a suspicious registry key.
SPID_0008_QUALYS_aliideStart.xml checks the status of services and whether they run with their default settings.
SPID_0011_QUALYS_amdideStart.xml checks the status of services and whether they run with their default settings.
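The hash-based controls above all follow the same pattern: compute the driver file's hash and compare it against a known-good reference value and the known-malware values. A minimal sketch of that logic follows; the hash sets are placeholders, not real BlackEnergy IOCs, and MD5 is used purely for illustration:

```python
# Illustration of the check the *_Correct / *_Compromised controls
# perform: hash the file and compare against reference and malware hashes.
import hashlib

KNOWN_GOOD = {"d41d8cd98f00b204e9800998ecf8427e"}   # placeholder reference hash
KNOWN_BAD = {"00000000000000000000000000000000",    # placeholder malware hashes
             "11111111111111111111111111111111"}

def classify(file_bytes: bytes) -> str:
    digest = hashlib.md5(file_bytes).hexdigest()
    if digest in KNOWN_BAD:
        return "compromised"
    if digest in KNOWN_GOOD:
        return "clean"
    return "unknown"

# The placeholder "good" hash happens to be the MD5 of empty input
print(classify(b""))  # → clean
```

A real control would combine this with the file-size and service-status checks listed above, since a hash mismatch alone only proves the file differs from the reference, not that it is malicious.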
OPCFW_CODE
Some time ago an opportunity came up to do a review of a new book written by MVP Julian Sharp. I took that opportunity and lots of good things came with it – one thing being inspiration for writing new blog posts. I will dedicate a whole future blog post to Julian’s book – stay tuned! Until then, I’ll tell you a bit about my experience from taking Microsoft certification exams. I have taken a few exams over the years. I am far from being a collector of Power Platform certifications, but recently I took one (thanks to Julian I scheduled for the exam PL-200) and I consider that a start. At the end of this blog post you’ll find information about my exam history. More importantly, here follows a vocabulary as well as some dos and don’ts from my perspective. This includes what certification exams to take, how to best prepare for exams and things you want to avoid. It can be a bit confusing with the vocabulary. You can read about certifications and you can read about exams. What is what? You can schedule for a certification exam, and hopefully you pass that exam and then you earn your certification. E.g. this is a certification exam: Exam PL-200: Microsoft Power Platform Functional Consultant and this is a certification: Microsoft Certified: Power Platform Functional Consultant Associate. Sometimes when we talk about it we say the certification PL-200, but that is the exam. Not that important, but good to know what is what in Microsoft terminology. Here is an example: Here follows a Microsoft Certifications Glossary. Badge – When you have passed an exam you will get a badge you can use e.g. on your website if you want to do that. You’ll access your badge on this page. Certification – When you have passed an exam you will earn a certification. You can see your certification on the same page as mentioned above. Exam – You schedule for an exam and hopefully you pass the exam. Exam Providers – Where you can schedule for taking an exam.
You can schedule for taking an exam at an exam provider (in their location on their computers) or with an exam provider online, and you can take it from home. MC ID – Microsoft Certification ID – An identification number for your certification profile. You only want one of these; read on and you’ll understand what I mean. This is yours personally, but it can be linked to your employer and your employer will benefit from your certifications. MCTs – Microsoft Certified Trainers. You can become a Microsoft Certified Trainer. The MCT program is for people who deliver training on Microsoft technologies, read more about it here. Transcript – Your Microsoft Certification Official Transcript contains all your active certifications and the achievement dates as well as your certification history. It also contains the exams you have taken and what date you took the exams. You’ll find it on the same page as your badges.
Do find out what certifications to take
As usual you can find lots of information in MS Docs and MS Learn. There is a place to go where you can get a great overview of available certifications. It is a really great starting place for learning more about what certifications to take. You can find it here: https://aka.ms/TrainCertPoster. You can also click on the different certifications and you will reach the information page about that exam, where you can read about what areas it consists of and what knowledge is measured. If you are uncertain about what Power Platform certification to aim for, MVP Julie Yack has written a great short explanation about each; you’ll find that information in the Power Wiki.
Do make a plan
There might be different reasons for you wanting to schedule for an exam. Perhaps your employer wants you to take it as part of your learning path forward – great.
Take this opportunity, prepare for it by reading the exam's information page and make sure you either have knowledge of the different areas or study and do labs in order to gain that knowledge. Perhaps you are curious about whether you already have the knowledge and want to test it. Perhaps you are an exam collector, with the desire to catch them all? 🙂 Either way, you should make a plan for which exams to take and make sure you have some time for studying. I need to make a plan of my own for taking more Power Platform exams! Without a plan, you might find that time flies and you have not yet scheduled the first one!

Do schedule an exam

Don't wait for the perfect timing; stick to your plan and make sure you schedule the exams. Here follows information about scheduling an exam. Don't miss the don'ts chapter below, which contains valuable information related to scheduling exams! If you want to schedule an exam you can go to this place and log in with your account (or create a new account if you do not already have one). From there you will have the option to find an exam to schedule.

Do prepare for the exam

So you have scheduled an exam, great! Now you need to start preparing for taking it. Start with the information page about the exam. Are there areas that are new to you? Find out if there are any labs to take. Look for others who have taken the same exams and written about it. E.g. MVP Carl Cookson has made mind maps for the exams he has taken and has also written articles about them. Power Wiki contains some other community resources as well for some of the Power Platform certification exams (here), and I have an ambition to give it a better structure as well as more information in the future. Please let me know if you have any input!

Don't think of certification exams as Q&As

Don't have the mindset that certifications are nothing more than a set of questions and answers.
The certification exam might be, but you'll gain so much more if you study for real and make sure you either have work experience in the different areas included in the measured knowledge or have done labs and explored the areas yourself. This is your chance to learn something new. In my opinion, having a certification does not in itself mean that much; it differs from person to person. If you have prepared for it the right way – made sure you are familiar with the different concepts and studied and explored the areas you are not familiar with – then it means you have learned something new. That gives your certification more value than if your mindset had been just to pass the exam. In my opinion, making a plan for taking certifications can be equivalent to making a plan for new things to learn; you just have to make sure you study the concepts, do labs and explore the content behind them.

Don't use your company/employer e-mail address when scheduling an exam

You might be super happy about your job and where you are today, but you do not know what happens in the future. Use your own private e-mail account when scheduling an exam for the first time. Otherwise you will have a problem the day you change employer. I have actually made this mistake in the past. When I changed to a new employer, I had to create a new account and then figure out how to get the exams taken with the old account moved to my new one. I now know how you can accomplish that (see Don't schedule exams with different Microsoft accounts), but it is much better to do it right the first time! So make sure you use your own personal e-mail address for your training profile (and for all other community sites too).

Don't schedule exams with different Microsoft accounts

As already mentioned, if it is the first time you schedule an exam, don't use your company/employer account.
If it is not the first time, then there is another thing to watch out for – make sure you use the exact same account as you did the first time. This one is very important! I have made this mistake as well. It will mess up your MC IDs and you will have your exams and certifications split up over different accounts (and you can only link one account to your employer). If you end up there anyway, then go to this page, choose Microsoft Certification and choose to ask a question. Register a new question, e.g. "Help with merging MC ID accounts", and you will get great help.

Don't wait, wait and wait

Don't wait until your feeling is "now I'm ready to take this exam". Just schedule an exam, set a date in the future, and make sure you have some time to study and prepare for it.

My experience from taking exams

In February I passed the PL-200 exam. The last time I took an exam was in 2017, when I took MB2-715, MB2-716 and MB2-717. All three have been retired now. MB2-715 and MB2-717 were both retired on the 30th of June 2019. MB2-715 was replaced by MB-200 and MB2-717 was replaced by MB-210. Now PL-200 is the replacement for MB-200. MB2-716 was retired on the 31st of January 2021; however, it actually still counts in the Microsoft Partner Network (MPN) program. So what about my even earlier experience of taking exams? Before these I also took one for CRM 2011: Microsoft Dynamics CRM 2011 Customization and Configuration, and three for CRM 4.0: Microsoft Dynamics CRM 4.0 Applications, Microsoft Dynamics CRM 4.0 Customization and Configuration and Microsoft Dynamics CRM 4.0 Installation and Deployment. Believe it or not, I also took Installation and Configuration in Microsoft Dynamics AX 2009. The very first exam I took was Microsoft SQL Server 2008 Implementation and Maintenance. That is the entire history of my exam taking. Time flies by so fast!
For 4.0 I studied by reading books and playing around with on-premise installations; I did not have much experience with the platform other than working a bit with version 3.0 and doing an upgrade to version 4.0 at the company where I worked at the time. When I took the exam for 2011 it was easier because I had gained some experience. The platform has evolved though, and now parts of Power Platform are also included, so there are many new things to learn! You really need to have a mindset of "learning never stops" – that goes both for taking exams and especially for working with this platform! In the next blog post, I'll tell you all about my experience of preparing for and taking the PL-200 exam.
Transcript from the "Securing the pubsub() Function" Lesson >> Douglas Crockford: The simplest attack to prevent people from getting publications, >> Douglas Crockford: Is this one where we simply subscribe with nothing. So that means we're gonna push undefined onto the subscribers array. So when we go to do this loop, kaboom, right? And so everybody after us doesn't get any messages sent. [00:00:26] So how would you fix that? >> Speaker 2: That's not the problem though, because the people before you still get it, so- >> Douglas Crockford: Right, but the people after don't, and so- >> Speaker 2: The problem is we try to prevent anyone- >> Douglas Crockford: We have to make sure that everybody receives every message. [00:00:43] If one person gets it we can't say that we've succeeded; everybody has to get it. >> Speaker 3: You can just do a type check for your tag on me, right? >> Douglas Crockford: We could do that. Up here, we could do a type check, you know, if typeof equals function. >> Speaker 4: You should be wrapping your call in a try catch. >> Douglas Crockford: Yeah, that's it, because if they were to pass any function that throws then it wouldn't do that, so- >> Speaker 5: So pubsub is something we give out to all the- >> Douglas Crockford: We give to all the third parties, they all share that instance. >> Speaker 5: So really in this case it's kind of don't trust user input. >> Speaker 5: They're supposed to send you a function as the subscriber, but they can send an empty string. >> Douglas Crockford: Right, well, in this case it could be a malicious failure but it could also be an accidental failure. [00:01:30] Someone might simply call it incorrectly and we don't want the whole system to fail because of that. So this is what try catch is for, right? So we can simply catch the thing, ignore the error and keep on processing. So, that's good. So we're now ready for your second observation. >> Speaker 4: Set subscribe to null.
>> Douglas Crockford: Yeah, so we can tamper with the pubsub instance itself and we can delete the properties. Change either of them to undefined or null, or replace them with other functions that could do more insidious things, like allow only certain people to subscribe, or allow people to think they subscribed when they didn't. [00:02:16] Or to filter the messages when they publish, or to tamper with the messages when they publish; there's an infinite number of terrible variations on this theme. So how would you fix that? >> Speaker 6: Use a server. >> Douglas Crockford: I'm sorry? >> Speaker 4: So you aren't gonna really expose publish so much as, >> Speaker 4: You're gonna return a function. Returns a function? >> Speaker 4: I don't know. It's something like a getter and a setter. You don't wanna let them set it if they can only get it. >> Douglas Crockford: You could, but there's an easier thing to do than that. >> Speaker 7: Freeze the object here? >> Douglas Crockford: Yeah, you wanna freeze it. So if we freeze the object then all of those attacks are completely frustrated. >> Douglas Crockford: Yeah, freeze was added in ES5. >> Speaker 4: Okay. >> Douglas Crockford: So it's in IE 9 and 10 and 11 and all the good browsers. [00:03:25] So we can freeze. So that completely solves this. And that's one of the reasons why I like freeze as an object construction pattern, because there's a whole lot of stuff we'll never have to worry about if the objects are frozen. And freezing this object does not impair its ability to do what it's supposed to do. [00:03:46] These methods still have access to the subscribers array and can still do all that dynamic stuff. It's just that nobody can tamper with the instance. >> Douglas Crockford: Okay, we're now ready for your next suggestion, your first one. >> Speaker 4: I forgot it already. >> Douglas Crockford: [LAUGH] >> Speaker 4: We're gonna pass in.
[00:04:07] We're gonna subscribe with a function that deletes, iterates through this and deletes everything. >> Douglas Crockford: Yeah, exactly. >> Speaker 4: Something like that. >> Douglas Crockford: Yeah, something like that. So that'd be something like this. So we're going to subscribe with a function which will get access to this. And then that gives us access to the subscribers array. [00:04:55] That will delete all of the subscribers, but we could do much more insidious things as well. Again, there's an infinite number of bad things you can do to this object if you get access to this. [00:05:21] So again, this is a confusion which leads to misunderstanding of what our programs do, which makes it possible for bugs and security exploits to happen. >> Speaker 4: Inside this function that you've just written, this is the pubsub object, right, with its var subscribers, that's its scope, right? >> Douglas Crockford: No, in this case, this is the subscribers array. [00:05:51] Because the function is being called as a method. Okay, so here's your function, you pass it in to subscribe. So it gets stored in the subscribers array. And in this loop, it now gets called, but it's being called as a method. There are four ways to call a function in this language. [00:06:13] I think there should only be one, but there are four. And at least one of the forms is confusing as to which kind of invocation it is. >> Douglas Crockford: So how would you fix that? >> Speaker 7: My brain says call, no? >> Douglas Crockford: I'm sorry? >> Speaker 7: My brain says using call. >> Douglas Crockford: That would be one way to do it, if we say subscribers[i] dot call and then pass in that. That would be one way to do it. >> Douglas Crockford: The way I would prefer to do it, or the other thing you could do, is assign subscribers[i] to a local variable, and then call that variable. [00:07:00] Or, what we could do is use forEach. I'm now distrustful of for loops in general and I like forEach much better.
So I can pass to forEach a function which will call each element of the array and ignore any exceptions, and I really like this. I think this is very, very nice. [00:07:24] So I'm not using for loops anymore. I'm doing this stuff instead, and because it's passing in each individual element, there's no confusion about how this gets called. It's never gonna be a method invocation. In fact, this function doesn't even see the array. All it sees are the individual elements. >> Speaker 4: Remind us how to escape forEach. >> Douglas Crockford: I'm sorry? >> Speaker 4: Remind us, how do you break out of a forEach? >> Douglas Crockford: You use every instead of forEach. And we would design it so that this function would always return true. >> Speaker 4: If it returns false then it leaves. >> Douglas Crockford: Yeah, false is the exit signal. >> Speaker 8: On the previous slide, somebody wants you to clarify why this refers to the array. >> Douglas Crockford: Right, so this is the function that is being executed here, okay? And this form of function invocation is the method form. The method form will have a dot in it or a bracket in it. [00:08:32] And so, everything to the left of the last dot or last bracket gets bound to this, and that's why that happens.
package retry

import (
	"io"
	"math"
	"net/http"
	"net/http/httptrace"
	"time"

	"github.com/cenkalti/backoff/v4"

	"github.com/crochee/proxy-go/config/dynamic"
	"github.com/crochee/proxy-go/internal"
	"github.com/crochee/proxy-go/pkg/middleware"
)

// nexter returns the duration to wait before retrying the operation.
type nexter interface {
	NextBackOff() time.Duration
}

// New creates a middleware that retries requests.
func New(rt dynamic.Retry) *retry {
	return &retry{
		initialInterval: rt.InitialInterval,
		attempts:        rt.Attempts,
	}
}

// retry is a middleware that retries requests.
type retry struct {
	next            middleware.Handler
	initialInterval time.Duration
	attempts        int
}

func (r *retry) Name() string {
	return "RETRY"
}

func (r *retry) Level() int {
	return 1
}

func (r *retry) Next(handler middleware.Handler) middleware.Handler {
	r.next = handler
	return r
}

func (r *retry) ServeHTTP(rw http.ResponseWriter, req *http.Request) {
	if r.attempts > 1 {
		// When a send fails, the body is usually closed and an error returned,
		// which would corrupt the data on retry; wrap it so retries can re-read it.
		body := req.Body
		defer internal.Close(body)
		req.Body = io.NopCloser(body)
	}
	var attempts int
	backOff := r.newBackOff() // backoff algorithm: intervals grow exponentially
	currentInterval := 0 * time.Millisecond
	t := time.NewTimer(currentInterval)
	for {
		select {
		case <-t.C:
			shouldRetry := attempts < r.attempts
			retryResponseWriter := newResponseWriter(rw, shouldRetry)
			// Disable retries when the backend already received request data
			trace := &httptrace.ClientTrace{
				WroteRequest: func(httptrace.WroteRequestInfo) {
					retryResponseWriter.DisableRetries()
				},
			}
			newCtx := httptrace.WithClientTrace(req.Context(), trace)
			r.next.ServeHTTP(retryResponseWriter, req.WithContext(newCtx))
			if !retryResponseWriter.ShouldRetry() {
				t.Stop()
				return
			}
			// compute the next interval
			currentInterval = backOff.NextBackOff()
			attempts++
			// reset the timer
			t.Reset(currentInterval)
		case <-req.Context().Done():
			t.Stop()
			return
		}
	}
}

func (r *retry) newBackOff() nexter {
	if r.attempts < 2 || r.initialInterval <= 0 {
		return &backoff.ZeroBackOff{}
	}
	b := backoff.NewExponentialBackOff()
	b.InitialInterval = r.initialInterval
	// calculate the multiplier for the given number of attempts so that
	// applying it for the given number of attempts will not exceed 2 times
	// the initial interval; it allows control of the progression along the attempts
	b.Multiplier = math.Pow(2, 1/float64(r.attempts-1))
	// according to the docs, b.Reset() must be called before use
	b.Reset()
	return b
}
Issues with sending the license file automatically - In the last weeks the license file and e-mail could not be sent automatically to customers. This issue should be fixed now. Independently of this issue, in rare cases the license file and e-mail cannot be sent automatically; such cases will now be tracked.

Release 22.214.171.124 is now available

Forum, support, SSL security - Sites updated for SSL certificate reasons. The forum is shown in basic form because of SSL complications. In the future, the forum itself will probably be closed, due to the low activity and the countless spammers. If you tried to register for the forum, I would have had to confirm your request, but due to the spammers I could not recognize valid registrations. The best way to contact me for any reason is the support e-mail. I will compensate for the forum with FAQ entries.

XMLSeed XML Suite

XMLSeed is primarily intended for creating XML schemas. This is supported by the generation of graphs with UML edges, by generating schema documentation, by validating the schema, by validating XML files against the schema, and not least by the Tree Editor GUI and the Graph Editor GUI. The schema can be transformed into C++ and Java code with the help of the Code Generator, and into SQL (DDL) with the help of the SQL converter.
- low cost, powerful XML schema and XML editor (49 EUR)
- no risk with the 30-day fully functional trial version, which will be sent immediately after registration | Go to the quick registration
- continuous development: "A finished project is a dead project." - new features for a certain release (e.g.
professional, standard) are freely available to customers
- the powerful Code Generator can save a lot of money; the BRE (Build-Run-Environment) gives you a head start and prevents you from getting stuck in the build process
- no hidden code, no libraries; all the generated code is visible
- tree representation instead of a text representation
- the graph editor gives you a better overview
- four different graph editor representations: the Physical Editor, the Logical Editor, the Attribute-based Editor and the most abstract Element-based Editor
- the powerful Graph Generator with filter functions enables a simplified view of extremely complex XML schemas and helps you present or understand a complex, badly written or generated XML schema (e.g. generated from UML)
- converts the XML schema to SQL DDL (Data Definition Language), which enables database tools to generate a database
- unique Regular Expression Designer for value restrictions
- improve your XML schema with the XML Editor and XML Sample Generator
- runs on Mac, Windows and Linux, but can also run on any other OS with JRE support

What is an XML Schema?

An XML schema describes the categorization of an artifact. The description of a person has different attributes for the police than for an e-mail registration. A well-written schema should preferably not include any redundant data; therefore, the description in the e-mail registration does not include the police clearance certificate. In principle, an XML schema is a description optimized for a particular purpose. The following example shows the XML schema for the first XML example. The schema specifies that 'vehicles' is a container for 'car' elements. A car may have two attributes, name and color. The name can be any xs:string and the color can be either red, green or blue.
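The example schema itself is not reproduced on this page. Based on the description above ('vehicles' containing 'car' elements with name and color attributes), such a schema could look roughly like this sketch (declarations inferred from the text, not the actual XMLSeed example):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="vehicles">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="car" maxOccurs="unbounded">
          <xs:complexType>
            <!-- name: any string -->
            <xs:attribute name="name" type="xs:string"/>
            <!-- color: restricted to red, green or blue -->
            <xs:attribute name="color">
              <xs:simpleType>
                <xs:restriction base="xs:string">
                  <xs:enumeration value="red"/>
                  <xs:enumeration value="green"/>
                  <xs:enumeration value="blue"/>
                </xs:restriction>
              </xs:simpleType>
            </xs:attribute>
          </xs:complexType>
        </xs:element>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```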
Add support for the Eiffel Tower underwater dataset Hi @ebrach, We recently published a dataset for long-term visual localization in underwater scenarios called Eiffel Tower. The dataset embeds four different visits of the same hydrothermal vent between 2015 and 2020. It is available at https://www.seanoe.org/data/00810/92226/. The associated paper describing the dataset will be published shortly. To put it simply, the dataset was created using the following pipeline: Models are first built independently for each year with the help of the vehicle's navigation data. They are then registered in a common reference frame using TEASER++ (https://github.com/MIT-SPARK/TEASER-plusplus) and ICP. A global model embedding images of all visits is then built with the help of the individual models that are now in the same reference frame. The whole code was tested on Ubuntu 20.04. We created a quick visualization tool (https://github.com/clementinboittiaux/dsacstar/blob/visualization/datasets/visualization.ipynb) to check that the inputs created by our script are consistent with the ones created by the Cambridge Landmarks script. Input camera poses and their observed 3D coordinates are in red. Cambridge Landmarks - King's College: https://user-images.githubusercontent.com/74063264/228846112-228c1fe2-2818-46ac-8aec-a68baa570bdf.mp4 Eiffel Tower: https://user-images.githubusercontent.com/74063264/228847811-37a3588e-801b-4f45-ac80-a9a00d63f7fb.mp4 Feel free to ask if you have any questions. Hi Clémentin, sounds like a great effort! Would it be possible to read a pre-print of the associated paper, or ping me again if it is publicly available? Best, Eric Hi @ebrach, The data paper is now available on arXiv: https://arxiv.org/abs/2305.05301 Don't hesitate if you have any questions! Best, Clémentin Hi @ebrach, The data paper has been published on the IJRR website: https://doi.org/10.1177/02783649231177322 Do you think it would be ok to merge?
Best, Clémentin Hi Clémentin, congratulations on the journal paper! As said before, the dataset looks very interesting. In terms of the DSAC* repository, I do not think it makes sense to merge support for the dataset in. Rather, your dataset should live in its own repository and provide support scripts for various other algorithms, potentially including DSAC* (although your paper does not report DSAC* results as far as I can see, nor does it cite the DSAC* paper). The purpose of the DSAC* repository is to ensure reproducibility of our published results. We did not provide results for your dataset in our publication, and we do not have the capacity to provide "official" results in an addendum. Best, Eric Hi Eric, Thanks! I completely understand, I simply misinterpreted the repository's objective - sorry about that. It will remain on my fork then! Best, Clémentin
There are many reasons someone might be unable to write (blogs, reports, papers, etc.) even when they feel direct motivation to do so. For some, anxiety produces doubts about the writing process, which triggers inaction to avoid the stress of writing. Others can struggle with the executive dysfunction that comes with ADD/ADHD, stress, depression, and other mental health concerns. I'm not really able to speak further on why your brain just won't let you do stuff, but I can share the coping mechanisms I found that help me write.

1. Do not strictly hold yourself to content-based goals

It's very intimidating to say "I need to write this paper today" or "I need to upload content today". Now, if you don't manage to finish the paper or post, you've automatically failed. There may be many reasons why you produce more or less content on a given day, so "I will do X today" may be an unrealistic goal to hold yourself to. I recommend 20 minutes instead. Committing 20 minutes to a task is very reasonable, and even my anxiety sort of understands that this is a reasonable ask, so I break the work down into something I can handle.

2. Create and save many drafts

When wanting to create something, set out to create a few different drafts. For example, if you're trying to write a 2-5 page paper for an assignment, start by writing the first few paragraphs of a mandatory section (like the literature review or background section). Write when you have an idea, without judgment, and save it away for later. At a time when you need to write, go back and either use your original writing or write a reaction to it, as if you were responding to yourself. I wrote this post like that: I wrote about 40% of the content and got distracted. So I saved the draft, went on with whatever had distracted me, and came back to the unfinished post later. Leaving posts behind means that you will need to come back to them.
This is one of the steps many people forget, leading to half-done projects, posts, and articles. When I know I want to commit to writing anything, I first look back on what is half done.

3. Let short sessions stack up

Setting aside 20 minutes may not seem like a real commitment that will stack up to a meaningful contribution; however, 3 instances add up to an hour. When we are working at our best, our work is higher quality and more meaningful. 60 minutes of quality work frequently out-does 4 hours of dragging along full of anxiety, only to give myself negative self-talk.

4. Write when you want to write, and don't when you don't

Forcing writing is one of the best ways to end up hating it. That's why doing short amounts of writing will, in the long run, be less stressful than forcing long writing sessions. Instead of thinking about publishing by a date, I create when I want to - and if I do finish the project, I sit on it. Currently, there are future posts for the blog scheduled to come out, so that my work produces a regular stream of content even if I do not. Instead of being frustrated when I have motivation or energy but nothing to do, I use that time differently and am easier on myself when I don't want to do anything.

5. For things like lectures, articles, and blog posts, I utilize my high-energy and high-motivation times and "bank" the output for later. Using this blog as an example, I wrote this post at three different times when I felt motivated and attentive to the task, and then I scheduled the post for later. Here's an example, and a little preview of some upcoming posts. I hope these strategies help someone else accomplish their writing goals with a little bit less stress <3
cpp inheritance questions

What other reason, apart from inheritance, would a class have for making its functions virtual? What happens at run time when a base class is inherited, the derived class doesn't implement a few of the base class functions, and a third class calls those methods, which are declared virtual in the base? A seg fault, or will it call the base class function? What should I do if I don't want to define all the functions of my base class in my derived class but still have the necessary inheritance in place?

What other reason, apart from inheritance, would a class have for making its functions virtual? There is no reasonable use for a virtual function if you are not dealing with inheritance. The two are meant for each other.

What happens at run time when a base class is inherited, the derived class doesn't implement a few of the base class functions, and a third class calls those methods? If the derived class doesn't declare the virtual function at all in its body, then the (immediate) base class's virtual function is called on a derived class object. On the other hand, if you simply declare the virtual function in the derived class but do not define it, then it's a linker error. No segmentation fault.

What should I do if I don't want to define all the functions of my base class in my derived class? Though this is unclear, I would say you simply don't declare/define the virtual functions you don't want in the derived class. It will use the base class's virtual functions.

With regards to the last question you answered: if they are functions only to be executed internally, then making them private will stop them from being inherited, right? @Flyphe, no. An access specifier cannot forbid a function from being inherited (i.e. overridden). You can still override it.
And internally, if the function is called (on a derived class object), it can still resolve to the overridden function. See the demo. So private variables and functions CAN be inherited? Then what's the point of having protected? @Flyphe: You are confused, I am afraid. A Derived class IS a Base, and adds some other things; therefore it has all the properties of a Base, comprising attributes and methods. However, if the method is private, then it cannot be called from a Derived method. The virtual-ness simply adds a quirk here, so that a Derived method (foo) can actually override a private Base method (foo). It cannot call the private Base method (foo) itself, just override it, but because it overrides it, any Base method that calls it (foo) gets the new Derived behavior. Ohhh ok. Yeah, I was getting a little confused. Thanks for clearing that up. If you do not reimplement a virtual method, a caller will call the base class one. This is sort of the point of using inheritance, IMO. If you do not want a base class to implement a virtual method, you can declare it like this: class Demo { virtual void foo() = 0; }; This is what is called an abstract class. Note that you cannot create an instance of such a class. Any class which inherits from Demo must implement foo(), or it will also be an abstract class, and as such not instantiable.
How would you implement a lookat + up-vector / pole target in geometry nodes I've seen a few implementations of a geometry nodes "lookat" setup, where your object is facing an empty. The easiest setup being like this: However when both object (source object and lookat target) are moving around, there are always moments where the rotation is suddenly mixed up. Does anyone know how to add an up-vector or some kind of pole target (like in IK-rigging) or is this a limitation of the geometry nodes itself / do we need to revert to python scripting / matrix manipulations for this? First, GN is probably not the best way to do this specific problem. An armature modifier (or object constraints, if you don't care about deformation vs transformation) is going to be faster and easier. However, it is possible in GN. I don't think Align Euler to Vector is going to work, because that only aligns object space basis vectors, and for what you want, we need serial rotations-- rotations that align already modified vectors to vectors, not basis vectors. A track-to is basically two different rotations. We first rotate in (object) Z to the target, as best we can; we then rotate in X to the target, at which point we reach the target. We'll start with rotating in object Z to point at the target, so that we can then node group and re-use for our rotation in X: Our constant -YVec is the vector we want to point at the target. We get the vector to our target and discard the Z component, since we can't rotate that way. We get the arccos of the normalized dot product to get the angle, and we rotate about cross product of our vectors (which creates a vector perpendicular to both vectors) to figure out our axis. We rotate our individual vertices and write our new position. We also perform this same rotation on the vector that represents our tracking vector, the -Y vector, so that we can use this as our new tracking vector for the next rotation. 
Let's group it up and add another copy: Now we can use our new vector-to-align as the input for a new copy. We're not discarding any target XYZ components on this one, making it equivalent to a damped track/swing, but because we've already rotated in Z to the target, that's the same thing as a locked track, the same as rotation in only X. Okay, so what about a pole angle? A pole angle is similar, in that it is two sequential rotations: first we track the target, then we rotate about our modified tracking axis to a new target. This is actually a little more complicated-- some tricks we could use before to do our earlier locked track, like just discarding vector components, don't work once we're no longer measuring object basis vectors: So for our inputs, we have the relative locations of two empties, we have a track-to vector, and we also have a pole vector-- the vector that should point to our pole. We've slightly edited our node group so that in addition to outputting our modified track-to vector, it also outputs a modified extra vector-- in this case, our modified pole vector. We need to measure the angle between our pole target and our modified pole vector, but the vec to the pole target isn't necessarily going to be in the new XZ plane in which we want to rotate, so to get the target vector into the proper plane, we do a pair of cross products. We can use the arccos of the dot product of normalized vectors to get the angle between them (note that cross products are not necessarily normalized vectors!). This angle isn't directional, however, so we need to look at which direction our rotation axis is pointing to decide whether to rotate by arccos(dot) or by negative arccos(dot). I'm using a mixRGB as a simple switch here. Finally, we'll rotate about our modified tracking vector as an axis. I wouldn't be surprised if there were some optimizations to be found here. This is merely me working out the problem, testing it, fixing bugs as I go.
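Since the question mentions falling back to Python scripting and matrix manipulation, here is a standalone NumPy sketch (my own function name, not a node setup) of the same idea: align the object's -Y axis to the target while using an up vector to fix the remaining roll, via the same cross products used in the node tree.

```python
import numpy as np

def look_at_minus_y(obj_pos, target_pos, up=np.array([0.0, 0.0, 1.0])):
    """Rotation matrix whose -Y column points from obj_pos to target_pos,
    with the roll resolved toward `up` (assumes target is not along `up`)."""
    fwd = target_pos - obj_pos
    fwd = fwd / np.linalg.norm(fwd)   # world direction the -Y axis must point
    y = -fwd                          # where the object's +Y axis ends up
    x = np.cross(y, up)               # object's +X: perpendicular to Y and up
    x = x / np.linalg.norm(x)
    z = np.cross(x, y)                # object's +Z: completes right-handed frame
    # Columns are the world-space images of the object's X, Y, Z axes.
    return np.column_stack([x, y, z])

R = look_at_minus_y(np.zeros(3), np.array([3.0, 0.0, 4.0]))
# The rotated -Y axis now points along the (normalized) direction to the target:
print(np.allclose(R @ np.array([0.0, -1.0, 0.0]), [0.6, 0.0, 0.8]))  # True
```

By construction R @ (0, -1, 0) is the normalized target direction, and the two cross products guarantee an orthonormal, right-handed (determinant +1) frame, which is the same roll-fixing role the pole vector plays in the node setup.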
Online communication, collaboration, and file sharing play a crucial role in people's daily lives. Centralized networks face many difficulties, including data breaches, censorship, and privacy violations. Therefore, decentralized networks are becoming more and more popular around the world. According to statistics, global decentralized social network sales revenue reached more than US$12 million in 2023, and it is projected to increase more than 8 times by 2033. Another example is BitTorrent, a well-known decentralized data storage network that accounts for less than 3% of all user traffic globally. Decentralized networks indeed offer an innovative approach to privacy and security in the digital realm. In this article, we will learn more about privacy-preserving technologies within these networks, highlighting the advancements and challenges that define the current landscape.

Introduction to Decentralized Networks and Privacy

Decentralized networks represent a paradigm shift from traditional centralized systems, where a single entity holds authority and control over the entire network. The centralized model has its limitations: no matter how tight the security measures are, there is no 100% foolproof method of protecting that single entity, which raises concerns about data accessibility, transparency, and control. In addition, hackers can more easily target a single point of failure when trying to access large amounts of data, because everything is in one place. In contrast, decentralized networks distribute control across multiple nodes, making them inherently more resistant to censorship, outages, and attacks. This architectural difference lays the groundwork for enhanced privacy and security features and allows large amounts of data to be stored without a central server or provider, helping to eliminate potential censorship and privacy-invasion issues.
The Significance of Privacy-Preserving Technologies

In an era where data breaches and privacy invasions are all too common, the demand for technologies that safeguard user privacy has skyrocketed. Decentralized networks aim to improve on traditional centralized networks by providing more transparency and accessibility. However, these benefits often come with drawbacks, such as reduced privacy, because a user's information is visible to all participants in the decentralized network. For this reason, privacy-preserving technologies are necessary. Indeed, privacy-preserving technologies in decentralized networks are not just tools; they are the backbone of a movement toward a more secure and private online world. These technologies leverage cryptographic methods such as end-to-end encryption (E2EE), homomorphic encryption, ring signatures, secure multi-party computation (sMPC), differential privacy, and zero-knowledge proofs (ZKPs) to ensure that users' data remains confidential and secure from unauthorized access.

Encryption: The First Line of Defense

Encryption acts as the cornerstone of privacy-preserving technologies. By converting data into a coded format that is unreadable without the correct decryption key, encryption ensures that sensitive information remains secure in transit and at rest. For instance, end-to-end encryption (E2EE) is a secure communication process that protects data transferred from one endpoint to another and prevents third parties from accessing the communication. Even if the servers get hacked, you are still safe, because your messages and other information cannot be read without the correct encryption keys. Another method, homomorphic encryption, produces a single set of encrypted data and provides the user with a single key for decryption.
This form of encryption allows users to perform mathematical operations on encrypted data without revealing the data itself. Recent statistics indicate that encryption adoption has seen a significant uptick, with over 80% of web traffic now encrypted, compared to just 50% five years ago. This surge underscores the critical role of encryption in protecting data privacy in decentralized networks.

Zero-Knowledge Proofs: Enhancing Privacy Without Compromise

Zero-knowledge proofs (ZKPs) offer a revolutionary way to verify transactions or data without revealing any underlying information. This cryptographic method allows for the validation of data accuracy without exposing the data itself, providing a powerful tool for privacy in decentralized networks. There are several types of ZKPs, including interactive proofs, non-interactive proofs, succinct non-interactive arguments of knowledge (SNARKs), proofs of knowledge, and scalable transparent arguments of knowledge (STARKs). The most basic type is the interactive proof, in which the prover and verifier exchange messages until the verifier is convinced of the prover's knowledge. The adoption of ZKPs is on the rise, with several blockchain projects integrating this technology to enhance user privacy and security.

Decentralized VPNs: A Nod to Enhanced Privacy

Within the realm of decentralized networks, decentralized Virtual Private Networks (dVPNs) have gained attention as a means of enhancing online privacy and security. Solutions like PortalsVPN and Orchid offer decentralized alternatives to traditional VPN services, leveraging blockchain technology to provide secure and private internet access. A decentralized VPN has no single entity maintaining and controlling the servers; instead of a single service provider in charge, volunteers operate their own nodes in the network.
In other words, any user of a decentralized VPN can become a service provider by operating their own node in the network. A decentralized VPN encrypts your internet traffic and mixes it with the encrypted traffic of other volunteers, making your traffic more difficult for third parties to trace. Moreover, dVPNs often do not store user data, and without a central service provider it is complicated for government authorities to censor them. For this reason, decentralized VPNs can be used in regions with restricted internet access. Users of decentralized VPNs can also earn rewards for operating their nodes, and this reward system motivates users to strengthen the network.

Other Tools and Technologies

Beyond dVPNs, ZKPs, E2EE, and homomorphic encryption, the landscape of privacy-preserving technologies in decentralized networks is rich and diverse. Technologies such as secure multi-party computation (sMPC) and ring signatures are pushing the boundaries of what's possible, enabling secure data processing and analysis without exposing the actual data. Other privacy-preserving technologies are presented in the table below.

| Other Privacy-Preserving Technologies | Description |
| --- | --- |
| Secure multi-party computation (sMPC) | A protocol that distributes a computation across multiple parties so that no party can see the other parties' data. |
| Ring signatures | A digital signature that allows any member of a group to sign a message anonymously. |
| Differential privacy | The addition of calibrated noise to the output of a function. |
| Privacy-preserving authentication | Allows a user to authenticate without revealing unnecessary personal information. |
| Federated learning | A model is trained locally on each device, and only the model, not the data, is shared. |
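The interactive proofs described above can be illustrated with a toy Schnorr protocol, a classic interactive proof of knowledge of a discrete logarithm. The parameters here are deliberately tiny and insecure; this sketches the commit/challenge/response idea only:

```python
import random

# Toy interactive Schnorr protocol: the prover convinces the verifier that it
# knows x with y = g^x mod p, without revealing x.
p, q, g = 1019, 509, 4        # p = 2q + 1; g generates the order-q subgroup

x = random.randrange(1, q)    # prover's secret
y = pow(g, x, p)              # public value, known to the verifier

def prove_once():
    # One round: prover commits, verifier challenges, prover responds.
    r = random.randrange(1, q)
    t = pow(g, r, p)          # prover's commitment g^r
    c = random.randrange(q)   # verifier's random challenge
    s = (r + c * x) % q       # prover's response
    # Verifier checks g^s == t * y^c (mod p); learns nothing about x.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

# Repeating the round drives a cheater's success probability down exponentially.
accepted = all(prove_once() for _ in range(20))
```

Non-interactive variants (SNARKs, STARKs) replace the verifier's live challenge with a value derived from the commitment itself.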
It is assumed that privacy-preserving technologies will become increasingly popular in the future and that their number will grow.

The Challenges Ahead

Despite the promise of privacy-preserving technologies, several challenges remain in decentralized networks. First, decentralized networks face scalability issues and struggle to operate under increased workloads. Secondly, many decentralized networks lack the seamless integration that is common in centralized networks. Thirdly, for beginners with no experience with blockchain concepts, decentralized networks can be complex. The ongoing battle against regulatory and legal hurdles is another obstacle to the use and implementation of decentralized networks, and the operational and maintenance cost of decentralized networks is higher than that of centralized networks. Moreover, security is one of the most important issues in the development of decentralized networks, especially for small networks, which are prime targets for cyber-attacks.

Privacy-preserving technologies in decentralized networks represent a significant step forward in the fight for digital privacy and security. From encryption and zero-knowledge proofs to decentralized VPNs and beyond, these technologies offer a glimpse into a future where privacy is not just a possibility but a reality. As we continue to witness the evolution of these technologies, their role in shaping a more secure and private digital world cannot be overstated. The journey ahead is complex and fraught with challenges, but the foundation laid by these technologies offers hope for a more private and secure digital landscape.

- Decentralized networks offer a foundational shift away from centralized control, enhancing privacy and security through distributed architecture.
- Encryption remains a critical tool for safeguarding data, with its adoption rates serving as a testament to its effectiveness in preserving privacy.
- Zero-knowledge proofs (ZKPs) revolutionize data verification, enabling the confirmation of data accuracy without compromising privacy by revealing the data itself.
- Decentralized VPNs, such as PortalsVPN and Orchid, represent innovative approaches within the ecosystem, providing privacy solutions that leverage blockchain technology for secure internet access.
- Emerging technologies like secure multi-party computation (sMPC) and homomorphic encryption are pushing the boundaries of privacy, allowing for secure data processing and analysis without exposing sensitive information.
- Despite their potential, privacy-preserving technologies face challenges related to scalability, implementation complexity, and regulatory hurdles that must be addressed to realize their full potential.
- The ongoing development of privacy-preserving technologies signifies a concerted effort toward establishing a digital environment where user privacy and security are paramount.
- Community and developer engagement is crucial for the evolution and adoption of these technologies, as collaborative efforts can lead to innovative solutions and standards for privacy preservation.
- Regulatory understanding and support are essential to navigate the complex legal landscape, ensuring that privacy-preserving technologies can thrive without unintended legal challenges.
- User education and awareness are key to fostering an environment where individuals understand the importance of privacy and the tools available to protect it, empowering them to make informed decisions about their digital lives.
These takeaways highlight the multi-faceted approach required to advance privacy-preserving technologies in decentralized networks, emphasizing the importance of innovation, collaboration, and education in overcoming challenges and shaping the future of digital privacy.
//
//  FeatureConstraintsBuilderTests.swift
//  FlintCore
//
//  Created by Marc Palmer on 02/05/2018.
//  Copyright © 2018 Montana Floss Co. Ltd. All rights reserved.
//

import Foundation
import XCTest
@testable import FlintCore

/// These tests attempt to unit test the DefaultAvailabilityChecker.
///
/// To do this we don't want to bootstrap Flint itself, so we have to manually set up the environment
/// and evaluate the constraints on our test features.
class FeatureConstraintsBuilderTests: XCTestCase {

    var checker: AvailabilityChecker!

    // MARK: Helpers

    func evaluate(constraints: (FeatureConstraintsBuilder) -> Void) -> DeclaredFeatureConstraints {
        let builder = DefaultFeatureConstraintsBuilder()
        return builder.build(constraints)
    }

    func _assertContains(_ constraints: DeclaredFeatureConstraints, _ id: Platform, _ version: PlatformVersionConstraint) {
        XCTAssertTrue(constraints.allDeclaredPlatforms[id] == PlatformConstraint(platform: id, version: version),
                      "Expected to find \(id) with \(version) but didn't")
    }

    // MARK: Tests

    func testPlatformVersionsAdditive() {
        let constraints = evaluate { builder in
            builder.platform(.init(platform: .iOS, version: .any))
            builder.platform(.init(platform: .macOS, version: "10.13"))
            builder.platform(.init(platform: .tvOS, version: 11))
            builder.platform(.init(platform: .watchOS, version: .any))
        }

        _assertContains(constraints, .iOS, .any)
        _assertContains(constraints, .macOS, .atLeast(version: OperatingSystemVersion(majorVersion: 10, minorVersion: 13, patchVersion: 0)))
        _assertContains(constraints, .tvOS, .atLeast(version: 11))
        _assertContains(constraints, .watchOS, .any)
    }

    func testPlatformVersionAssignment() {
        let constraints = evaluate { builder in
            builder.iOS = 11
            builder.macOS = "10.13"
            builder.tvOS = 10
            builder.watchOS = .any
        }

        _assertContains(constraints, .iOS, .atLeast(version: 11))
        _assertContains(constraints, .macOS, .atLeast(version: OperatingSystemVersion(majorVersion: 10, minorVersion: 13, patchVersion: 0)))
        _assertContains(constraints, .tvOS, .atLeast(version: 10))
        _assertContains(constraints, .watchOS, .any)
    }

    func testAnyPlatformVersionsAreDefault() {
        let constraints = evaluate { builder in
            // Nothing, default is specified as .any
        }

        _assertContains(constraints, .iOS, .any)
        _assertContains(constraints, .macOS, .any)
        _assertContains(constraints, .tvOS, .any)
        _assertContains(constraints, .watchOS, .any)
    }

    func testAtLeastIntPlatformVersionsProperties() {
        let constraints = evaluate { builder in
            builder.iOS = 10
            builder.macOS = 10
            builder.tvOS = 11
            builder.watchOS = 4
        }

        _assertContains(constraints, .iOS, .atLeast(version: 10))
        _assertContains(constraints, .macOS, .atLeast(version: 10))
        _assertContains(constraints, .tvOS, .atLeast(version: 11))
        _assertContains(constraints, .watchOS, .atLeast(version: 4))
    }

    func testAtLeastStringPlatformVersionsProperties() {
        let constraints = evaluate { builder in
            builder.iOS = "10.1"
            builder.macOS = "10.13"
            builder.tvOS = "11.2"
            builder.watchOS = "4.1"
        }

        _assertContains(constraints, .iOS, .atLeast(version: OperatingSystemVersion(majorVersion: 10, minorVersion: 1, patchVersion: 0)))
        _assertContains(constraints, .macOS, .atLeast(version: OperatingSystemVersion(majorVersion: 10, minorVersion: 13, patchVersion: 0)))
        _assertContains(constraints, .tvOS, .atLeast(version: OperatingSystemVersion(majorVersion: 11, minorVersion: 2, patchVersion: 0)))
        _assertContains(constraints, .watchOS, .atLeast(version: OperatingSystemVersion(majorVersion: 4, minorVersion: 1, patchVersion: 0)))
    }

    func testXXXOnly() {
        let constraintsiOS = evaluate { builder in
            builder.macOS = "10.13"
            builder.tvOS = "11.2"
            builder.watchOS = "4.1"
            builder.iOSOnly = 9
        }

        _assertContains(constraintsiOS, .iOS, .atLeast(version: OperatingSystemVersion(majorVersion: 9, minorVersion: 0, patchVersion: 0)))
        _assertContains(constraintsiOS, .macOS, .unsupported)
        _assertContains(constraintsiOS, .tvOS, .unsupported)
        _assertContains(constraintsiOS, .watchOS, .unsupported)

        let constraintsMacOS = evaluate { builder in
            builder.tvOS = "11.2"
            builder.watchOS = "4.1"
            builder.iOS = 9
            builder.macOSOnly = "10.13"
        }

        _assertContains(constraintsMacOS, .iOS, .unsupported)
        _assertContains(constraintsMacOS, .macOS, .atLeast(version: OperatingSystemVersion(majorVersion: 10, minorVersion: 13, patchVersion: 0)))
        _assertContains(constraintsMacOS, .tvOS, .unsupported)
        _assertContains(constraintsMacOS, .watchOS, .unsupported)

        let constraintsWatchOS = evaluate { builder in
            builder.tvOS = "11.2"
            builder.iOS = 9
            builder.macOS = "10.13"
            builder.watchOSOnly = "4.1"
        }

        _assertContains(constraintsWatchOS, .iOS, .unsupported)
        _assertContains(constraintsWatchOS, .macOS, .unsupported)
        _assertContains(constraintsWatchOS, .tvOS, .unsupported)
        _assertContains(constraintsWatchOS, .watchOS, .atLeast(version: OperatingSystemVersion(majorVersion: 4, minorVersion: 1, patchVersion: 0)))

        let constraintsTVOS = evaluate { builder in
            builder.iOS = 9
            builder.macOS = "10.13"
            builder.watchOS = "4.1"
            builder.tvOSOnly = "11.2"
        }

        _assertContains(constraintsTVOS, .iOS, .unsupported)
        _assertContains(constraintsTVOS, .macOS, .unsupported)
        _assertContains(constraintsTVOS, .tvOS, .atLeast(version: OperatingSystemVersion(majorVersion: 11, minorVersion: 2, patchVersion: 0)))
        _assertContains(constraintsTVOS, .watchOS, .unsupported)
    }
}
[dm-crypt] simple ideas addressing ssd TRIM security concern
arno at wagner.name
Sat Apr 14 06:15:11 CEST 2012

On Sat, Apr 14, 2012 at 02:23:23AM +0100, alban bernard wrote:
> I carefully read that page
> http://asalor.blogspot.fr/2011/08/trim-dm-crypt-problems.html to
> understand the basics behind the main security problem involved by trim
> commands. Simple ideas came to my mind, but I need to submit them to know
> how they fail (or by any chance how they may succeed).
> From what I understand, TRIM commands are used to say to the SSD
> controller: "these sectors are discarded, so you can erase them at any
> time chosen by you rather than waiting an explicit rewrite from me". So,
> from a crytographic point of view, using TRIM commands is like replacing
> deleted files by "zero" files in a totally uncontrolled manner. This
> breaks the main purpose of cryptography: hiding as much things as

Well, it does not quite "break" it. The correct terminology is that you have an information leak where filesystem-discarded blocks (by TRIM) can be identified by an attacker with low effort. Well, low "cryptanalytic effort", actually. For a "break", you would actually have to have real information end up on the raw block device, but in most situations, the information content of the TRIM information will be small.

There are scenarios though: Assume you have never TRIMed and a large file gets deleted. Then an attacker can determine the size of that file, but rounded up to the block size. For a file in the 1GB range that would mean (assuming 512B sectors, even though filesystem blocks are typically larger) that the total file size is 30 bits, of which the first 21 are leaking. While that is information, it is pretty fuzzy and requires pretty special conditions to be visible.

A second scenario would arise if some malicious software that can write only the encrypted device wishes to signal to the outside.
Then a "0" could be a low number of TRIMed blocks and a "1" could be a high number. Both can be achieved by writing a lot of data or deleting a lot of data. Repeat the action, with observer access to only the encrypted data, to transfer more bits. While this may be a concern, it is a pretty bizarre scenario. And the same can be achieved by changing sector contents, as the observer part of the attacker _can_ detect changes in the on-disk data. The design "error" is of course that the raw device is told about some things that should only be visible after decryption.

> After TRIM commands, the SSD controller erases blocks whenever he wants
> after receiving the command. Thus, it seems to not inform us back where
> those blocks are remapped in its LBA translation table (not sure about

It does not, but forensic analysis may be able to extract some sort of log or trace from the device.

> So, what about running TRIM commands only in certain cases: on-demand / by
> sectors / ... ? The overall purpose being:
> - to limit the TRIMed space on device
> - to control the TRIMed pattern (spread it randomly as much as possible)

The information leakage is very small. If it is a concern, then TRIM must never be used. If not, TRIM can be used in unlimited fashion. It is a standard security trade-off. Your options can in some circumstances decrease the information leakage, but they will not do so reliably (comments below), and hence do not reduce the worst case. But the risk analysis that forms the basis of the decision to allow TRIM or not has to use the worst case, as there is no basis for forming an average case and an attacker may be able to provoke the worst-case scenario.

> Here the naive things:
> - send on-demand TRIM commands based on device write access rate and
> remaining free space

Can leak information just as well, will be fuzzier in only some cases.
> - keep a table of TRIMed blocks or just their total size (send TRIM
> commands only below a certain size limit threshold)
> - send TRIM commands on randomly chosen deleted blocks only (not all
> deleted blocks)

Selecting them randomly would here mean "selecting them in a cryptographically secure random way". This violates KISS as it would be pretty complex and suddenly you have information to protect by hard crypto in the filesystem layer that is not designed for this. If the goal of an attacker is to recognize some specific action the attacker designed before (e.g. malware accessing the device in a pattern), this may still be visible.

> - write garbage to fill some TRIMed "blanks" (less than a threshold
> critical to ssd performance)

"Garbage" would again need to be "cryptographic garbage". I am not even sure this is possible. And you still can deduce that blocks are actually deleted, as they show up as deleted in the SSD's internal tables, so the effect is only higher attacker effort, but the leakage stays exactly the same.

> - randomize device usage pattern when choosing blocks to TRIM (hide

I don't quite understand how that could work, do you mean to have a random mapping between encrypted and physical sectors? That would kill performance a lot worse than not using TRIM in the first place.

> Let me know if it could lead to real life solution. Any criticism

Well, I think you do understand the problem, but not quite its implications. It really is a case of "secure, fast, cheap, pick any two". If security is your primary concern, then do not use TRIM. The SSD can still do garbage collection (well, garbage compaction really) but only with the spare capacity it reserves for that. Leads to a bit of performance decrease, depending on the SSD.

My advice would be to decide how important the potential data leakage is (depending on the application) and then go with or without TRIM entirely.
The default should be no TRIM though, as it is really a decision by the user to make the system less secure for a performance gain.

Arno Wagner, Dr. sc. techn., Dipl. Inform., CISSP -- Email: arno at wagner.name
GnuPG: ID: 1E25338F FP: 0C30 5782 9D93 F785 E79C 0296 797F 6B50 1E25 338F

One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision. -- Bertrand Russell
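The file-size estimate earlier in this thread (a file in the 1GB range, 512B sectors, the upper bits of the size leaking via TRIM) can be checked with a couple of lines of Python; the numbers are just the worked arithmetic behind that estimate:

```python
import math

sector = 512                 # bytes per sector, as assumed in the message
file_size = 1 << 30          # a file "in the 1GB range" (exactly 1 GiB here)

# Total size information: ~30 bits for a 1 GiB file.
size_bits = math.ceil(math.log2(file_size))

# TRIM reveals the number of discarded sectors: the top ~21 bits of the size.
sectors = -(-file_size // sector)            # ceiling division
leaked_bits = math.ceil(math.log2(sectors))

# The low log2(512) = 9 bits of the size stay hidden inside the last sector.
hidden_bits = size_bits - leaked_bits
```

So of the 30 bits describing the file size, the first 21 leak, matching the estimate in the message.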
I just added some templating/content management to MFlow that IMHO is more flexible and simple than conventional templating or content management approaches.

tFieldEd key html is a widget that displays the content of html as is. But if logged in as administrator, it permits editing this chunk of html in place, so the editor can see the real appearance of what he writes in the page while editing. When the administrator double clicks in the paragraph, the content is saved and identified by the key. From then on, users will see the content of the saved paragraph, not the original one in the code.

The content is saved in a file by default ("texts" in this version), but there is a configurable version (tFieldGen). The html content and its formatting are cached in memory, so the display is very fast. Once the content has been fixed and needs no further editing, just read and present the edited content; tFieldEd is no longer needed. There are also multilingual content management primitives mField and mFieldEd.

This demo shows in four steps how a typical process of content edition would work:

setHeader $ \html -> thehtml << body << html
let first= p << italics << (thespan << "this is a page with" +++ bold << " two " +++ thespan << "paragraphs")
    second= p << italics << "This is the original text. This is the second paragraph"
    pageEditable = (tFieldEd "first" first) **> (tFieldEd "second" second)
ask $ first ++> wlink () (p << "click here to edit it")
setAdminUser "admin" "admin"
ask $ p << "Please login with admin/admin to edit it" ++> userWidget (Just "admin") userLogin
ask $ p << "now you can click the field and edit them" ++> p << bold << "to save the edited field, double click on it" **> wlink () (p << "click here to see it as a normal user")
ask $ p << "the user sees the edited content. He can not edit it" **> wlink () (p << "click to continue")
ask $ p << "When text are fixed, the edit facility and the original texts can be removed. The content is indexed by the field key" ++> tField "first" **> tField "second" **> p << "End of edit field demo" ++> wlink () (p << "click here to go to menu")

This example is at https://github.com/agocorona/MFlow/blob/master/Demos/demos.hs
This example uses the last version of MFlow at https://github.com/agocorona/MFlow. It uses the last version of Workflow https://github.com/agocorona/Workflow
TCache : https://github.com/agocorona/TCache
As the use of AI becomes more and more pervasive in business, industries are discovering that they can use machine learning models to benefit from existing data to improve business outcomes. However, machine learning models have a distinct drawback: traditionally, they need huge amounts of data to make accurate predictions. That data often includes extensive personal and private information, the use of which is governed by modern data privacy regulations, such as the EU's General Data Protection Regulation (GDPR). GDPR sets a specific requirement called data minimization, which means that organizations can collect only data that is necessary.

It's not only data privacy regulations that must be considered when using AI in business: collecting personal data for machine learning analysis also represents a big risk when it comes to security and privacy. According to the Cost of a Data Breach Report for 2021, the average data breach costs over $4 million in total for the enterprise, with an average cost of $180 per compromised record.

Minimizing the data required

So how can you continue to benefit from the huge advantages of machine learning while reducing data privacy issues and security threats and adhering to regulations? Reducing the collected data holds the key, and you can use the minimization technology from IBM's open source AI Privacy toolkit to apply this approach to machine learning models. Perhaps the main problem you face when applying data minimization is determining exactly what data you actually need to carry out your task properly. It seems almost impossible to know that up front, and data scientists are often stuck making educated guesses as to what data they require. Given a trained machine learning model, IBM's toolkit can determine the precise set of features, and the level of detail for each feature, that is needed for the model to make accurate predictions on runtime data.

How it works

It can be difficult to determine the minimal amount of data you need, especially in complex machine learning models such as deep neural networks. We developed a first-of-a-kind method that reduces the amount of personal data needed to perform predictions with a machine learning model by removing or generalizing some of the input features of the runtime data. Our method uses the knowledge encoded within the model to produce a generalization that has little to no impact on its accuracy. We showed that, in some cases, you can collect less data while preserving the exact same level of model accuracy as before. But even when this is not the case, in order to adhere to the data minimization requirement, companies are still required to demonstrate that all data collected is needed by the model for accurate analysis.

This technology can be applied in a wide variety of industries that use personal data for predictions, but perhaps the most obvious domain is healthcare. One possible application for the AI minimization technology would be for medical data: for example, research scientists developing a model to predict whether a given patient is likely to develop melanoma, so that advance preventative measures and initial treatment efforts can be administered. To begin this process, the hospital system would typically initiate a study and enlist a cohort of patients who agree to have their medical data used for this research. Because the hospital is seeking to create the most accurate model possible, they would traditionally use all the available data when training the model to serve as a decision support system for its doctors. But they don't want to collect and store more sensitive medical, genetic, or demographic information than they really need. Using the minimization technology, the hospital can decide what percent reduction in accuracy they can sustain, which could be very small or even none at all. The toolkit can then automatically determine the range of data for each feature, and even show that some features aren't needed at all, while still maintaining the model's desired accuracy.

Researching data minimization

You can experiment with the initial proof-of-concept implementation of the data minimization principle for machine learning models that we recently published. We also published a "Data minimization for GDPR compliance in machine learning models" paper, where we presented some promising results on a few publicly available datasets. There are several possible directions for extensions and improvements. Our initial research focused on classification models, but as we deepen our study of this area, we plan to extend it to additional model types, such as regression. In addition, we plan to examine ways to combine this work with other methods from the domains of model testing, explainable AI (XAI), and interpretability.

Data minimization helps researchers adhere to data protection regulations, but it also serves to prevent unfair data collection practices, such as excessive collection or retention of data, and the personal risk to data subjects in case of a data breach. Generalizing the input data to models has the potential to help prevent prediction bias or other forms of discrimination, leading to more fairness-aware or discrimination-aware data mining practices.

Download the toolkit and try it for yourself.
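The feature-minimization idea described above can be sketched as a greedy elimination loop: starting from all features, drop each one in turn and keep the drop whenever accuracy stays within a chosen tolerance. This is an illustrative sketch, not the toolkit's actual API; the tiny dataset and the 1-nearest-neighbour "model" are invented for the example:

```python
# Greedy data-minimization sketch over a toy dataset.

def accuracy(features, train, test):
    # 1-nearest-neighbour classification using only the selected features.
    def dist(a, b):
        return sum((a[f] - b[f]) ** 2 for f in features)
    correct = 0
    for x, label in test:
        nearest = min(train, key=lambda t: dist(t[0], x))
        correct += nearest[1] == label
    return correct / len(test)

def minimize_features(all_features, train, test, tolerance=0.0):
    # Keep dropping features as long as accuracy stays within the tolerance.
    baseline = accuracy(all_features, train, test)
    kept = list(all_features)
    for f in all_features:
        trial = [g for g in kept if g != f]
        if trial and accuracy(trial, train, test) >= baseline - tolerance:
            kept = trial          # feature f was not needed for accuracy
    return kept

# Toy data: feature 'a' determines the label, feature 'b' is noise.
train = [({'a': 0, 'b': 5}, 0), ({'a': 0, 'b': 1}, 0),
         ({'a': 9, 'b': 4}, 1), ({'a': 9, 'b': 0}, 1)]
test  = [({'a': 1, 'b': 9}, 0), ({'a': 8, 'b': 2}, 1)]

kept = minimize_features(['a', 'b'], train, test)
```

On this toy data the loop discovers that only `'a'` needs to be collected; the real toolkit additionally generalizes the *range* of each kept feature rather than just keeping or dropping it.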
When we think about how we can move through the Universe, we immediately think of three different directions. Left-or-right, forwards-or-backwards, and upwards-or-downwards: the three independent directions of a Cartesian grid. All three of those count as dimensions, and specifically, as spatial dimensions. But we commonly talk about a fourth dimension of a very different type: time. But what makes time a dimension at all? That’s this week’s Ask Ethan question from Thomas Anderson, who wants to know: I have always been a little perplexed about the continuum of 3+1 dimensional Space-time. Why is it always 3 [spatial] dimensions plus Time? Let’s start by looking at the three dimensions of space you’re familiar with. Here on the surface of the Earth, we normally only need two coordinates to pinpoint our location: latitude and longitude, or where you are along the north-south and east-west axes of Earth. If you’re willing to go underground or above the Earth’s surface, you need a third coordinate — altitude/depth, or where you are along the up-down axis — to describe your location. After all, someone at your exact two-dimensional, latitude-and-longitude location but in a tunnel beneath your feet or in a helicopter overhead isn’t truly at the same location as you. It takes three independent pieces of information to describe your location in space. But spacetime is even more complicated than space, and it’s easy to see why. The chair you’re sitting in right now can have its location described by those three coordinates: x, y and z. But it’s also occupied by you right now, as opposed to an hour ago, yesterday or ten years from now. In order to describe an event, knowing where it occurs isn’t enough; you also need to know when, which means you need to know the time coordinate, t. This became a big deal for the first time in relativity, when we were thinking about the issue of simultaneity.
Start by thinking of two separate locations connected by a path, with two people walking from each location to the other one. You can visualize their paths by putting two fingers, one from each hand, at the two starting locations and “walking” them towards their destinations. At some point, they’re going to need to pass by one another, meaning your two fingers are going to have to be in the same spot at the same time. In relativity, this is what’s known as a simultaneous event, and it can only occur when all the space components and all the time components of two different physical objects align. This is supremely non-controversial, and explains why time needs to be considered as a dimension that we “move” through, the same as any of the spatial dimensions. But it was Einstein’s special theory of relativity that led his former professor, Hermann Minkowski, to devise a formulation that put the three space dimensions and the one time dimension together. We all realize that to move through space requires motion through time; if you’re here, now, you cannot be somewhere else now as well, you can only get there later. In 1905, Einstein’s special relativity taught us that the speed of light is a universal speed limit, and that as you approach it you experience the strange phenomena of time dilation and length contraction. But perhaps the biggest breakthrough came in 1907, when Minkowski realized that Einstein’s relativity had an extraordinary implication: mathematically, time behaves exactly the same as space does, except with a factor of c, the speed of light in vacuum, and a factor of i, the imaginary number √(-1).
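Minkowski's point can be checked numerically: under a Lorentz boost, the t and x coordinates of an event each change, but the combination (ct)² − x² does not, which is exactly the sense in which time enters as a dimension carrying a factor of c. A small sketch (the event coordinates and boost speed below are made-up values, not from the article):

```python
import math

c = 299_792_458.0          # speed of light in vacuum, m/s

def boost(t, x, v):
    """Lorentz-boost an event (t, x) into a frame moving at speed v along x."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    t_prime = gamma * (t - v * x / c**2)
    x_prime = gamma * (x - v * t)
    return t_prime, x_prime

def interval(t, x):
    """Spacetime interval s^2 = (ct)^2 - x^2; time enters with a factor of c."""
    return (c * t) ** 2 - x ** 2

t, x = 2.0, 1.0e8                  # an arbitrary event (seconds, meters)
t2, x2 = boost(t, x, 0.6 * c)      # the same event seen from a frame moving at 0.6c
print(math.isclose(interval(t, x), interval(t2, x2)))  # True: the interval is invariant
```

Both coordinates change under the boost, yet the interval agrees to floating-point precision, which is the geometric content of treating time as the fourth dimension.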
📰 Welcome to the Blockmason Weekly Update! Hello, Blockmason community! Another week is upon us, which means it’s time for another Weekly Update. In this edition of the Weekly Update, Link now supports ThunderCore and Microsoft Azure Blockchain, and Blockmason joins the Microsoft Partner Network. Are you ready? Let’s get started! Link ThunderCore Blockchain Compatibility Now Officially Supported We are excited to announce that Blockmason Link now has compatibility with the ThunderCore blockchain. ThunderCore is an Ethereum compatible, Smart Contract platform boasting 1,200+ transactions-per-second (TPS), quick block confirmations and low gas costs, making it quick and easy for DApps to deploy and scale. We are pleased to be able to bring more options to developers using Link and look forward to continuing the expansion of available blockchains usable with Link. You can obtain some testnet TT tokens from ThunderCore Testnet Faucet to be sent to your Link default account address to get started. ThunderCore also uses the Byzantium EVM Version, which you will need to specify in Link. For full details on how you can start using ThunderCore with Link check out the documentation here: https://github.com/blockmason/link-onboarding/blob/master/ThunderCore.md You can read more on ThunderCore at https://www.thundercore.com/ Microsoft Azure Blockchain Now Usable With Link as The First Supported Private Blockchain Microsoft’s Azure Blockchain Service makes it easy to build, govern and expand consortium blockchain networks at scale. Now thanks to new compatibility with Link, we show how you can quickly create your own private blockchain on Azure, and use the simplicity of Blockmason Link to deploy and interact with smart contracts on your Azure blockchain. This will be a private blockchain network using the Quorum protocol (https://www.goquorum.com/) but still Ethereum-compatible.
Azure Blockchain Services has a wealth of tools available so feel free to dig into their Monitoring and logging tools to get a better view on your blockchain! We are excited to expand Link’s compatibility to include Microsoft’s Azure Blockchain. For details on how you can start using Microsoft’s Azure Blockchain with Link check out the full documentation here: https://github.com/blockmason/link-onboarding/blob/master/AzureBlockchain.md You can read more on Microsoft’s Azure Blockchain at https://azure.microsoft.com/en-us/solutions/blockchain/ Blockmason Becomes a Member of The Microsoft Partner Network Thanks to the recent inclusion of the Microsoft Azure Blockchain compatibility with Link, Blockmason has now joined the Microsoft Partner Network. Not only will this give Link a chance to appear in the Azure Marketplace, but it gives us a wide range of products to work with in the industry, as well as new program options to bring Link to market. More details soon, stay tuned.
Computer Graphics: Past, Present and Future Since I started my career in Computer Graphics 35 years ago at U of M, I cannot resist the opportunity to begin my talk by reminiscing about what was going on in the field back then, and what in particular was going on at Michigan. Moving on to the present I want to talk about two active research interests: pixel processing and geometric computation. The pixel is the basic element of computation for computer imaging. The most popular pixel format is to devote 8-bits for each of the red, green and blue components of a color. Image compositing operations include an extra "transparency" channel usually called alpha. When looked at more deeply, however, it turns out that there are subtle differences in the details of these encodings for different applications. When we need to combine images from various sources such as computer generated images, digital video and digital cameras, these differences become more and more of a nuisance. In this part of the talk I will describe these differences and discuss some higher resolution formats that will help bring these various worlds together. 3D graphics depends on geometric computations to figure out what is visible at each pixel of a display. Current 3D hardware uses flat triangles as its basic elements. In the future we would like to render higher order curved surfaces. In order to do this, we will need to understand how to manipulate these constructs algebraically. In this part of the talk I will describe some new mathematical notation tricks that make it easier to solve such problems as intersections and tangency for higher order curves and surfaces. Finally, I will speculate a bit about the future of computer graphics: the problems and opportunities that will spark our interest in the next few years. Jim Blinn had a 35-year long career in Computer Graphics starting in 1968 while an undergraduate at the University of Michigan.
In 1974 he became a graduate student at the University of Utah where he did research in realistic rendering and received a Ph.D. in 1977. The results of this research have become standard techniques in today's computer animation systems. They include realistic specular lighting models, bump mapping and environment/reflection mapping. In 1977 he moved to the Jet Propulsion Laboratory where he produced computer graphics animations for various space missions to Jupiter, Saturn and Uranus. These animations were shown on many news broadcasts as part of the press coverage of the missions and were the first exposure to computer animation for many people in the industry today. Also at JPL he produced animation for Carl Sagan's PBS series COSMOS and for the Annenberg/CPB-funded project "The Mechanical Universe", a 52-part telecourse to teach college level physics. During these productions he developed several other standard computer graphics techniques including work in cloud simulation and a modeling technique variously called blobbies or metaballs. In 1987 he began a regular column in the IEEE Computer Graphics and Applications journal where he describes mathematical techniques used in computer graphics. He has just published his third volume of collected articles from this series. From 1989 to 1995 he worked at Caltech producing animations to teach High School level mathematics. In 1995 he joined Microsoft Research as a Graphics Fellow. He is a MacArthur Fellow, has an honorary Doctor of Fine Arts degree from Otis Parsons School of Design and is currently the only person to receive both the Siggraph Achievement Award (1983) and the Stephen Coons Award (1999).
How To Configure Exchange Server 2003 OWA to Use S/MIME This article has been archived. It is offered "as is" and will no longer be updated. IN THIS TASK This article discusses how to configure the Exchange Server 2003 version of Microsoft Outlook Web Access (OWA) to permit users to digitally sign and encrypt e-mail messages by using the new OWA Secure/Multipurpose Internet Mail Extension (S/MIME) control. The S/MIME control works in conjunction with public key infrastructure (PKI) technology to provide signing and encryption functionality. Note This article assumes a solid understanding of cryptography and PKI technology. For more information about cryptography and Windows PKI, visit the following Microsoft Web site: How to Install Windows Server 2003 Certification Authority The standard User certificate template that is included with Windows Server 2003 Certificate Services supports message signing and message encryption for the OWA S/MIME control. If you want to require separate certificates for signing and encryption, you must create two new templates: one template for signing and one template for encryption. Note After the certification authority (CA) component is installed, certificates are issued automatically upon request unless the certificate template is modified to require an administrator to grant the certificate. Therefore, user certificates are issued without an administrator's approval. How to Request a Certificate To request a user certificate, follow these steps: - On the client computer, start Microsoft Internet Explorer. - On the Address bar, type the following text (where CertificateServer is the name of the server that is running Certificate Services), and then click Go: http://CertificateServer/certsrv - If you are prompted to, type your authentication credentials, click Request a certificate, and then click Next. - On the Choose Request Type page, click User Certificate, and then click Next.
- On the User Certificate – Identifying Information page, click Submit. - On the Certificate Issued page, click Install this certificate. How to Install the OWA S/MIME Control To install the OWA S/MIME control on the client computer, follow these steps: - On a Windows 2000-or-later-based client computer that is running Internet Explorer 6.0 or later, log on to OWA. - In the OWA Navigation pane, click Options. - Under E-mail Security, click Download. Note If you receive a Security Warning dialog box, click Yes. - Under E-mail Security, click to select the Encrypt contents and attachments for outgoing messages check box if you want encryption enabled by default when you compose a message. - Under E-mail Security, click to select the check box for the recipient of the signed message. The message should be digitally signed by the sender. How to Test Encryption and Signing To send an encrypted message, follow these steps: - In OWA, click New. - Compose a message. Note The sender must have the recipient’s public key to encrypt the message contents. Therefore, the recipient must have already enrolled with Certificate Services. - On the toolbar, click Add digital signature to this message. - Click Send. - Verify that the message is encrypted and viewable only by the recipient on a computer that has the recipient’s encryption certificate installed. Article ID: 823568 - Last Review: 12/08/2015 03:26:22 - Revision: 2.7 Microsoft Exchange Server 2003 Standard Edition, Microsoft Exchange Server 2003 Enterprise Edition - kbnosurvey kbarchive kbtshoot kbhowtomaster KB823568
import collections

from torch import nn

from ..base import GraphClassifierLayerBase, GraphClassifierBase
from .avg_pooling import AvgPooling
from .max_pooling import MaxPooling


class FeedForwardNN(GraphClassifierBase):
    r"""FeedForwardNN class for graph classification task.

    Parameters
    ----------
    input_size : int
        The dimension of input graph embeddings.
    num_class : int
        The number of classes for classification.
    hidden_size : list of int
        Hidden size per NN layer.
    activation : nn.Module, optional
        The activation function, default: `nn.ReLU()`.
    """
    def __init__(self, input_size, num_class, hidden_size, activation=None,
                 graph_pool_type='max_pool', **kwargs):
        super(FeedForwardNN, self).__init__()

        if not activation:
            activation = nn.ReLU()

        if graph_pool_type == 'avg_pool':
            self.graph_pool = AvgPooling()
        elif graph_pool_type == 'max_pool':
            self.graph_pool = MaxPooling(**kwargs)
        else:
            raise RuntimeError('Unknown graph pooling type: {}'.format(graph_pool_type))

        self.classifier = FeedForwardNNLayer(input_size, num_class, hidden_size, activation)

    def forward(self, graph):
        r"""Compute the logits tensor for graph classification.

        Parameters
        ----------
        graph : GraphData
            The graph data containing graph embeddings.

        Returns
        -------
        GraphData
            The output graph data containing the logits tensor for graph classification.
        """
        graph_emb = self.graph_pool(graph, 'node_emb')
        logits = self.classifier(graph_emb)
        graph.graph_attributes['logits'] = logits
        return graph


class FeedForwardNNLayer(GraphClassifierLayerBase):
    r"""FeedForwardNNLayer class for graph classification task.

    Parameters
    ----------
    input_size : int
        The dimension of input graph embeddings.
    num_class : int
        The number of classes for classification.
    hidden_size : list of int
        Hidden size per NN layer.
    activation : nn.Module, optional
        The activation function, default: `nn.ReLU()`.
    """
    def __init__(self, input_size, num_class, hidden_size, activation=None):
        super(FeedForwardNNLayer, self).__init__()

        if not activation:
            activation = nn.ReLU()

        # Build the linear module list: input -> hidden layers -> class logits.
        module_seq = []
        for layer_idx in range(len(hidden_size)):
            if layer_idx == 0:
                module_seq.append(('fc' + str(layer_idx),
                                   nn.Linear(input_size, hidden_size[layer_idx])))
            else:
                module_seq.append(('fc' + str(layer_idx),
                                   nn.Linear(hidden_size[layer_idx - 1], hidden_size[layer_idx])))
            module_seq.append(('activate' + str(layer_idx), activation))
        module_seq.append(('fc_end', nn.Linear(hidden_size[-1], num_class)))

        self.classifier = nn.Sequential(collections.OrderedDict(module_seq))

    def forward(self, graph_emb):
        r"""Compute the logits tensor for graph classification.

        Parameters
        ----------
        graph_emb : torch.Tensor
            The input graph embeddings.

        Returns
        -------
        torch.Tensor
            The output logits tensor for graph classification.
        """
        return self.classifier(graph_emb)
I have mentioned the creepy rock spider vibe that's often present in the Failing Terrorgraph in previous posts. The stories themselves can be subtly sinister, but the atmosphere is present in what they place around them on the page as well. For example they will often have words in headlines evoking violence and death near photos of kids that are included in other articles. Look at the page below and you'll see a headline about deadly snakes at the bottom of it. (Interesting that they're seen as potential lifesavers as well. So there's the duality they love so much ... Up is down, good is bad, and right is wrong, don't ya know!) There's also one on the adjacent page (not visible) that reads "fatal sex chat". Of course this kind of thing will happen from time to time just by chance. But I see it so frequently it cannot be random. Look at the headline. It begins: "Family hit ..." Putting "hit" near photos of kids is a particular fave of theirs. If you live in NSW, keep an (illuminati) eye out for it in this paper particularly. I'm sure you'll see an example before long. Then there's the rest of the headline ... But that's not the end of it. Look closely at the placement of the words "cut" and "slashed". They are right over the heads of the two kids. Given the context I'm absolutely sure this was done intentionally. Think how twisted you would have to be to dream something like that up, let alone actually put it in the paper! Remember, this is not a fictional horror story in an anthology. This is "journalism" involving real people. Needless to say, hardly anyone notices this kind of thing. That's because they don't suspect that there is a hidden agenda -- least of all one this sinister. "That couldn't possibly be; it's just a newspaper." So they just don't see it. But when you realize there is something like this going on, I guarantee you'll start to see it. And that's not because it's only in your head and you're "projecting". 
You'll see it because it's actually there. And there's more. Look closely at the family name, shown in the bottom left of the photo above. Click on it to give you a better view if it's not visible straight away. The family surname is "Savage". So that becomes the adjective to describe the severity of those cuts and slashes, geddit? This is just so creepy and wrong. And they do this kind of thing all the time. I see so many examples like this I just cannot keep up. And that's just in the one paper. This particular trick is more pronounced and commonly used in this particular lie factory. But you see related patterns right through the rest of the MSM. This is a top down cultural phenomenon. I think it has a lot to do with the secret society wankers in the editorial tier who dominate all of the MSM. Please start looking for this kind of thing. I'll bet you'll see examples that creep you out before long.
Why does FireError fail in C# 2012, but works in VB, while FireInformation works in both? I have an SSIS package in Visual Studio 2014, and I want to raise an error in a Script Component if any records traverse a particular path out of a 3rd party transformation. I WANT to do this in C# 2012, but the FireError method gives an error: The best overloaded method match for 'Microsoft.SqlServer.Dts.Pipeline.Wrapper.IDTSComponentMetaData100.FireError(int, string, string, string, int, out bool)' has some invalid arguments When I try to do this: bool fireAgain = false; IDTSComponentMetaData100 myMetaData; myMetaData = this.ComponentMetaData; myMetaData.FireError(0, "Script Component", "Invalid Records", string.Empty, 0, ref fireAgain); but if I change FireError to FireInformation, it compiles and works -- except of course I need an error raised, not an informative event. Also, if I use Visual Basic instead of C# as so: Dim pbFireAgain As Boolean = False Dim myMetaData As IDTSComponentMetaData100 myMetaData = Me.ComponentMetaData myMetaData.FireError(0, "Script Component", "Invalid Records", String.Empty, 0, pbFireAgain) Which is, I mean, literally the same exact thing but in a different language, it works fine. VB also works with FireInformation. Obviously I can solve my immediate problem by using VB, but can someone tell me WHY this is this way? It seems like a specific issue with C#. As evidence, we have this on MSDN: https://msdn.microsoft.com/en-us/library/ms136031.aspx Where the Script Component version of FireError is the only of eight examples to not have C# and VB versions (the logging one is poorly formatted, but they're both there). I'm wondering if there's a debugger configuration that threatens to run C# code in an odd way, as this stackoverflow question answered, but the error I get is at design time -- Visual Studio springs the earlier "invalid arguments" error before I compile, so it knows something is off. Thoughts? 
You may be confusing the similar but different syntax for firing error vs information events from Script Components (data flow task) versus Script Tasks (control flow). The intellisense for Component indicates that the parameter is pbCancel whereas the fireAgain corresponds to the Information Task's parameter. Script Component C# Script Component example public override void Input0_ProcessInputRow(Input0Buffer Row) { bool cancel = false; bool fireAgain = false; this.ComponentMetaData.FireInformation(0, "My sub", "info", string.Empty, 0, ref fireAgain); this.ComponentMetaData.FireError(0, "My sub", "error", string.Empty, 0, out cancel); } VB Component Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer) Dim cancel As Boolean Dim fireAgain As Boolean Me.ComponentMetaData.FireInformation(0, "my sub", "info", String.Empty, 0, fireAgain) Me.ComponentMetaData.FireError(0, "I hate vb", "Error", String.Empty, 0, cancel) End Sub There's no need to explicitly specify that a parameter is By Reference since that appears to be done in the definition versus the C# requirement to specify it also on invocation. ByRef vs ByVal Clarification Script Task C# public void Main() { bool fireAgain = false; this.Dts.Events.FireInformation(0, "my sub", "info", string.Empty, 0, ref fireAgain); // Note, no cancel available this.Dts.Events.FireError(0, "my sub", "error", string.Empty, 0); } VB Public Sub Main() Dim fireAgain As Boolean = False Me.Dts.Events.FireInformation(0, "my sub", "info desc", String.Empty, 0, fireAgain) Me.Dts.Events.FireError(0, "my sub", "error desc", String.Empty, 0) Dts.TaskResult = ScriptResults.Success End Sub Summary C# requires you to specify ref and out keywords. 
They are not synonyms. VB lets you do whatever. Error events in Components have a cancel parameter. So, I get the utility of ref and out; I take it your third note that "Error events in Components have a cancel parameter" is important to why the boolean must be an "out" parameter, rather than a ref; that is, I appreciate how to fix it, but why do the two functions (FireInformation and FireError) differ in this regard? Also, I still maintain that the Microsoft documentation I linked to is cryptic on the topic, so I'm curious if anyone knows why. But again, thanks! You are passing it by ref, not out, in your C#. I don't think VB.NET needs those keywords. So, is there a proper syntax to get the FireError working in C# that I'm missing? I got it; billinkc had a nice example. Still not clear on why FireError requires an "out" variable; I guess you can continue to update the boolean in a FireInformation call, but the error requires the flag to remain static... still not completely clear on why. Does FireError
[cfe-dev] Building with mingw64 on Windows issue Martin Storsjö via cfe-dev cfe-dev at lists.llvm.org Sat Dec 8 11:16:54 PST 2018 On Sat, 8 Dec 2018, Maarten Verhage wrote: > Yes, good idea. Going for x86_64-8.1.0-release-posix-seh-rt_v6-rev0! Also I > did realize I made a mistake in my windows command prompt script. The reason > it wasn't able to find std::mutex was that I didn't specify the include > folders for the gcc includes. The header file mutex certainly is present in > the mingw64 folder tree. It is also present in the win32 threads variant so > I might try that too, when I see the posix variant is building LLVM/clang That's rather strange. Normally you don't need to manually specify the include directories; they are implicitly found when you invoke the compiler. I'm fairly sure the prebuilt GCC versions from mingw installers work that way. > Currently I'm just going for building LLVM/clang with an empty projects > cmake -G "MinGW Makefiles" ^ > -DCMAKE_BUILD_TYPE=Release ^ > -DCMAKE_SYSTEM_NAME=Windows ^ I'm not sure if CMAKE_SYSTEM_NAME is necessary, and/or if it does any harm to specify it when it's not needed. > -DCMAKE_CXX_FLAGS="-I%gcc_include_path% -I%gcc_include_path%\c++ -D_WIN32_WINNT=0x0600" > -S C:\dev\llvm -B T:\x86_64-8.1.0-release-posix-seh-rt_v6-rev0\mingw64\build > > cmake_result.txt 2>&1 > mingw32-make --directory=T:\x86_64-8.1.0-release-posix-seh-rt_v6-rev0\mingw64\build > -f Makefile > build_result.txt 2>&1 > It builds a fair bit more. I wasn't sure if previous attempts did make the > folder: T:\x86_64-8.1.0-release-posix-seh-rt_v6-rev0\mingw64\build\NATIVE. > But at least this is what I see now. > But as I'm now stuck at: > No rule to make target 'NATIVE/bin/llvm-tblgen', needed by > 'include/llvm/IR/Attributes.inc.tmp'. Stop. > seen on the end of the build_result.txt file. I find it hard to decide what > I could try next. The reason is that I'm missing a bit of a "build > rationale".
With that I mean some documentation that explains in general terms how the building process is designed. In it I hope to learn the reason for the NATIVE folder, what it contains and what llvm-tblgen.exe is designed to do. With that understanding I'm able to make an educated guess to try to set a define that might solve the specific issue I'm facing now with that "No rule to make target " Would there be some documentation on that topic on the internet or are you willing to explain this to me? I'd love to learn. If you haven't read https://llvm.org/docs/CMake.html yet, that's recommended. (I don't remember if you've mentioned that you've read it or not.) The NATIVE directory indicates that cmake thinks that you are cross compiling. It might be linked to you specifying CMAKE_SYSTEM_NAME even though it's redundant. llvm-tblgen is a tool that reads .td files and generates code (.h/.cpp) out of it. So normally when you compile llvm, the build system first compiles llvm-tblgen and a few other tools, then uses llvm-tblgen to generate even more source files to compile. When cross compiling, one first has to build a native version of llvm-tblgen in order to be able to run it during the build on the build machine, even if the llvm build itself is supposed to be for another architecture/os. (Even further away from your actual topic; CMake is supposed to handle building this automatically when cross compiling, but it doesn't really work for me in the cases where I've cross compiled LLVM, so for those cases I first build the tools in a non-cross llvm build directory, and point the cross build to the existing tools.) I'm successfully building llvm with mingw/gcc within msys, with a cmake invocation like this: cmake -G "MSYS Makefiles" -DCMAKE_BUILD_TYPE=Release If building within msys, make sure to pick the mingw-w64-x86_64-cmake package instead of the one for building things that target msys itself.
I haven't tested building outside of msys in a plain cmd with mingw32-make though, but nothing of the errors you've shown so far indicate that as a More information about the cfe-dev
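For reference, the workaround described above (pre-building the native tools, then pointing the cross build at them) can be sketched roughly as follows; the paths are placeholders, and LLVM_TABLEGEN is the CMake cache variable LLVM provides for supplying a prebuilt tblgen:

```shell
rem 1) Native (host) build directory: build just the generator tools.
cmake -G "MinGW Makefiles" -DCMAKE_BUILD_TYPE=Release C:\path\to\llvm
mingw32-make llvm-tblgen

rem 2) Cross build: reuse the native llvm-tblgen instead of the NATIVE\ sub-build.
cmake -G "MinGW Makefiles" ^
      -DCMAKE_BUILD_TYPE=Release ^
      -DLLVM_TABLEGEN=C:\path\to\native-build\bin\llvm-tblgen.exe ^
      C:\path\to\llvm
```

This is only a configuration sketch of the approach Martin describes, not a tested recipe for this specific toolchain.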
$argv[0] can be empty In some cases $argv[0] can be empty and strpos will throw an "Empty needle" error. Therefore, on line 493, $argv should be checked to ensure it is not empty and is a string, rather than just checking its existence with isset(). Someone tried to attack one of my websites with "/index.php?++++hot=1&++++kw=%E8%93%9D%E7%89%99%E8%80%B3%E6%9C%BA&r=l" and this request caused an exception which was logged and where I found the error. Provide a narrative description of what you are trying to accomplish: [x] Are you fixing a bug? [x] Detail how the bug is invoked currently. [ ] Detail the original, incorrect behavior. [ ] Detail the new, expected behavior. [ ] Base your feature on the master branch, and submit against that branch. [ ] Add a regression test that demonstrates the bug, and proves the fix. [ ] Add a CHANGELOG.md entry for the fix. [ ] Are you creating a new feature? [ ] Why is the new feature needed? What purpose does it serve? [ ] How will users use the new feature? [ ] Base your feature on the develop branch, and submit against that branch. [ ] Add only one feature per pull request; split multiple features over multiple pull requests [ ] Add tests for the new feature. [ ] Add documentation for the new feature. [ ] Add a CHANGELOG.md entry for the new feature. [ ] Is this related to quality assurance? [ ] Is this related to documentation? Hi @eweso, thanks for your contribution. I've checked it quickly and it looks like you have enabled register_argc_argv in your php.ini, which is why you can see argv/argc in SERVER from GET (non-CLI) requests. This setting is disabled by default.
I've changed my configuration, tried the request you provided and the results of SERVER['argv'] is as follows: array (size=9) 0 => string '' (length=0) 1 => string '' (length=0) 2 => string '' (length=0) 3 => string '' (length=0) 4 => string 'hot=1&' (length=6) 5 => string '' (length=0) 6 => string '' (length=0) 7 => string '' (length=0) 8 => string 'kw=%E8%93%9D%E7%89%99%E8%80%B3%E6%9C%BA&r=l' (length=43) so yeah - argv[0] is empty. Looking at the code, I don't think that we should really process SERVER['argv'] there for GET request as it contains (per documentation: https://www.php.net/manual/en/reserved.variables.server.php) the query string. We are detecting there the base url, so it cannot be detected from query string. I would suggest to change your PHP configuration, unless you really need to use somewhere _SERVER['argv'] for GET requests. As I said in previous comment - we don't really want process argv, as for GET requests it contains query string only - so we shouldn't use it for script filename. It should be used only in CLI mode. Can't see a nice way to test it for GET request, for CLI test is in place already. Thanks, @eweso!
So you’re considering Udacity’s Intro to Machine Learning with PyTorch Nanodegree? I went through and finished this program in 2019, and in this video, I’m going to provide my Udacity Nanodegree review... coming up. First, I’m going to describe the courses and projects in Udacity’s Intro to Machine Learning with PyTorch Nanodegree. Next, I’ll talk about factors such as the extracurricular courses, lecture videos, and project reviews in the Nanodegree. And finally, I’ll give my honest opinion on Udacity’s Intro to Machine Learning with PyTorch Nanodegree, overall. There are three courses in this nanodegree program. Courses & Projects Course 1: Supervised Learning In this first course, you will learn about supervised learning, which is a common class of methods for machine learning model construction. Project: Find Donors for CharityML The project associated with this course is called “Find Donors for CharityML.” It involves writing a Python script using a dataset from a fictional charity organization. The purpose of the project is to identify categories of people (based on existing charity donors in the dataset) that are most likely to donate to the charity. You’ll need to evaluate and optimize at least three different supervised learning algorithms to determine which algorithm will provide the highest donation yield. Examples of supervised learning models you might use are: - Logistic Regression - Naïve Bayes - Support Vector Machine (SVM) - Decision Trees - Random Forest This project was relatively straightforward and if you have at least basic experience with Python, you shouldn’t have issues with completing it. By the end of this project, your knowledge of supervised learning algorithms, model evaluation methods, and feature importances will definitely increase. Course 2: Neural Networks In the second course, you’ll learn the foundations of neural network design and training in PyTorch.
Project: Build an Image Classifier The project associated with this course is called “Build an Image Classifier.” In this project you’ll implement an image classification application using a deep neural network in PyTorch. This image classification application will train a deep learning model on a dataset of images. It will then use the trained model to classify new images. This project is considerably more difficult than the first project. I spent more time on this deep learning project than the other two projects combined in this Nanodegree program. I recommend at least an intermediate skill level in Python before attempting this project. In particular, you’ll want to be well-practiced with Python functions, dictionaries, and lists. Having some initial knowledge of the Python Imaging Library (PIL) might prove beneficial as well. Course 3: Unsupervised Learning In the third course, you’ll learn to implement unsupervised learning methods for different kinds of problem domains. Project: Create Customer Segments The project associated with this course is called “Create Customer Segments.” In this project, you will utilize unsupervised learning algorithms such as K-means clustering and Principal Component Analysis to compare a business’s customer data to external demographic data to identify over and under-represented customer populations. A very high percentage of this project involves pre-processing the data and creating plots with either the Seaborn package in Python or the MatPlotLib package in Python. So I would recommend going into the project with at least a basic skill level in those two visualization packages. When it comes to the lecture videos, they’re among the best I’ve seen and they’re very informative. The presentations from the speakers and the use of animations were incredibly helpful, especially in the sections about supervised learning and deep learning. When it comes to the project reviews, I typically received a review within a few hours after submitting a project.
If there were issues with a project I submitted, those issues were clearly communicated by my reviewer. The mentor support is helpful when it comes to challenges that are relatively common in the projects. And I found the resume and LinkedIn profile review services valuable as well, which are available through their Career Resource Center.

Overall, it was a great learning experience, and I continued to use many of the machine learning and Python concepts from this program in work situations afterwards. But deep learning isn't as widely used in many organizations as machine learning at this point in time, and that should be taken into consideration. To cap things off, I firmly believe that project-based learning is the best way to learn machine learning, deep learning, and Python, and in that regard, this program is one of the best educational options available.

Udacity "Intro to Machine Learning with PyTorch" Nanodegree:
Resources to Help you Prepare for the Udacity Data Science Nanodegree:
Udacity "Programming for Data Science with Python" Nanodegree:
Udacity Data Analyst Nanodegree:
Official PyTorch Tutorials:
Python Data Science Handbook: Essential Tools for Working with Data:
Have you ever been involved in a project where unexpected issues outside the control of the project team caused the project schedule to change? Never? I didn't think so; you're way too good for that. But for the rest of us mere mortals, keeping projects on track is an important consideration. Actually, I think larger projects are more likely to see schedule hiccups, even in spite of the resources and/or project management expertise available to them. I've been part of some, let me think here, eleven-figure projects, and change at that level seems to be the norm, not the exception. Part of the project management team tends to be dedicated to dealing with change orders.

A network diagram is one of the project manager's primary tools. Although many large projects have big, complex network diagrams drawn out on a wall, I think they're more useful for the small, casual, back-of-the-napkin stuff: things like analyzing a change order, where a technical expert will tell me the tasks that are involved and how they relate to one another.

Types of Network Diagrams
There are two types of network diagrams: activity-on-node, where tasks are boxes connected by arrows, and activity-on-arrow, where the arrows themselves represent the tasks. I think that the former (activity-on-node) is the most intuitive and useful, because projects are really just a series of tasks, and writing the tasks in boxes and connecting them with arrows seems to make the most sense. In this article, we will continue with activity-on-node terminology.

How to Draw a Network Diagram
A network diagram is a graphical representation of tasks. If a task has more than one predecessor, you draw an arrow into it from each one; tasks that can be performed simultaneously sit on parallel paths. Using the critical path method, the longest path from start to finish defines the minimum completion time for the project. In this case, this mini-project would take a minimum of 27 days (5 + 12 + 6 + 4). If you have a target completion date, defined by a client or manager, you would have to start 27 days earlier (in this case) to finish on time.
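The critical-path arithmetic can be sketched in a few lines of Python. The task names and predecessor links here are hypothetical, chosen only to reproduce the 5 + 12 + 6 + 4 chain from the example:

```python
# Hypothetical tasks reproducing the example's serial chain of durations (days).
durations = {"A": 5, "B": 12, "C": 6, "D": 4}
preds = {"A": [], "B": ["A"], "C": ["B"], "D": ["C"]}

def earliest_finish(task, memo={}):
    """Longest path (in days) from project start through `task`."""
    if task not in memo:
        start = max((earliest_finish(p) for p in preds[task]), default=0)
        memo[task] = start + durations[task]
    return memo[task]

# The critical path length is the largest earliest-finish over all tasks.
print(max(earliest_finish(t) for t in durations))  # 27
```

With parallel branches, the same `max` over predecessors picks out the longest path automatically, which is exactly what the critical path method asks for.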
How to Keep a Project on Track
There are two distinct times when you might want to draw a network diagram to keep your projects on track:
- Project planning
- Analyzing change orders

Most people without a project management background will divide the project into tasks, assign a certain number of hours, expenses, or whatever to each task, and add up the total. It's not rocket science, but the time required to carry out the project is generally guesstimated, with barely anything in the way of calculation or justification as to how it is achievable. Somebody with technical experience or seniority in the organization inherits the noble job of picking a date out of the air (after thinking about it really hard). Isn't there a better way?

To make full use of project management practices, you should draw out a network diagram during the project planning stage. It takes 5 minutes and very little brainpower. Draw it on the back of a napkin at the restaurant, or jot it down at the office whenever you feel motivated. You can be as general or as detailed as you like. A simple network diagram with 5-10 tasks, like my example above, will hardly take anything away from your regular workload, but it will do wonders for project delivery. If you've got more time, you could draw out a large, complex one and update it regularly.

You must give realistic thought not only to how much time each task will take, but to how the tasks relate to each other. Missing predecessor relationships is a surprisingly easy thing to do:
- Do you really need to finish the lab analysis before starting the engineering?
- Is it realistic to expect the engineer to start work immediately upon completion of the lab work?

It's not the choosing of tasks, but rather the relationships between them, that gives a network diagram its power. Project managers always think in terms of network diagrams.
Even if the overall project network diagram is more complicated, the basic network diagram, consisting of between 5 and 10 tasks, should always be in your head and recitable on demand.

Analyzing Change Orders
Almost all projects experience some form of change from their initial conception. If not, you are either extremely good, or the project is not that complicated. Remember what I was saying about thinking in terms of network diagrams? To get the most out of project management tools on your projects, you should immediately default to a network diagram anytime something threatens to change the project timeline. Is a member of the project team slow to produce? Did the equipment rental experience a delay? Pull the basic network diagram (5-10 tasks) from your long-term memory and answer the following questions:
- Which task is affected?
- What other tasks are dependent on it?
- Is it on the critical path?
- If so, what is the impact on the completion date, and are there any other tasks which can compensate?

Think in Terms of Network Diagrams
To be a more effective project manager, you should think in terms of network diagrams. Commit the basic network diagrams for all your projects to memory. When things come up, think immediately of the network diagram and how it will be changed. Most of all, do not be intimidated by drawing out network diagrams whenever you have the chance. The more, the better. It's extremely easy and takes very little time. Your clients, customers, etc. will love you for it, and your career might be upwardly mobile in a way you never thought possible.
I am developing a "package" similar to MathWorks Simulink, but based on Julia. Entry is via xschem schematic capture into a Verilog netlist, which is parsed into an abstract syntax tree and then into a "Universal Net List" saved in a YAML file. The Julia simulation engine reads the YAML file. The engine is bit-level and cycle-level accurate. It is not really intended to compete with Verilog, but is closer to SystemC, where the user can easily (this is the goal; needs work) add high-level custom blocks. I have just gotten my quadrature-output, 12-bit direct digital frequency synthesizer working (it includes two ROM look-up tables, adders, registers, etc.).

My intent was to then put the Julia code into a package so I can think about sharing it with others. I am having major problems putting my current code into a package. I have spent at least two days trying to read documentation, which at times I mostly cannot comprehend. I'll start with a few issues and add others later.
- I have many modules, with each module in a separate file. Because Julia does not allow forward function references, I have had real problems getting this set up; it is now working, with issues of Revise vs @printf to be resolved.

So from Gitbook it was suggested to make a new repository on GitHub (done). Problem: I can't clone the repository first, as ]pkg> generate wants to make a new subdirectory. Then one needs to link the new subdirectory to the GitHub repository; this is very convoluted and not well explained. Anyway, I think I got through this, but now I have this problem:

(SysSim) pkg> precompile
0 dependencies successfully precompiled in 2 seconds (26 already precompiled)
ERROR: The following 1 direct dependency failed to precompile:
Failed to precompile SysSim [0f8dfbf7-a8b6-4432-9d9e-474bb026f832] to ./compiled/v1.7/SysSim/jl_J9gm3f.
ERROR: LoadError: ArgumentError: Package SysSim does not have Main in its dependencies:

I have no idea how one goes about putting Main in the dependencies.
I also can't find documentation on the recommended flow for packages with multiple-file modules, or on packages vs environments. Many, many more issues trying to get the package manager to work, and dev vs not using dev, etc. Googling, I see discussions about REQUIRE, but I haven't seen REQUIRE mentioned anywhere (I think it's old?). A complete tutorial on setting up a package management environment with multiple files would be nice.

What is the structure of your files? Show the listing of your package folder please. I believe there is a simple solution.

As already mentioned, some more information would be helpful before an answer can be given. However, feedback from new users is extremely valuable, and everyone would be grateful if, after the problem is solved, you could spend some time filing issues describing what is missing from the current documentation (or, if nothing is missing, pointing out why the current documentation was misleading or not well structured). Current docs: 1. Introduction · Pkg.jl (also linked from Pkg · The Julia Language)

Apologies, but it is a bit difficult to identify your problem given the hints you have provided. I will do my best to highlight potential issues/misconceptions. FYI/Also: I have ported much of Richard Schreier's Toolbox if that's of interest to you: → GitHub - ma-laforge/RSDeltaSigmaPort.jl: Port of Richard Schreier's Delta Sigma Toolbox

Not necessary, but quite ok. Modules & files can be organized independently (completely orthogonal concepts in Julia).

Not sure what you mean here, but if you mean you can't make empty function declarations like you can in C/C++, this is true (in a way).
- Function declarations are unnecessary in Julia.
- Functions can be defined after they are first used in Julia (no issues):

f(x) = g(x) #Not yet defined - but Julia accepts happily.
g(x) = 3x #Ah! Julia now knows how to run f(x) now!
f(5) #This call works -- no problem The only issue with @printf + packages I recall is that you should “add it to your package” (Project.toml file): Printf = "de0858da-6303-5e67-8744-51eddeeeb8d7" Yes. I looked at the Gitbook you linked. I find it a bit difficult to follow, to be honest. (Though I’m still happy the author is trying to improve our understanding). I find it is best to do: - Go to an empty work folder. - Generating package pkg>generate - as you mentioned. - (This just gives you the skeleton for your package (including its unique identifier UUID).) - Copy that whole folder into the repo you cloned from Github (Let’s call it PKGROOT). - Move your code inside this new package as well (PKGROOT). - cd PKGROOT; julia (Launch julia from PKGROOT) pkg>activate . (Activate your package so you can modify it.) pkg>add (Add all packages your own package depend on.) It shouldn’t be. Main module is just the scope where your interactive session lives. - You don’t need to add it. - and your module code probably shouldn’t reference it. Correct: It is no longer used. Please ignore all information pertaining to REQUIRE. To use your package - Start a new Julia session pkg>dev PKGROOT (Make your package available to whatever environment you are currently in) - Note the use of “dev” - not “add”. Ask me why if you don’t know. PKGROOT should be an absolute path here (not relative path). From now on, this julia environment is ready to use your package!
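As a concrete sketch of where things end up after the steps above, the layout inside PKGROOT might look like this. The package name is taken from the question; the file contents are hypothetical placeholders, not the asker's actual code:

```shell
# Hypothetical PKGROOT contents after `pkg> generate SysSim` and moving code in.
# (generate also creates Project.toml, which sits next to src/.)
mkdir -p SysSim/src
cat > SysSim/src/SysSim.jl <<'EOF'
module SysSim
include("engine.jl")   # files can be split freely; one module, many files
export run_sim
end
EOF
cat > SysSim/src/engine.jl <<'EOF'
run_sim() = "ok"       # defined here, usable anywhere inside module SysSim
EOF
ls SysSim/src
```

The one fixed requirement is that `src/SysSim.jl` defines `module SysSim`; how the rest of the code is split across included files is entirely up to you.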
Map arm movement (as plant) to lidar-measured distance (as setpoint) using PID

I am trying to control the arm of my setup such that its tip is at a particular distance from the surface. That's where it's supposed to stop. The actuator has no feedback, so I use a 1D LiDAR to measure the distance of the arm from the surface. Currently, I ask the actuator to keep lowering the arm until it has reached the desired distance (target_d, for example), which is actually measured using the lidar. As the actuator is basically a kind of bang-bang controller, I have only 2 options: ALL GAS or HAND BRAKES. I am trying to use a PID to control this. The issue is, the error is obtained as a distance (in mm) and the input to the plant is the time for which the actuator fires up. Can I have a PID or any alternate controller to achieve fast and accurate actuation without having to oscillate around the setpoint?

How rapidly can you control the bang/bang output? And what in the world is this thing, actually? Be specific about the actuator technology. What leads you to believe it's a suitable setup for achieving the goal?

I can send the commands to the controller at a max rate of 250kbps via CAN. The actuator is this linear actuator or a slight variant. I have chosen the actuator as it is easy to use and was readily available to me.

Your actuator is fine; it's your drive electronics which are utterly unsuitable. Get the version of the actuator with an encoder and give it a proper servo drive. Or even open-loop, a remotely decent motor driver would give you something other than "full speed ahead". You either misunderstand, or desperately need to replace, whatever box is sitting between that DC motor and the CAN bus. Use the right tools for the job, not the wrong ones!

If I were to keep only the actuator (which probably also has an encoder/potentiometer with it), what should the other circuitry be, to have a controlled arm movement?
A servo driver, ideally one which takes a position command and achieves it, or at least one which takes a velocity command and closes that loop. Worst case, something that takes an open loop directional throttle type command. @ChrisStratton in either of these cases, how would the feedback be used to correct the actuation ? In the first case the driver would close a position loop, in the second a velocity loop controlled by your position loop. I think the two biggest concerns here are 'gain' and the bandwidth requirements of your system. Without regulating the force applied to the lever arm from your actuator (bang bang) you will always be moving the arm with max acceleration, meaning you can't impose damping on your system through the control input, so it inherently has an enormously high gain. You also likely have the issue of LiDAR limiting the bandwidth of your control loop; you can poll a position sensor orders of magnitude faster than the turnaround time for LiDAR frames. I see in a comment that you are using a linear actuator, with some kind of CAN capable motor controller. Ditch the bang bang control. DC motor controllers are dirt simple. If your current one can't change the speed of your motor, you won't have difficulty finding one that can. Then instead of using time as your PID control effort, you can use motor voltage (or current if you buy a higher quality controller). I also recommend ditching LiDAR in favor of an angular position sensor. I have spun out my own contactless magnetic angle sensors that are great for robotics (continuous angle output, retain the same angle across power cycles). You'll get higher bandwidth and resolution than with LiDAR. It looks like the linear actuator you're using also comes with encoder options, if you don't want to make your own. How would the angle sensor help me getting the arm to stop at the exact setpoint ? I can't understand. forward kinematics. 
If you know the arm angle, and you know its position relative to the table, you know everything necessary to determine the distance of the end of the arm to the surface of the table. All position control for robotics is done this way; if vision is used at all, it's on a 'higher level' (figuring out locations of objects not attached to the robot...), not for the low-level task of figuring out the configuration of its arms.

But aren't the error computed by the controller (distance, in this case) and the input to the plant (angle, in this case) supposed to be in the same units (like mm or degrees)?

That's called inverse kinematics. In the case of your arm, there is a simple closed-form relation between the arm angle and the end-effector position.
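The forward-kinematics idea from the answer can be sketched in a few lines. The geometry here is entirely hypothetical (pivot height and arm length are made-up numbers); the point is only that a measured joint angle converts directly into a tip-to-surface distance, which then feeds the controller's error term:

```python
import math

# Hypothetical geometry: arm pivots at height PIVOT_H above the table,
# arm of length ARM_LEN sweeps downward by angle theta from horizontal.
PIVOT_H = 0.50   # m, assumed
ARM_LEN = 0.30   # m, assumed

def tip_distance(theta_rad):
    """Distance from arm tip to the table, from the measured joint angle."""
    return PIVOT_H - ARM_LEN * math.sin(theta_rad)

# With an angle sensor, the PID error needs no LiDAR reading at all:
target_d = 0.10                    # m, the setpoint
theta = math.radians(30)           # example encoder reading
error = tip_distance(theta) - target_d
print(round(error, 3))  # 0.25
```

This is why the answer recommends an angle sensor: the kinematic conversion runs at whatever rate you can poll the encoder, far faster than LiDAR frames.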
Learning to code is tough on your own, and Android application development can seem even more complicated. You not only need to understand Java; you also need to set up the Android software and learn all the quirks of advanced Android application development. The Android Software Development Kit (SDK) is a set of tools designed to assist with Android application development. Let's consider some of the most useful tools in the SDK. There are two main integrated development environments (IDEs) for professional Android application development. An IDE is the primary program in which you write code and assemble an application. It helps you manage and edit the different files in your app, handle the packages and libraries your application will need, and test the app on real devices. Eclipse is the default IDE for Android mobile application development. It lets you edit Java and XML files and manage the different parts of an app, among many other tasks. The version you download from Google also includes a package manager that lets you update the Android tools whenever Google releases a new version. Android Studio is the key alternative, developed directly by Google. It is in extended beta, as various Google projects are. The long-term intent is for Android Studio to replace Eclipse as the primary IDE for Android engineering. Google offers various Android application development services and features, which include: API Guides: The common APIs, separated from Google services. They range from basic animation code to reading sensors and connecting to the web. There is a wealth of information there that can add functionality to your application.
Sample Code: Sometimes it helps to review how someone else has done something before you. This section provides code samples for different functions; you can study how something works, or simply reuse the code in your application so you don't have to fiddle around. What's really good about Android Studio is that it's made specifically for professional Android application development. There are a number of other options, like Unity3D and various application builders, each of which has its own advantages and disadvantages for a particular project. Before you start Android mobile application development, you need to install Java on your machine in order to use Android Studio. Specifically, you will need the Java Development Kit (JDK). All you need to do is download the kit and install it. When you open Android Studio you will see a menu that lets you start a project or configure some options, like downloading code samples. Once you have your samples, you can create a new Android Studio project. A directory tree consisting of the various folders and files that make up your application will be displayed. There will also be many widgets that can be added to your application. When you run your creation you may need an Android Virtual Device (AVD), which is the emulator new apps are tested on. How do you develop better applications? You just need to keep trying: add in some variables, interesting pictures, and helpful functionality. That is enough to create a basic application.
import React from 'react';
import TestUtils from 'react/lib/ReactTestUtils';
import sinon from 'sinon';
import {createSandboxSpy} from '../baseHelper';
import ScrollBlock from '../../src/components/ScrollBlock/index.jsx';
import ExeEnv from 'react/lib/ExecutionEnvironment';

ExeEnv.canUseDOM = true;

let spy;
let container;
let mockParent;
let scrollBlock;
let scrollEvt;

class MockParent extends React.Component {
  constructor(props) {
    super(props);
    this.state = {
      itemsCount: this.props.itemsCount,
    };
  }

  onFetchData(retrieve) {
    const add = retrieve || 1;
    this.setState({
      itemsCount: this.state.itemsCount + add,
    });
  }

  render() {
    const items = [];
    let count = 0;
    while (items.length < this.state.itemsCount) {
      items.push(<li key={count}>a</li>);
      count = count + 1;
    }
    // Pass a React element, not a markup string from React.renderToString,
    // so the list items are rendered rather than escaped.
    const list = <ul ref="list">{items}</ul>;
    return (
      <ScrollBlock
        fetchData={this.onFetchData.bind(this)}
        itemsCount={this.state.itemsCount}
        itemTotal={10}>
        {list}
      </ScrollBlock>
    );
  }
}

function mountComponent(itemsCount) {
  container = document.createElement('div');
  document.body.appendChild(container);
  mockParent = React.render(<MockParent itemsCount={itemsCount} />, container);
  scrollBlock = TestUtils
    .findRenderedComponentWithType(mockParent, ScrollBlock);
}

function unmountComponent() {
  if (container) {
    React.unmountComponentAtNode(container);
  }
}

function dispatchScrollEvent(target, scrollTop) {
  target.scrollTop = scrollTop;
  target.dispatchEvent(scrollEvt);
}

/**
 * ScrollBlockComponent
 * @test {ScrollBlockComponent}
 */
describe('ScrollBlockComponent', () => {
  beforeEach(() => {
    spy = createSandboxSpy(ScrollBlock.prototype, [
      'componentDidMount',
      'componentWillReceiveProps',
      'componentWillUnmount',
      'onScroll',
      'render',
      'fetchData',
      'setState',
    ]);
  });

  afterEach(() => {
    unmountComponent();
  });

  /**
   * @test {ScrollBlockComponent#constructor}
   */
  it('should be an instance of React.Component', () => {
    mountComponent(0);
    TestUtils.isCompositeComponent(scrollBlock).should.be.true;
  });

  /**
   * @test {ScrollBlockComponent#constructor}
   */
  it('should have appropriate state and props', () => {
    mountComponent(0);
    scrollBlock.state.isLoading.should.exist;
    scrollBlock.state.isLoadedAll.should.exist;
    scrollBlock.props.fetchData.should.be.a('function');
    scrollBlock.props.itemsCount.should.be.a('number');
    scrollBlock.props.itemTotal.should.be.a('number');
  });

  /**
   * @test {ScrollBlockComponent#componentDidMount}
   */
  describe('ScrollBlockComponent#componentDidMount', () => {
    /**
     * @test {ScrollBlockComponent#componentDidMount}
     */
    it('should have been called', () => {
      mountComponent(0);
      sinon.assert.calledOnce(spy.componentDidMount);
    });

    /**
     * @test {ScrollBlockComponent#componentDidMount}
     */
    it('should fetch data when none are loaded', () => {
      mountComponent(0);
      scrollBlock.props.itemsCount.should.equal(1);
      // or
      spy.componentDidMount.getCall(0)
        .thisValue.props.itemsCount.should.equal(1);
    });

    /**
     * @test {ScrollBlockComponent#componentDidMount}
     */
    it('should not fetch data when some are already loaded', () => {
      mountComponent(3);
      scrollBlock.props.itemsCount.should.equal(3);
      // or
      spy.componentDidMount.getCall(0)
        .thisValue.props.itemsCount.should.equal(3);
    });
  });

  /**
   * @test {ScrollBlockComponent#componentWillReceiveProps}
   */
  describe('ScrollBlockComponent#componentWillReceiveProps', () => {
    /**
     * @test {ScrollBlockComponent#componentWillReceiveProps}
     */
    it('should have been called', () => {
      mountComponent(0);
      sinon.assert.calledOnce(spy.componentWillReceiveProps);
    });

    /**
     * @test {ScrollBlockComponent#componentWillReceiveProps}
     */
    it('should set isLoading state to false when props has new children', () => {
      mountComponent(0);
      spy.componentWillReceiveProps.getCall(0).args[0].children.should.exist;
      // Assert the result of calledWith; the bare call is a no-op otherwise.
      spy.setState.getCall(1).calledWith({isLoading: false}).should.be.true;
    });
  });

  /**
   * @test {ScrollBlockComponent#componentWillUnmount}
   */
  describe('ScrollBlockComponent#componentWillUnmount', () => {
    /**
     * @test {ScrollBlockComponent#componentWillUnmount}
     */
    it('should have been called', () => {
      mountComponent(0);
      unmountComponent();
      sinon.assert.calledOnce(spy.componentWillUnmount);
    });
  });

  /**
   * @test {ScrollBlockComponent#onScroll}
   */
  describe('ScrollBlockComponent#onScroll', () => {
    scrollEvt = document.createEvent('Events');
    scrollEvt.initEvent('scroll', false, false);

    /**
     * @test {ScrollBlockComponent#onScroll}
     */
    it('should load the data when the scrollbar reaches the end', () => {
      mountComponent(6);
      const elem = React.findDOMNode(scrollBlock);
      elem.scrollHeight = 100 * scrollBlock.props.itemsCount;
      elem.clientHeight = 200;
      dispatchScrollEvent(elem, elem.scrollHeight - elem.clientHeight);
      sinon.assert.called(spy.onScroll);
      scrollBlock.props.itemsCount.should.equal(7);
    });

    /**
     * @test {ScrollBlockComponent#onScroll}
     */
    it('should not load the data when the scrollbar is not at the end', () => {
      mountComponent(6);
      const elem = React.findDOMNode(scrollBlock);
      elem.scrollHeight = 100 * scrollBlock.props.itemsCount;
      elem.clientHeight = 200;
      dispatchScrollEvent(elem, 200);
      sinon.assert.called(spy.onScroll);
      scrollBlock.props.itemsCount.should.equal(6);
    });

    /**
     * @test {ScrollBlockComponent#onScroll}
     */
    it('should not load the data when all data are loaded', () => {
      mountComponent(10);
      const elem = React.findDOMNode(scrollBlock);
      elem.scrollHeight = 100 * scrollBlock.props.itemsCount;
      elem.clientHeight = 200;
      dispatchScrollEvent(elem, elem.scrollHeight - elem.clientHeight);
      sinon.assert.called(spy.onScroll);
      sinon.assert.notCalled(spy.fetchData);
      scrollBlock.props.itemsCount.should.equal(10);
    });
  });

  /**
   * @test {ScrollBlockComponent#render}
   */
  describe('ScrollBlockComponent#render', () => {
    /**
     * @test {ScrollBlockComponent#render}
     */
    it('should have a className \'ScrollBlock\'', () => {
      mountComponent(0);
      React.findDOMNode(scrollBlock).className.should.equal('ScrollBlock');
    });
  });

  /**
   * @test {ScrollBlockComponent#fetchData}
   */
  describe('ScrollBlockComponent#fetchData', () => {
    /**
     * @test {ScrollBlockComponent#fetchData}
     */
    it('should set isLoading state to true', () => {
      mountComponent(0);
      sinon.assert.calledOnce(spy.fetchData);
      // Assert the result of calledWith; the bare call is a no-op otherwise.
      spy.setState.getCall(0).calledWith({isLoading: true}).should.be.true;
    });
  });
});
The size of the blocks in the noise pattern. Range: 1 to 999; Default: 20

Stretches or compresses the blocks in the noise pattern along the X or Y axis independently. Use this to adjust the aspect ratio of the blocks. Default: 0, 0

The overall strength of the effect. At 0, the effect does nothing, leaving the layer unchanged. Range: 0 to 1; Default: 0.25

If this is turned on, the randomly generated blocks will be a uniform gray color, and when multiplied with the colors in the layer will affect the layer brightness only. If turned off, the randomly generated blocks can be any color. Default: off

Controls automatic variance over time. Turn this on to freeze the noise pattern, causing it to remain constant over time. If this is off, the noise pattern will change randomly with every frame. Default: off

Forces blocks in the noise pattern to appear either at full strength or not at all. In-between strengths are not generated. Default: off

Shifts the random block pattern horizontally or vertically by the given amount. Default: 0, 0

The amount by which RGB noise components can overshoot the maximum possible values. If this is set to 1.0, the generated components will be at most one, so when multiplied with colors in the layer they can only make the existing colors darker. Setting this to a value greater than one yields random RGB values that can go over 1.0 and can thus make colors in the layer brighter. Note that when this is greater than 1.0, color clipping may occur. Range: 0.5 to 2; Default: 2

A value used to generate the random patterns of noise. Change this for different noise patterns. For control over when the noise pattern changes, turn on Freeze and animate Seed instead. Range: 0 to 5; Default: 0

Add Block Noise to any layer to generate a noise pattern that automatically changes with each frame over time. If you don't want the pattern to change automatically with each frame, turn on Freeze.
Because Block Noise is a multiplicative effect, it will be hard to see on dark layers, and invisible on layers that are pure black. To see the random blocks generated by this effect in their original colors, use the effect on a pure white layer. - Create striped noise patterns by setting Stretch to an extreme value. - Simulate the "snow" pattern that appears when an analog TV doesn't have an input signal. To do this start with a white shape layer, add Block Noise, set Size to a small value (between 5 and 10 works best), and turn on Monochrome. Finish up by adding a weak Gaussian Blur to the layer. - Create interesting maps for use with Displacement Map by turning on Freeze and adding multiple instances of Block Noise with different settings, then using keyframes to animate Offset. - Slow down the shifting of the pattern by combining this with Time Quantization. - Simulate digital glitching by adding a neutral gray shape layer in front of a photo or video, and applying Block Noise and then Displacement Map. In the Displacement Map effect, turn on From Center and use a small value for the Displacement Map's X and Y offset.
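The multiplicative idea behind the effect can be sketched in plain Python. This is a minimal illustration, not the plugin's actual implementation; the parameter names mirror the controls described above (Size, Strength, Seed), and one random factor is shared by every pixel in a block:

```python
import random

def block_noise(width, height, size=20, strength=0.25, seed=0):
    """One multiplicative factor per block, each in (1 - strength, 1.0]."""
    rng = random.Random(seed)          # the Seed parameter
    blocks = {}                        # block coordinate -> its random factor
    factors = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            key = (x // size, y // size)   # Size sets the block granularity
            if key not in blocks:
                blocks[key] = 1.0 - strength * rng.random()
            factors[y][x] = blocks[key]
    return factors

noise = block_noise(64, 64, size=8, strength=0.5, seed=3)
# Multiplicative: black (0) stays black, so the effect vanishes on dark layers.
print(0 * noise[0][0], 200 * noise[0][0] <= 200)
```

Since every factor is at most 1.0, this corresponds to the darkening-only case described above; factors above 1.0 would brighten the layer instead.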
Thanks so very much, I will do the same and get back to you.

On 3/31/21, Brian Vogel <email@example.com> wrote:
On Wed, Mar 31, 2021 at 02:18 PM, Vinod Benjamin wrote: how to create a debug log.

Here's a pasted copy of the file I gave the link to. I understand, since it's a Word document, and that's where your reading problem currently lies, that you wouldn't be able to read it.

*Collecting NVDA Debugging Information to Report to NVAccess.*

Whether you are someone who has a GitHub account for NVDA or not, when it comes to reporting detailed information to the developers of NVDA when it's needed, you will be called upon to turn on debug-level logging so that they can get a handle on exactly what's happening. You will often need to include this (or at least a chunk of the log) along with a description of what you have done (in exact steps, when possible) and what isn't working as expected.

Very often, issues are triggered by an add-on rather than NVDA itself, so the first thing you should try is restarting NVDA with add-ons disabled, and see if you can recreate your problem, or if it goes away. If it goes away, then you know that an add-on is triggering it. You then have to go through the add-ons manager, first disabling all add-ons except one, and restarting NVDA as usual, until turning one of them on causes your problem to reappear. Once you know which add-on is causing the issue, this needs to be reported to the developer of that add-on rather than the NVDA developers.

Your NVDA key is either INSERT (desktop keyboard layout) or CAPS LOCK (laptop keyboard layout), depending on how you have NVDA set up.

To restart NVDA with add-ons disabled:
1) Press NVDA+Q
2) Down arrow to 'Restart with add-ons disabled'
3) Press ENTER

Next, try to recreate the issue - do whatever causes problems.
If the problem(s) still occur, even with all add-ons disabled, then you'll want to proceed to changing the NVDA logging level so you have detailed information for reporting purposes. If the problem(s) are gone, then go through the steps to figure out which add-on is causing them, noted above.

To set your log level:
1) Press NVDA+Control+G to open the general settings
2) Press TAB until the focus is on 'Log level'
3) Press DOWN ARROW to get to 'Debug'
4) Press ENTER to close settings
5) Press NVDA+Control+C to save settings.

When you restart NVDA, *please do so with add-ons disabled*, which will give the cleanest log, using the previously noted steps. Once NVDA is running with add-ons disabled, recreate the issue - do whatever causes problems. Once the problem(s) have occurred, there will be log information that needs to be sent to the developers. There are several ways to access the NVDA log file afterwards.

If NVDA is still running and usable:
1) Press NVDA+F1 to open the log viewer
2) Press CONTROL+A to select all.
3) Press CONTROL+C to copy.
4) Press CONTROL+V to paste the copied log where you need to paste it.

Instead of using the log viewer, or if NVDA has stopped and you needed to restart it or the computer:
1) Press WINDOWS+R to open the Windows Run dialog
2) Type %temp% and press ENTER (that's the percent sign, the letters T E M P and another percent sign). Windows Explorer should open to the temporary folder.
3) Press TAB to move to the file list
4) Press N and move down to find up to three files: nvda.log (the log file for the current or most recent NVDA session), nvda-old.log (the log from the previous session) and nvda-crash.dmp (a crash dump with more information, created if NVDA itself crashes).
5) Depending on whether you're filing a GitHub issue or using an email program to send information to firstname.lastname@example.org ( email@example.com ), exactly what you do will be different in regard to attaching files.
That being said, attach as many of those three files to an email as would contain useful information. This is generally the nvda.log file where debug level logging was active, and the nvda-crash.dmp file if NVDA crashed. As always, give a detailed description (including steps to reproduce) of what has happened as well as a description of what you expected to happen. Brian - Windows 10 Pro, 64-Bit, Version 20H2, Build 19042 Always remember others may hate you but those who hate you don't win unless you hate them. And then you destroy yourself. ~ Richard M. Nixon
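For anyone who prefers to grab the log files programmatically instead of browsing to %temp% in Explorer, here is a small, hedged Python sketch of my own (not part of the original instructions; the three file names come from the steps above):

```python
# Hedged sketch (my own, not from the email): list the NVDA log files in
# the user's temp directory -- the same folder the %temp% steps open.
import os
import tempfile

def find_nvda_logs(temp_dir=None):
    """Return full paths of any NVDA log/crash files found in temp_dir."""
    temp_dir = temp_dir or tempfile.gettempdir()
    wanted = {"nvda.log", "nvda-old.log", "nvda-crash.dmp"}
    return sorted(
        os.path.join(temp_dir, name)
        for name in os.listdir(temp_dir)
        if name in wanted
    )

print(find_nvda_logs())
```

On a machine where NVDA has run, this prints the same files you would attach to the report; elsewhere it prints an empty list.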
Difference between environment variables and System properties. I am using the link below to understand environment variables and system properties: https://docs.oracle.com/javase/tutorial/essential/environment/env.html The link says environment variables are set by the OS and passed to applications. When I fetch environment variables using System.getenv() it shows me a lot of properties which I never set. So it must be the OS (I'm using macOS) which set these properties. Some of the properties in System.getenv() are MAVEN_CMD_LINE_ARGS, JAVA_MAIN_CLASS_1420, JAVA_MAIN_CLASS_1430. My question is: why would the OS set Java-specific properties in environment variables? Ideally these should be set by the JVM (in System.getProperties()). P.S.: From whatever I have read on the net, I understand that environment variables are set by the OS and system properties are set by the JVM. Also, if someone can point me to a good link on environment variables and system properties it will be very helpful. I am very confused between the two. Environment variables are OS-level variables; they don't necessarily need to be set by the OS. You can set them yourself as well, and sometimes scripts do the same (like the Maven script, which will export/set variables as well). Java system properties are those which are passed to your program with -D on the command line (or programmatically through System.setProperty). Environment variables are used on the OS to configure stuff. These variables can be used by any program. They can be set system-wide or by specific users. Different programs are interested in different environment variables. Java just happens to be one of the programs running on the OS. So System.getenv() is Java's way of giving your program access to environment variables (because your program may be interested in one of them). Nothing special. I'm sure you know all about Java's own system properties. That's a Java-specific way of configuring stuff throughout the JVM instance.
@ernest_k You mean to say that the values which I get by System.getenv() are sent to other processes also, for example Firefox, Chrome, etc.? Are the properties I mentioned in the question also passed to other applications? Is that possible? @tin_tin Yes. Not always, though (if you understand how variables work, you'll see that they can be bound to a shell session such that only programs launched in that session will "see" those variables). This may be OS-specific. So if you have OS-wide variables, all processes have access to them. If you have user-level variables, all processes run by that user have access to them. It's more like "programs pull variables" rather than "the OS pushes variables to programs". @ernest_k It is more like the process (program) that starts another process (program) pushes environment variables to the new process. Every process has its own copy of environment variables, so modifying the environment variables of a process will not affect the environment variables of other processes started by that process. @Andreas I've never seen things like that (yep, never tested variables where I launch the process). The relevance of pull/push is minor here, though; but I need to check how variables reach running programs. Thank you. Environment variables are an OS concept, and are passed by the program that starts your Java program. That is usually the OS, e.g. a double-click in an explorer window or running a command in a command prompt, so you get the OS-managed list of environment variables. If another program starts your Java program1, e.g. an IDE (Eclipse, IntelliJ, NetBeans, ...) or a build tool (Maven, Groovy, ...), it can modify the list of environment variables, usually by adding more. E.g. the environment variable named MAVEN_CMD_LINE_ARGS would tend to indicate that you might be running your program with Maven. In a running Java program, the list of environment variables cannot be modified. System properties are a Java concept.
The JVM will automatically assign a lot of system properties on startup. You can add/override the values on startup by using the -D command-line argument. In a running Java program, the list of system properties can be modified by the program itself, though that is generally a bad idea. 1) For reference, if a Java program wants to start another Java program, it will generally use a ProcessBuilder to set that up. The environment variables of the new Java process will by default be the same as those of the current Java program, but can be modified for the new Java program by calling the environment() method of the builder. I was looking for an answer like yours. The answers by you and ernest_k cleared a lot of doubts. Feels like nirvana.
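To make the distinction concrete, here is a minimal, self-contained Java sketch (my own illustration, not from the answers above; the property name my.flag and the variable name CHILD_ONLY_VAR are made up for the example):

```java
import java.util.Map;

public class EnvVsProps {
    // System properties are JVM-level and mutable at runtime;
    // setting one here is the programmatic equivalent of -Dmy.flag=on.
    static String setAndReadProperty() {
        System.setProperty("my.flag", "on");
        return System.getProperty("my.flag");
    }

    public static void main(String[] args) {
        // Environment variables: a read-only snapshot inherited from the
        // process that launched the JVM (shell, IDE, Maven, ...).
        Map<String, String> env = System.getenv();
        System.out.println("Inherited " + env.size() + " environment variables");

        // System properties: owned by the JVM itself.
        System.out.println("my.flag = " + setAndReadProperty());

        // A child process gets a *copy* of this environment; edits to the
        // copy never reach the parent or sibling processes.
        ProcessBuilder pb = new ProcessBuilder("java", "-version");
        pb.environment().put("CHILD_ONLY_VAR", "42");
        System.out.println("child copy has it: " + pb.environment().containsKey("CHILD_ONLY_VAR"));
        System.out.println("parent env has it: " + System.getenv().containsKey("CHILD_ONLY_VAR"));
    }
}
```

Running it shows the parent's System.getenv() is unaffected by the edit made to the ProcessBuilder's copy.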
Git is a distributed version control system created by Linus Torvalds in 2005. The same guy who created Linux. Genius. Before Git, Apache Subversion (SVN) was one of the popular version control systems. SVN is a centralized version control system which stores all the files in one central server. If two developers are working on the same file, they will have to resolve conflicts all the time. Unlike with Git, developers cannot commit changes while offline. Below is a list of basic commands to save your changes as commits and push them up to the server (GitHub) using Terminal. I'm a terminal person. No GUI. I know that VSCode has a built-in user interface for Git. There are existing Git clients too.

git checkout develop # Go to develop branch
git add . # add all the files you have edited
git commit -m "Add changes to the files" # add a commit message for the files you've added
git push origin develop # Push the commits to the develop branch on the server

A hosting service that works with Git. There are alternative hosting services that work with Git as well. For example, Bitbucket and GitLab. However, I like GitHub the best because the UX is smooth af. FYI, Microsoft acquired GitHub and I find this tweet hilarious. A popular branching strategy created by Vincent Driessen. My team at HOOQ is using it. We use it together with Git HubFlow and we are puking rainbows. There's no right or wrong branching strategy, you just have to find the model that best suits your team. Everyone works on the master branch only. This is useful to avoid long-lived branches which result in accumulating merge conflicts.
- Master branch
- Forces developers to do small commits and changes frequently
- Easy to understand
- Iterate quickly, ship faster
- Incomplete features may be released if no feature flag is used
- Bugs are introduced easily
- Frequent merge conflicts if developers work on the same files
Master branch is the stable branch. Nobody touches it.
Everyone works on their own feature branch, then opens a pull request to merge the feature branch into the develop branch. A release branch is branched out from the develop branch and then merged back into the master and develop branches. This encourages continuous delivery.
- Master branch
- Develop branch
- Feature branch
- Release branch
- Hotfix branch
- Easy to work in parallel
- Release branch is useful to track releasable features
- A little complicated for first-time users
One-line commands for using Gitflow with GitHub. This amazing tool is created by DataSift. You guys are da MVP.

git checkout develop
git pull origin develop
git checkout -b release/production-1.0.0
git add .
git commit -m "Add new release"
git push origin release/production-1.0.0
git checkout master
git pull origin master
git merge release/production-1.0.0
git tag production-1.0.0
git push --tags origin production-1.0.0
git checkout develop
git pull origin develop
git merge master
git push origin --delete release/production-1.0.0

git hf release start production-1.0.0
git add .
git commit -m "Add new release"
git hf release finish production-1.0.0

There are numerous factors to consider when deciding what is the best branching strategy for your team. Does your team require continuous delivery? How comfortable are your team members adopting an entirely new workflow versus tweaking an existing workflow to suit their needs? Personally, I like Gitflow because of the separation of concerns for different features and the ease of creating a release with releasable features. But I do believe there is no single best workflow. I have not tried SVN. Not entirely true, I tried playing with SVN but it was too complicated for my liking. I am spoiled by Git. There may be parts in the illustration that are inaccurate, please correct me! Also talk to me on Twitter @linxea_. I'm bored over there. Also, I made a slide for my workshop tomorrow at https://linxea.github.io/git-github-gitflow. Feeling fancy with reveal.js.
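The branch list above includes a hotfix branch, but no commands are shown for it. Here is a hedged sketch of a plain-Git hotfix flow (branch and tag names are illustrative; a throwaway repo is created so the commands can be run as-is):

```shell
# Sketch of a Gitflow hotfix: branch from master, fix, then merge back
# into BOTH master and develop so the next release keeps the fix.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email "demo@example.com" && git config user.name "Demo"
echo "v1.0.0" > version && git add . && git commit -qm "Initial release"
git branch -M master          # normalise the branch name for the demo
git branch develop            # develop starts at the release point

git checkout -qb hotfix/production-1.0.1 master
echo "v1.0.1" > version && git commit -qam "Fix critical production bug"

git checkout -q master && git merge -q hotfix/production-1.0.1
git tag production-1.0.1
git checkout -q develop && git merge -q hotfix/production-1.0.1
git branch -d hotfix/production-1.0.1
```

With Git HubFlow the same flow collapses to `git hf hotfix start` / `git hf hotfix finish`.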
Common errors with solutions
There are several reasons an agent can lose communication to the control plane. Common examples include:
- Running a network-based attack that affected the traffic. Ensure both api.gremlin.com and DNS are white-listed.
- Running a CPU attack has starved Gremlin of the ability to compute API encryption. This is rare, but it does happen.
In the event of a LostCommunication error, the Gremlin agent will trigger its dead-man switch and cease all attacks. This can occur on a host when running a network attack, when a previous network attack had been run AND the agent was killed mid-attack by the user, system or other tool which did not allow Gremlin to run garbage collection. To solve, please run
There are two non-exclusive modes of failure that can occur with this error message:
- The running version of Gremlin is several versions out of date - Update the Gremlin agent or docker image
- The file /var/lib/gremlin/executions has become corrupt - Delete the file
Docker has killed the container via kill -9. This is often attributed to OOM issues, and is most often seen when running a memory attack. Allocating more RAM to Docker usually solves the issue.
Unable to find local credentials file: Gremlin is not configured to point to the correct credentials file, usually located in /var/lib/gremlin. Ensure the credentials file(s), either certificates or API keys, exist and Gremlin has read+write access.
Permission denied (os error 13): The Gremlin container does not have proper filesystem permissions. Gremlin requires write access to /var/lib/gremlin, including the ability to create new files.
Check permissions on the host, and ensure write access is being passed through via docker when running the container. This is often observed in the context of Capabilities: Unable to inherit one or more required capabilities: cap_net_admin, cap_net_raw
Solution: You'll need to add some capabilities to that docker container (full list here: https://help.gremlin.com/security/#linux-capabilities)
docker run -it --cap-add=NET_ADMIN --cap-add=KILL --cap-add=SYS_TIME gremlin/gremlin syscheck
The Gremlin agent is unable to authenticate against the API. Causes of this error are usually bad or missing credentials files or certificates, or a revocation issued against the client.
401 Unauthorized - Authorization header is missing or malformed
Client has been revoked (401 Unauthorized)
AUTH_RENEW: 401 Unauthorized
- Ensure you have valid credentials (certificates or API keys) placed in a location that Gremlin can read from, and that Gremlin has proper read+write access to it
- Remove the file /var/lib/gremlin/.credentials if it exists
This error can also be the result of a race condition when the Gremlin daemon is started prior to the environment variables being exported. In some specific cases, this error can also occur when multiple hosts or agents are configured with the same identifier. Common places this can occur:
- Improperly configured ECS/Kubernetes/Mesosphere where multiple Gremlin agents are assigned the same virtual IP
- Missing HOST metadata on AWS/GCP/Azure which causes Gremlin to revert to the default localhost identifier
The client limit for your company or team has been reached; Gremlin does not have a license to apply to the client. You may terminate or revoke existing clients, or contact sales to increase the client limit.
The account, most likely a trial account, has expired. Please contact sales to extend the trial.
This is most often attributed to a host having bad time data. Verify the system clock of the host and try again.
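Returning to the "Permission denied (os error 13)" case: a quick way to script the filesystem check is the sketch below (my own helper, not a Gremlin tool; the default path /var/lib/gremlin comes from the docs above):

```shell
# Hedged sketch: report whether the Gremlin state directory exists and is
# writable by the current user.
check_gremlin_dir() {
    dir="${1:-/var/lib/gremlin}"
    if [ -d "$dir" ] && [ -w "$dir" ]; then
        echo "writable"
    else
        echo "not writable"
    fi
}

# On a Gremlin host, call it with no argument to check /var/lib/gremlin;
# here we check /tmp just to demonstrate the output.
check_gremlin_dir /tmp
```

If it reports "not writable", fix ownership/permissions on the host (or the volume mount passed to docker) before restarting the agent.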
If this problem persists past validating your host's system clock, please reach out to support ASAP. An error code of 409 indicates there is a conflicting attack running on the host. This is most often seen in the case of one network attack running (e.g. a blackhole attack) and attempting to launch a second network attack. However, this can also be seen when trying to run two concurrent network or state attacks against the same target.
The model of how the Monitor command line works in the Apple II Reference Manual, and most other documentation, isn't really in line with how it does work. It doesn't help that the Monitor itself, at least as far as command line parsing goes, is more a set of hacks than anything truly coherent. The following is an excerpted and edited copy of my personal ... Immediate mode constants allow the use of modifiers to select the high/low byte of an address. From the Merlin manual: 6.4 Immediate Data For those opcodes such as LDA, CMP, etc., which accept immediate data (numbers as opposed to the contents of addresses) the immediate mode is signaled by preceding the expression with a "#". An example is LDX #... In assembler, a label is just a number representing an address. On the 6502, addresses are 16 bits, but the accumulator can only contain 8 bits at a time. What you need is to extract the high and low halves of the address as distinct immediate operands, so that you can store them in the zero-page pointer location. I'm not familiar with this particular ... Apparently, the correct syntax is as follows: START LDA #<DATA The #< and #> seem to indicate to the assembler that we are going for the LSB and MSB of the address of DATA, not the DATA itself. I'll leave the question open in case someone wants to elucidate on the why/... I did some looking around on the internet archive, browsing through a few collections, and I ran across this variant: Rhode Island Apple Group Volume 14 - Integer Basic Games The disk contains what could be a variation, or an ancestor (or even a descendant) of the code listed above. There are enough similarities to look suspicious, but most of these ... Okay, stealing text from the Apple Monitor Unpeeled: $31 MODE - This byte is used by the Monitor command processing routines to control parsing and to control operations when a blank is encountered after the hex digits.
For example, a hex address followed by a colon causes setting of MODE so that during further processing of the input line each blank ... I think I found an ancestor in the Nibble magazine program index from volume 2, number 7, 1981: Catsup Catalog Supervisor Weber, Chuck Express II, V2N7 1981 You can run it online or download the disks in a zip archive. (It's on NIB06.DSK.) Another ancestor might be Beagle Brothers' KEY-CAT from Utility City. For getting your first taste of 6502 assembly, I recommend doing the web-based tutorial Easy 6502. You should be able to get through it in a few hours. Once you've got the basic ideas down, if you're going to learn 6502 assembler at the level of writing non-trivial routines and programs you're going to have to write a fair amount of it. I recently wrote a ... I realize the OP asked specifically about Assembly Language, but I felt strongly enough about the quality of "Machine Language for Beginners" that I wanted to post it as an answer. Given that the Apple IIe has a pretty decent built-in monitor, this book is a natural fit for getting the basics down. I completely understand the utility of a good assembler, ... Creative Computing Magazine printed Stephen R. Berggren's Apple Nuclear Power Plant simulator in December 1980. There were many variants/developments on this for different platforms, and some added graphics. There's a playable version of the original at kevinr/apple-nuclear-power-plant-sim: Stephen R. Berggren's 1980 Apple Nuclear Power Plant sim. No, you can't substitute; the 6502A was used precisely because it is faster for some things, even when not run at a higher clock rate. Apple IIe Technical Note #2: Hardware Protocol for Doing DMA (starting on page 2 of that PDF) explains this. On page 4 of 9 of the note it says: In the Apple IIe a 6502A, a 2 MHz part is used instead of the 1 MHz 6502 ... I've written a small program that confirms that lines of text do tear if modified while being scanned. 
It's not easy to see (it would have been a large amount of extra work to do the exact sync that would make it really clear), but as it runs, amongst all the flickering you can see diagonal lines across the line of text where the line tears due to reading ... For many questions in the early days, the answer was Beagle Brothers. In this case DiskQuik. DISK DRIVE EMULATOR by HARRY BRUCE and GENE HITE (REQUIRES APPLE IIe WITH EXTENDED 80-COLUMN CARD) AN IN-MEMORY "DISK DRIVE" DiskQuik acts like a disk drive connected to Slot 3, but it is much faster, quieter and more reliable. I worked with the Apple II from the assembly code side many years ago for a gaming company. I just remember that you could split the screen so that you could show graphics and text at the same time. I remember it being pretty well behaved. You could not easily do the same thing on the Commodore 64. We had to do special coding to switch between graphics and ... Hardware management on the Apple II is done by accessing a set of 'softswitches', addresses that, when accessed, set certain modes. For the screen there are several locations:
$C050 Select Graphics
$C051 Select Text
$C052 Full Screen (Graphics)
$C053 Mixed Screen
$C054 Page 1
$C055 Page 2
$C056 Select Low Res
$C057 Select Hi-Res
Usually they are accessed with a ... The screen refreshes 60 times per second (or 50 times in PAL countries) so a cell with one character in the top half and a different one in the bottom half would only be visible for 1/60th or 1/50th of a second. Under ordinary conditions, you won't notice it.
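As a hedged illustration of the softswitch list above (any read or write of the address flips the switch; LDA is conventional), a 6502 fragment that selects mixed hi-res graphics on page 1 might look like this:

```asm
; Sketch: select mixed hi-res graphics, page 1, via the softswitches
; listed above. The loaded value is discarded; the access itself is
; what triggers the switch.
        LDA $C050   ; graphics mode
        LDA $C053   ; mixed screen (text window at the bottom)
        LDA $C054   ; display page 1
        LDA $C057   ; hi-res
        RTS
```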
import { defaultRequiredState, FilledRequiredState, RequiredState } from "./RequiredState" import { findAllPlayersID, getOrError, getPlayerHero } from "../util" /** * Sets up the state to match the passed state requirement. * @param stateReq State requirement to match. */ export const setupState = (stateReq: RequiredState): void => { print("Starting state setup") // Use defaults and override them with the passed state. Does not work if we have nested objects. const state: FilledRequiredState = { ...defaultRequiredState, ...stateReq } // Player / hero let hero = getOrError(getPlayerHero(), "Could not find the player's hero.") if (hero.GetCurrentXP() !== state.heroXP || hero.GetUnitName() !== state.heroUnitName) { hero = PlayerResource.ReplaceHeroWith(hero.GetPlayerOwner().GetPlayerID(), state.heroUnitName, state.heroGold, state.heroXP) } // Focus all cameras on the hero const playerIds = findAllPlayersID() playerIds.forEach(playerId => PlayerResource.SetCameraTarget(playerId, hero)) // Move the hero if not within tolerance if (state.heroLocation.__sub(hero.GetAbsOrigin()).Length2D() > state.heroLocationTolerance) { hero.Stop() hero.SetAbsOrigin(state.heroLocation) } hero.SetAbilityPoints(state.heroAbilityPoints) hero.SetGold(state.heroGold, false) // Golems if (state.requireSlacksGolem) { createOrMoveGolem(CustomNpcKeys.SlacksMudGolem, state.slacksLocation, state.heroLocation) } else { clearGolem(CustomNpcKeys.SlacksMudGolem) } if (state.sunsFanLocation) { createOrMoveGolem(CustomNpcKeys.SunsFanMudGolem, state.sunsFanLocation, state.heroLocation) } else { clearGolem(CustomNpcKeys.SunsFanMudGolem) } } function createOrMoveGolem(unitName: string, location: Vector, faceTo?: Vector) { const context = GameRules.Addon.context const postCreate = (unit: CDOTA_BaseNPC) => { if (unit.GetAbsOrigin().__sub(location).Length2D() > 100) { unit.SetAbsOrigin(location) unit.Stop() } if (faceTo) { unit.FaceTowards(faceTo) } context[unitName] = unit } if (!context[unitName] || 
!context[unitName].IsAlive()) { CreateUnitByNameAsync(unitName, location, true, undefined, undefined, DotaTeam.GOODGUYS, unit => postCreate(unit)) } else { postCreate(context[unitName]) } } function clearGolem(unitName: string) { const context = GameRules.Addon.context if (context[unitName]) { if (IsValidEntity(context[unitName])) { context[unitName].RemoveSelf() } context[unitName] = undefined } }
You might have heard that Data Analysis jobs are constantly in demand, especially in this modern, data driven world. Nowadays data skills are an extremely desirable asset that can instantly enhance your CV! If you’re looking for a way to enrich your data analysis portfolio, I would definitely recommend you to learn SQL, as it’s a very powerful tool when handling and analysing data. In a nutshell, SQL lets you access and manipulate data from relational databases through the use of structured queries and commands. There are many versions of SQL, but in this post I will focus on how to learn Oracle SQL and SQLite. How to start learning SQL? I would say that learning SQL will be no different to getting started with any other skill: You need to start with the basics and establish a solid foundation that you can add to from there onwards. You should also bear in mind that the best way to acquire proficiency in something is to practise it, as often and with as much effort as you can! When it comes to practising SQL, you will need 2 basic things: data and an SQL editor. You can go for free online options that come with built-in datasets (eg. W3Schools and Oracle Live SQL) or you can choose to work with your own data and install an SQL editor, as explained here. 1. Starting with the Basics One of the best resources to start learning the basics of SQL (for free!) is perhaps the W3 Schools website. - The site has its own sample database which includes several tables with varying amounts of data (see screenshot below). The tables aren’t very big, but they should suffice for practising basic SQL commands. - The site also offers an online code Editor where you can write your own SQL commands and test them against the sample datasets provided. - As long as you have a browser (eg. Chrome or Mozilla Firefox) that lets you access the online editor, you can use this resource from any device. - This means you don’t have to install any applications on your device to run your SQL code. 
- Once you've familiarised yourself with some basic SQL, you can test your understanding with the SQL Exercises offered by the site.
2. Get your hands on Oracle SQL
Even though the previous resource is a great starting point, you will need more complex data as you advance through your SQL learning journey. In my case, I was introduced to Oracle SQL while working on my last project. Even though I had already built a solid SQL foundation, there were times when I needed to learn new SQL methods: luckily, that's when I heard about Oracle Live SQL. What is Oracle Live SQL? Oracle Live SQL is a great, FREE platform provided by Oracle that helps you learn more about Oracle SQL and practise different queries.
- It has hundreds of tutorials that will teach you all the Oracle SQL basics (for example, check out this Introduction to SQL tutorial)
- You can access many more tutorials through their Code Library (see the screenshot below).
- The platform gives you instant read access to several Oracle Schemas (think of these as datasets), so you can practise your queries with "real life" data
- It comes with its own online SQL editor where you can write and run your queries.
- Bonus: The platform lets you save your queries so you can access them any time you like. This means you can easily pick up where you left off last time.
- Since Oracle Live SQL runs directly from a web browser (eg. Chrome), you won't need to install any software on your device.
- This means that you can use Oracle Live SQL from any device that can connect to their website.
Contrary to W3Schools, you must create an account in Oracle Live SQL before you can use their service. The account is free though, so kudos to Oracle for that ✅ What if you want to work with your own data and on your own device? So far, I've discussed 2 tools that let you practise your SQL whilst online, with the datasets provided within each resource.
But, what if you want to practise with your own data, or want to work locally, without having to connect to an online resource? In such cases, you can install a database browser on your device and use it to handle your datasets (think of the DB browser as a "viewer" for your database). A simple solution is for you to work with SQLite and to query your data via DB Browser for SQLite (this will be your "database viewer"). Note that the SQLite commands aren't too different from the Oracle SQL ones, so by practising SQLite you can still improve your general SQL skills. Setting up your data in DB Browser The following steps will show you how to load your data into the DB Browser: once loaded, you can start writing your queries and practising with SQLite!
1. If you haven't done so before, download DB Browser for SQLite and install it on your device.
2. Make a note of the folder where your database file is stored (in other words, where your data is stored). Note: If you haven't got any data yet, you can easily download an SQL dataset from Kaggle. Just make sure you choose an SQLite file as shown below.
3. Once you install DB Browser, open it and click on Open Database.
4. Navigate to the folder where your database is stored and select your file (make sure it's an SQLite file!). Hint: Go back to Step 2 if you forgot where your data is stored.
5. Click OK and voila! Your database will be loaded and will give you the following options:
Database Structure tab: This screen will list all your database tables, the columns within each table and the data type of each column.
Browse Data tab: You can visually inspect your data within this tab and you can apply any filters you see fit. This view is very useful for giving you a general idea of what your data looks like.
Execute tab: This is the tab that you'll be most interested in, as this is where you can practise writing and running your queries!
Once you Run your code, you’ll see the results of your queries right below the query editor. I hope the resources in this post have helped you on your SQL learning journey! I shared them because I found them to be extremely useful for my own learning. If you have any feedback about these resources or suggestions for this post, do leave a comment below. You can also get in touch with me at email@example.com. Thank you for reading and happy learning!
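As a parting example, the create-browse-query workflow from the DB Browser steps above can also be driven from Python's built-in sqlite3 module (a minimal sketch of my own; the customers table is made up for illustration):

```python
import sqlite3

# An in-memory database stands in for the .sqlite file you'd open in DB Browser.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Equivalent of the "Database Structure" tab: define a table.
cur.execute(
    "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, country TEXT)"
)

# Load a few rows (the "Browse Data" tab would show these).
cur.executemany(
    "INSERT INTO customers (name, country) VALUES (?, ?)",
    [("Ada", "UK"), ("Grace", "USA"), ("Alan", "UK")],
)

# The "Execute" tab: write and run a query.
rows = cur.execute(
    "SELECT country, COUNT(*) FROM customers GROUP BY country ORDER BY country"
).fetchall()
print(rows)  # [('UK', 2), ('USA', 1)]
```

Swap `":memory:"` for the path to your .sqlite file and the same code queries the database you loaded in DB Browser.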
If $(A/I)^n \cong (A/I)^m$ as modules, why does it follow that $m=n$? Can anyone explain to me why, if $(A/I)^n \cong (A/I)^m$ (isomorphic as modules) with $I$ being a maximal ideal (and thus $A/I$ a field), then $m=n$? What is $A$? An arbitrary ring? Sorry @JohnM.Campbell. It is a ring with 1, but other than that, yes, it is an arbitrary ring. I have to show that if I have an isomorphism between $A^m$ and $A^n$, then it means that $n=m$. The proof ends by saying that if $I$ is a maximal ideal and $(A/I)^n \cong (A/I)^m$, then $n=m$ because $A/I$ is a field. Do you recognize that $\text{some field}^n$ is a vector space and we know a lot about dimensional invariance of vector spaces? To expound on the comments: Presuming $A$ is a commutative ring with $1$ (if not commutative, you have to fuss about $I$ being a two-sided max ideal) then $A/I = k$, a field, and for $n \neq m$, $k^n \not\simeq k^m$ (this is simple to prove in terms of vector spaces), so if $k^n \simeq k^m$, then $n = m$, as required. To see $k^n\not\simeq k^m$ for $m\neq n$, note an isomorphism of vector spaces takes a basis to a basis. Any basis of $k^n$ has $n$ elements, and similarly for $k^m$, and so if $n\neq m$, there can be no bijection between sets of $n$ elements and $m$ elements. Isomorphisms are bijections. QED. Hello @Walkar, can you please explain to me why, if $A/I=k$ is a field, then $k^n$ is not isomorphic to $k^m$ if $n$ does not equal $m$? Thanks. Sure, let me edit. Why does this only work with fields? Why can't I just say that if I have an isomorphism between $A^n$ and $A^m$ with $A$ being a commutative ring with 1, then $n=m$ using the same argument you wrote? This is the only case you need at this point. Maybe it works in greater generality, but since $I$ is a max ideal, $A/I$ is a field. So this proof works. In other scenarios, you might have to use the property @goblin mentions in his post, but I was just using the case I knew (and you needed).
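The whole argument compresses to one line once you write $k = A/I$ and invoke invariance of vector-space dimension:

```latex
% With k = A/I a field, a module isomorphism over A/I is a k-linear
% isomorphism, and isomorphic vector spaces have equal dimension:
\[
k^n \cong k^m
\;\Longrightarrow\;
n = \dim_k k^n = \dim_k k^m = m.
\]
```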
ERROR: Out of memory requesting
06-24-2006, 06:58 PM
I've been playing the game and I keep getting this error (the memory values change). ERROR: Out of memory requesting 50331648 bytes (80477200 total free) I know I'm not running out of memory, I've got 2 gigabytes of memory and I set my virtual memory to 2048/4095 on all my partitions (Windows 2000). I'm running with everything on, including next generation content, with the exception of shadows. Here's a list of my hardware: AMD 64 dual core 4200+, Nvidia 7800 GTX 512, 2 GB Corsair XMS, 2 400 GB WD hard drives in a RAID 1 config, USB headphones for sound. Thanks for any help on this issue in advance.
06-25-2006, 03:46 AM
Well, with a system like that you can run any game with very high settings in high resolutions. Re the out of memory errors, perhaps don't take it so literally. Have you got all updates from M$, basically a patched and up to date OS? You don't mention if you have applied the TRL 1.1 or 1.2 patch? These are cumulative, so you don't have to worry about 1.1 if you have applied 1.2. I run with an X1800XT 256MB and I absolutely have to turn the shadows off (like you) in the in-game menu, otherwise it runs like a slideshow. Other than that I have everything on. Perhaps try disabling different effects like depth of field, water effects etc, until you get a more stable setup. Lastly, you could try looking for updated drivers for your sound, graphics and chipset. I have read on another forum that new Nvidia drivers are out with loads of bugfixes! Good luck and let us know if you find a resolution.
06-25-2006, 01:06 PM
I was running Forceware 84.43 beta, then upgraded to 91.31 beta. I'm running TRL v1.2. All my Windows updates are installed, and my nForce 4 drivers are "amd 6.70". I'm completely stumped on this one, not sure what the problem is. I might just try to completely reinstall it, as the problems were not happening at first.
I did notice a significant increase in fps when I upgraded to Forceware 91.31, but the crashing still persists. If anyone has any similar issues, or knows how to fix it, I would love to hear about it.
06-25-2006, 05:40 PM
I totally reinstalled it, and it still crashes with the same error.
06-26-2006, 11:45 PM
A hack job that seems to work (not happy about it though). Found out how to enable more config options from some site by editing the registry. To enable them, do the following: Open up: HKEY_CURRENT_USER/Software/Crystal Dynamics/Tomb Raider: Legend Add a DWORD key named: ExtendedDialog with a value of 1. Now on your shortcut for TRL, add -config at the end of the command. The only changes I made were: Uncheck the "Use 3.0 Shader Features" Switched to 8x Anisotropic Filtering from 16x
06-28-2006, 06:32 AM
Kudos to you for finding that one out and fixing it yourself! As you say though, "A hack job that seems to work (not happy about it though)." You just shouldn't have to do this though to get the game to work. It could be Crystal Dynamics' fault, it could be Nvidia's fault; PC gaming can be a nightmare really. If I was having trouble myself I'd dig into regedit (hate it in there) and look myself, but fortunately I seem to be having a good run with my gaming at the moment. Glad you appear to have TRL up and running now.
06-28-2006, 08:12 AM
Ya, it's good to finally be able to play the game. It's the only game I've had problems with since I can remember. Well, I often had to deal with performance/frame rate issues, but that's cause I crank everything up then turn things down gradually. The only game I had to do that for with my new setup though is Oblivion; that game was a beast. Anyway, I usually don't buy console ports, but I got TRL because PC Gamer gave it good reviews, so I figured I would give it a go. Now that I have it up and running, I'm having fun with it, good gameplay, and a change of pace from the normal shooters.
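For reference, the registry change described in the post above can be captured as a .reg file (a sketch built from the key and value named there; back up your registry before importing anything):

```reg
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Crystal Dynamics\Tomb Raider: Legend]
"ExtendedDialog"=dword:00000001
```

Double-clicking the file adds the ExtendedDialog value; you still need -config on the game shortcut to see the extra options.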
07-08-2006, 06:20 AM
Thanks for posting that registry hack fix Baggz! I was suffering from the same "out of memory" problem. Your fix to turn off the 3.0 shaders was the only thing that worked. :thumbsup:
<?php
namespace domath\utils;

class BasicCalculator{

    const ADD = 0;
    const SUBTRACT = 1;
    const MULTIPLY = 2;
    const DIVIDE = 3;
    const PERCENT = 4;
    const SQUARE = 5;
    const EXPONENT = 6;

    /**
     * Returns a formatted string with the input and the answer
     * @param mixed $input
     * @param int $mode
     * @return string
     */
    public static function toString($input, $mode){
        $output = "";
        $answer = null;
        $symbol = null;
        switch($mode){
            case self::ADD:
                $symbol = "+";
                if(is_array($input)) $answer = self::add($input);
                break;
            case self::SUBTRACT:
                $symbol = "-";
                if(is_array($input)) $answer = self::subtract($input);
                break;
            case self::MULTIPLY:
                $symbol = "*";
                if(is_array($input)) $answer = self::multiply($input);
                break;
            case self::DIVIDE:
                $symbol = "/";
                if(is_array($input)) $answer = self::divide($input);
                break;
            case self::PERCENT:
                $symbol = "%";
                if(is_array($input)) $answer = self::percent($input[0], $input[1]);
                break;
            case self::SQUARE:
                $symbol = "√";
                if(is_scalar($input)) $answer = self::square($input); // was is_string(), which silently skipped numeric input
                break;
            case self::EXPONENT:
                $symbol = "^";
                if(is_array($input)) $answer = self::exponent($input[0], $input[1]);
                break;
        }
        if(is_array($input)){
            foreach($input as $inputValue){
                $output .= $inputValue.$symbol;
            }
            return substr($output, 0, -1)."=".$answer; // drop the trailing symbol
        }
        else{
            return $symbol.$input."=".$answer;
        }
    }

    /**
     * Calculates the sum of all the values in $inputs
     * @param int[] $inputs
     * @return int
     */
    public static function add(array $inputs){
        $output = $inputs[0];
        foreach(array_slice($inputs, 1) as $input){
            $output += $input;
        }
        return $output;
    }

    /**
     * Calculates the difference of all the values in $inputs
     * @param int[] $inputs
     * @return int
     */
    public static function subtract(array $inputs){
        $output = $inputs[0];
        foreach(array_slice($inputs, 1) as $input){
            $output -= $input;
        }
        return $output;
    }

    /**
     * Calculates the product of all the values in $inputs
     * @param int[] $inputs
     * @return int
     */
    public static function multiply(array $inputs){
        $output = $inputs[0];
        foreach(array_slice($inputs, 1) as $input){
            $output *= $input;
        }
        return $output;
    }

    /**
     * Calculates the quotient of all the values in $inputs
     * @param int[] $inputs
     * @return int|string
     */
    public static function divide(array $inputs){
        $output = $inputs[0];
        foreach(array_slice($inputs, 1) as $input){
            if($input != 0) $output /= $input; // if the value is 0, skip the calculation
        }
        if(in_array(0, array_slice($inputs, 1), true)){ // strict check; a loose one would also match non-numeric strings
            return "ERROR"; // returned if there was one or more zeros
        }
        else{
            return $output; // returned if there were no zeros
        }
    }

    /**
     * Returns $input1 as a percentage of $input2
     * @param int $input1
     * @param int $input2
     * @return float
     */
    public static function percent($input1, $input2){
        return ($input1 / $input2) * 100;
    }

    /**
     * Calculates the square root of $input (matching the "√" symbol used in toString())
     * @param int $input
     * @return float
     */
    public static function square($input){
        return sqrt($input);
    }

    /**
     * Calculates $input raised to the power of $exponent
     * @param int $input
     * @param int $exponent
     * @return int
     */
    public static function exponent($input, $exponent){
        return $input ** $exponent;
    }
}
ShuTu SWC generator: make smooth Colab deploy

This work is being done in a Colab notebook, install_shutu_on_colab.ipynb. On 2019-10-12, it was shown that ShuTu can be installed on Colab. But the stock install instructions raise errors on Colab, harmless yet confusing errors. It would be better to rewrite build.sh as Colab code so that there are no errors. It's pretty simple: basically calling make twice. Also there is no need to install the demo data, because we will be using Allen Institute data.

Here is the contents of build.sh as found in the install ZIP file:

#!/bin/bash
set -e
if ! [ -x "$(command -v mpicc)" ]
then
    echo "Cannot find mpicc. Installing openmpi ..."
    if [ ! -d Downloads ]
    then
        mkdir Downloads
    fi
    cd Downloads
    downloadDir=$PWD
    MPI_RUN=$downloadDir/local/bin/mpirun
    if [ ! -f $MPI_RUN ]
    then
        if [ ! -d $downloadDir/openmpi-4.0.1 ]
        then
            package=openmpi-4.0.1.tar.gz
            host=https://download.open-mpi.org/release/open-mpi/v4.0
            if [ -x "$(command -v wget)" ]
            then
                wget $host/$package
            else
                if [ -x "$(command -v curl)" ]
                then
                    curl -c - -O $host/$package
                else
                    echo 'Failed to get $package: No download tool found.'
                    exit 1
                fi
            fi
            gunzip -c $package | tar xf -
            cd openmpi-4.0.1
            ./configure --prefix=$downloadDir/local --disable-mpi-fortran
            make all install
            cd ..
        fi
        echo "export PATH=$downloadDir/local/bin:\$PATH" >> ~/.bashrc
    fi
    cd ..
else
    MPI_RUN=mpirun
fi
export PATH=$downloadDir/local/bin:$PATH
cd mylib
make
cd ..
make
echo '#!/bin/bash' > process.sh
echo 'set -e' >> process.sh
echo 'if [ $# -lt 4 ]; then' >> process.sh
echo '  echo "Usage: ./process.sh <data_dir> <common_name> <param_file> <num_proc>"' >> process.sh
echo '  exit 1' >> process.sh
echo 'fi' >> process.sh
echo 'dataDir=$1' >> process.sh
echo 'commonName=$2' >> process.sh
echo 'paramFile=$3' >> process.sh
echo 'numproc=$4' >> process.sh
echo ${MPI_RUN}' -n $numproc ./createTiffStacksZeiss $dataDir $commonName' >> process.sh
echo ${MPI_RUN}' -n $numproc ./processImages $dataDir' >> process.sh
echo ${MPI_RUN}' -n $numproc ./stitchTiles $dataDir' >> process.sh
echo ${MPI_RUN}' -n $numproc ./ShuTuAutoTrace $dataDir $paramFile' >> process.sh
chmod u+x process.sh

That build.sh will install openmpi-4.0.1.tar.gz (via wget or curl) if need be, then just calls make twice and writes the convenience script, process.sh (which becomes just a few lines in Colab cells). So, cut out the openmpi install and the process.sh steps, and all that is left of build.sh is:

export PATH=$downloadDir/local/bin:$PATH
cd mylib
make
cd ..
make
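As a sanity check before trimming build.sh, its wget/curl fallback can be exercised as a dry run with the actual download replaced by an echo (a sketch; note that the original's `curl -c -` writes a cookie file, while `curl -C -` is the resume flag that was likely intended):

```shell
# Dry run of build.sh's download-tool fallback; no download is performed.
host=https://download.open-mpi.org/release/open-mpi/v4.0
package=openmpi-4.0.1.tar.gz
if [ -x "$(command -v wget)" ]; then
    echo "would run: wget $host/$package"
elif [ -x "$(command -v curl)" ]; then
    echo "would run: curl -C - -O $host/$package"
else
    echo "Failed to get $package: No download tool found."
fi
```

On a stock Colab VM both wget and curl are present, so the first branch fires and nothing needs installing by hand.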
I need some help; not an expert at all on this stuff. I have internet service of 30 MB and a wireless router, a Netgear WNDR3400v2, and my PS3 is only getting around 5 MB to 8 MB. Not sure what to do to get my 30 MB?

I know it seems like a lot, but if you read through my post and use the techniques I mention on how to streamline your connection bandwidth, it should increase your connection speed. It's important to remember that the advertised connection speed from your ISP is theoretical at best, and it only applies to the signal after it leaves your internal LAN. If your LAN runs at Jurassic-era network speeds (e.g., you're using Ethernet or Wireless G (the PS3's internal Wi-Fi), or a Wireless G router), your connection speed is going to suffer. The key thing to remember about network speeds is that your connection is only as fast as the slowest network device that you're connected to.

I have the newest model 360 GB PS3. Lately, as I have moved house and it is quite far away (a couple of rooms) from the router, it has had trouble connecting to the internet and PSN. I originally thought this was a problem with DHCP and the PS3 not being able to keep an IP address, so I gave it a static one. That did nothing, so now I assume the problem is with the PS3's wi-fi. What is an external adaptor for the ethernet port and where can I get one?

Personally, I would use the first one (TRENDnet TEW-647GA).

What is port triggering for MLB 12 and MLB 13 The Show? Is it the same as port forwarding? Do you think that by using port triggering it will avoid delay traffic in the video game, or get less of it?

Dude... first of all, thank you for trying to help ppl! It's always good when someone tries to help. So, I really appreciate your initiative. BUT...
Listen to what minimoto18 is saying. I can GUARANTEE that you are totally wrong. Believe me, there's no chance that wireless could be better than wired. You just messed things up. I'm not here to flame you, just trying to open your eyes: this information should be checked before you pass it along. I don't know if you are a professional in this field; you are probably a gamer trying to help ppl, or someone very curious. Anyway...

I won't go deep into details, because this is not the place, but basically when you talk about "ethernet", you are talking about that old 10 Mb/s connection that no one uses anymore. That's the only situation where you are totally right! Even 802.11g blows an ethernet connection away. When you compare 802.11 G or N against a 100 Mb/s ethernet, you are talking about a FastEthernet connection. And here it is: the wrong judgment starts here.

Nowadays, almost every application is connection oriented and uses TCP as transport. TCP uses negotiated windows to control several aspects of a connection's flow. Again, I won't go deep into details, but basically it means that if you have as much interference as you described, chances are VERY high that bandwidth will drop a lot! And even if you don't consider that, there are several aspects of a wireless connection that would "bother" a gamer, for example latency and jitter.

So, if you ask me what I would choose (802.11n or FastEthernet): FastEthernet! I wouldn't think twice! Although the theoretical N speed is 300 Mbps, it will rarely reach those levels of bandwidth. You can check every benchmark website around and look for the top SOHO routers; they won't reach that sustained bandwidth, period. If you need, I could test a 100 Mbps connection against wireless N. Actually I did that just yesterday, by coincidence!

That said, you just cannot compare a wireless N connection with GigabitEthernet. Man, forget that!
lol. Several models of SOHO routers have gigabit LAN ports right now, and even if they don't, like I said before, FastEthernet most times is enough to blow wireless N away (from a gamer's perspective, for sure). For now, wireless is a convenience, not performance. No one likes cables... but if you need performance and you have the option, you can't choose wireless. I hope you understood at least a bit. And if you decide to answer saying I'm wrong, please, do a lot of research and tests before.

And again, your guide is very good! Just some notes...

- You don't need to port forward ports 80 and 443. They are outbound from your LAN's perspective. If you use a firewall filtering your outbound connections, then you need to open those outbound ports, but you don't have to port forward them into the LAN. Try it yourself.
- When I talk about wireless, I'm not considering the AC standard, because in that case we are talking about a very recent technology. So I won't take my chances saying that everything is better than AC; I'm just not sure. Anyway, AC is something that most ppl don't have yet, so it's hard to compare.

Hope you don't get me wrong... this information is very important; I couldn't let it pass. But thanks for the guide!
Quasi-consensus is a term that was coined recently for a well-known problem: there exist network-wide configuration values that, if inconsistent, cause undesirable network behavior (but do not cause a fork). For example, the minimum fee required to relay transactions is quasi-consensus because “Mallory” could more easily double spend by first creating a transaction below the minimum fee of some nodes, and then a double spend above that minimum fee that will propagate to all the nodes that rejected the first transaction. I have written a thorough description of that problem here for readers who are interested in more detail.

Standard transaction rules are also part of quasi-consensus for the same reason. One full node cannot deploy a new output (constraint) script without getting all other full nodes to accept the new script format into their set of “standard” transactions. This problem is partially solved by P2SH transactions, but P2SH has severe limits, notably in script length.

Quasi-consensus rules are an impediment to permissionless innovation. And since it is generally accepted that permissionless innovation drives technology more quickly and more successfully than permissioned innovation, it would be better to have as few quasi-consensus rules as possible.

One quasi-consensus rule that is currently harming some applications is the maximum length and size of unconfirmed transaction chains. Unconfirmed transaction chains happen when users spend received funds without waiting for the receiving transaction to be committed to a block. There are some applications that simply want to spend a prior input without confirmation, and have the recipient respend the respend, over many iterations. Although this seems esoteric, note that the money a wallet spends back to itself when its inputs exceed what it needs to pay (its change) is unconfirmed. So making a bunch of quick transactions from the same wallet may create unconfirmed chains.
Bitcoin Unlimited has recently committed a change that, for practical purposes, removes unconfirmed transaction size and length limits from quasi-consensus. Our nodes now communicate these parameters to connected peers so that peers know each other's mempool rejection policies. Peers that don't support this communication message are assumed to use the current network-wide “quasi-consensus” value. When a transaction that is held in a node's mempool becomes acceptable to a peer, it is now sent to that peer. Previously, a transaction would be forwarded when it first entered the mempool, but never again. This made double spending extremely easy: the original spend would never propagate from the more accepting nodes to the rest of the network.

If this change is enabled across the BU network, the transaction will be propagated throughout the BU nodes. When a block comes in that confirms enough parent transactions to make the transaction valid in other mempools, a double-spender is essentially racing the entire BU network to push his double-spend into the miner nodes that now accept the transaction. It is certainly possible that a well-funded, well-prepared attacker could deploy enough infrastructure to win this race sometimes. Today, similar attacks against 0-conf are also available to well-funded, well-prepared attackers, since doublespends are not propagated by all nodes. But a business is concerned with probabilities, not absolutes: will the profits made from an application that uses 0-conf transactions outweigh the losses to successful cheaters? (This business tradeoff holds for all 0-conf applications, not just large 0-conf chains.)

I believe that this technology may enable businesses to profitably deploy applications that benefit from deep unconfirmed chains on the BCH mainnet today. A business could deploy a few strategically placed BU nodes, within the same data centers that also host mining pool nodes.
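To make the mechanism concrete, here is a minimal Python sketch (all names are invented for illustration; this is not BU source code) of the policy described above: each peer may advertise its unconfirmed-chain limits, peers that don't advertise get the assumed network-wide default, and a transaction is forwarded to a peer only once it fits within that peer's limits:

```python
# Hypothetical sketch of the per-peer forwarding policy (names invented).

DEFAULT_LIMITS = {"ancestor_count": 25, "descendant_count": 25}  # assumed network-wide default

def acceptable_to_peer(tx, peer_limits):
    """True if tx fits within the peer's advertised unconfirmed-chain limits."""
    limits = peer_limits if peer_limits is not None else DEFAULT_LIMITS
    return (tx["ancestor_count"] <= limits["ancestor_count"]
            and tx["descendant_count"] <= limits["descendant_count"])

def peers_to_forward(tx, peers):
    """Names of peers that should receive tx now; re-evaluated as chains shrink."""
    return [name for name, limits in peers.items() if acceptable_to_peer(tx, limits)]

peers = {
    "permissive": {"ancestor_count": 500, "descendant_count": 500},  # advertised limits
    "stock": None,  # no limits message received; assume the old network-wide value
}
deep_tx = {"ancestor_count": 60, "descendant_count": 0}  # a 60-deep unconfirmed chain
print(peers_to_forward(deep_tx, peers))  # ['permissive']
```

When a block confirms parents and `ancestor_count` drops below a stock peer's limit, re-running `peers_to_forward` would now include that peer, which is the "send it when it becomes acceptable" behavior described above.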
To enable this feature, an operator would configure both the unconfirmed limits (these config fields have existed for a long time) and turn on the new intelligent transaction forwarding:

limitancestorsize=<KB of RAM>
limitdescendantsize=<KB of RAM>
limitancestorcount=<number of allowed ancestors>
limitdescendantcount=<number of allowed descendants>
No reason to think he is right. However, he raises some interesting points borne of a reevaluation of his original position. I think several of his predictions are wrong, but he still presents some good insights into this space.

they are not. But until someone is making money off it, it's unlikely to be a long term success, especially in the tech world. Even the poster child of open source success, Firefox, really only blew up after they figured out they could make money off their little search box.

yeah because every working professional's job is to write 20 page reports... how hard is it to understand that just because tablets don't do everything well doesn't mean that they can't be used for work, or...

The problem with Android is that no one is making money off it. 1) HTC is making money, but they are making about the same money they were when they were the leaders in Windows Mobile. Essentially, Android has replaced Windows Mobile for them (after accounting for the growth in size of the overall smartphone market, which, even if you completely remove Android from the picture, is MUCH larger than it was then). In fact, for all the marketshare RIM is losing, they are still...

The biggest problem with the TV idea is where Apple is going to sell these. TVs take up a LOT of space. Apple has extremely limited space in their stores. Are they going to devote it to iPads, iPhones, iPods and Macs, which don't take up much space, or TVs, which do, and don't have tremendous margins either...

Intel is going about this the wrong way. The reason they were able to win in all the other areas is that people knew how to develop for x86. The legacy applications built on x86 gave Intel a huge edge. In mobile, it's the complete opposite. Legacy code (and more relevantly, instruction sets) is anathema, because battery life, and not processing power, is king in this space. If Intel really wants to become relevant, they need to invest a ton of money developing...
Best Buy was clearly trying to prop up Apple's competition. This seems more pervasive than just the managerial level. This was an extremely short-sighted move by them. They can't keep bashing Apple and expect the company to behave nicely with them. The individual sales folks have also had a large anti-Apple tilt, which Apple has ignored. Best Buy should have allowed the natural anti-Apple tilt of their employees to tilt the scales in the competitors' favor, instead...

Well, this tells us what we knew: all other things equal, a large majority of people would much rather get an iPhone than anything else. However, all other things are not equal. Carrier subsidy, and pushing of particular phones, gives Android a much bigger leg up than if they had to compete on their own. I would also venture to say that at this point, this is enough of a factor to continue extending Android's lead in the smartphone market. Fortunately for Apple,...

Yes, but not exactly. The problem is that when the Kindle entered the market, it was the only ereader (there were a few minor players). Also, there was a HUGE library already available. The problem with this is: 1) Competition. Ridiculous competition from all sorts of devices, not just tablets. 2) Developers will need to build only for this. Unlikely to happen. Why in the world does anyone think people will buy a Gamestop tablet for games, when they can get a $499 iPad which does a whole lot more, and has a much larger game library? If it's Android, then why does anyone think game devs will develop for Gamestop's Android tablet as opposed to all the millions of other Android tablets on the market?
Friday, September 24th, 2010

One of the goals behind the Web Ninja Interview series is to talk with the web gurus behind many amazing web sites and products who don't directly blog or speak at conferences as much. Today we talk with Marcin Wichary. I'm a huge fan of Marcin's work. He was behind the animated and playable Google Pac-Man logo; created the initial HTML5 fancy (shmancy) slide deck that is now an open source project; and helped with Google Instant. He also has a geek love of computer history; just as artists study the masters who came before them, computer scientists should know their history. Let's begin.

Brad Neuberg: Tell us a bit about yourself, where you're from, your background, interests, and some projects you have worked on (don't be humble!)

Marcin Wichary: I grew up in Poland. I have a master's in computer science, and a doctorate in human-computer interaction. I created my first HTML tag in… 1995? It was probably a <P>, but that's just a guess. I am proud of GUIdebook, which is (was) an online gallery of graphical user interfaces. Alas, I have not had time to update it since 2006. At Google, which I joined in 2005 as a user experience designer, I helped with various internal tools and a number of search-related initiatives, including search options, real-time search, and most recently Google Instant.

Brad Neuberg: You built the HTML5 Pac-Man Google Doodle. What's the story behind that? Any technical things you ran into that surprised you?

Marcin Wichary: One of the Google illustrators and a good friend of mine, Ryan Germick, had an idea to create the first playable doodle for Pac-Man's birthday. Since I'd been exposed to arcade games a lot in my childhood, he reached out to me; I built a very early prototype the same night. The biggest surprise was how much thought went into Pac-Man's details. You'd think it's a very simple game, but there's so much nuance and polish in every aspect.
I had to recreate all of it from scratch, and since I personally believe that it’s the details that make or break the experience, it was inspiring to see someone thinking about all that already 30 years ago. Technically, I was sad to witness HTML5 audio not being quite ready to be used in games. (There’s actually very little HTML5 in the Pac-Man doodle.) And the infamous background caching bug in IE6 bit me so bad that no known solutions worked; I had to introduce a separate code path that didn’t use CSS backgrounds, just regular images cropped by parent divs. Marcin Wichary: A colleague suggested I give a talk to my team about HTML5. I agreed, thinking “I’ll just find a nice list of what’s there in HTML5 and make a presentation out of it.” Turned out, there was no such list, so I had to poke around and construct one myself. It was actually an interesting process. I called it “archeology of the future” – I was looking at Web technologies that’ll ultimately span years 2000 to 2020, trying to figure out how they all fit together right now, in 2010. In terms of choosing a medium, I’ve been making my slide decks in HTML for about a decade now. Before Keynote, it was the only way for presentations to look exactly the way I wanted (have to give credit to IE6 here for its gorgeous full-screen mode), so I felt fairly comfortable following that – but utilizing newer technologies this time around. I’ve also always enjoyed teaching by example, hence the addition of sliders that allowed direct manipulation of CSS. I am a very hands-on, low-level kind of guy. I created my first website in FrontPage, but since then I’ve been coding everything by hand. These days it means TextMate and Safari’s excellent Web inspector. I also make a point not to reuse much of what I do, but write the same things over and over again. This allows me to learn and adapt to changing technologies. (For example, I wrote two new presentation engines since the said HTML5 presentation.) 
Marcin Wichary: Get an iframe, make it tiny using CSS3 translate/scale, cover with a transparent div to intercept events, and voilà – you have a nice site thumbnail without having to create an image, upload it and worry about them getting out of sync. Of course, it also comes with terrible latency, so there you go. color: transparent and text-shadow with 0px offset means blurry text. How is that not awesome? (Not that I can think of a use for it.) Not sure if those are particularly clever or useful – I have an attention span of a <marquee> tag and if I do anything innovative I’m usually the first to miss it – but it’s exciting to discover new uses for things that, in and of themselves, are so brand new. Brad Neuberg: What are some of the things you are hacking on these days (that you can talk about)? Marcin Wichary: My main project for the past several months was Google Instant, and we just launched it, so right now I’m looking around and learning about projects. I’ll be doing some internal HTML5 advocacy, teaching and workshops – hopefully some of that will surface on HTML5rocks.com. Brad Neuberg: Tell us about a hobby, interest, or something in your background that most people wouldn’t expect or know about. Marcin Wichary: One thing that I feel keeps me in balance is collecting (and slowly going through) books about computing history. I probably have some 800 of them by now. My apartment filled with obsolete technology talking about another obsolete technology is a good counterpoint to living in the future at work. And, as always, the best technology stories are those about people – this keeps me focused on our users in the present as well. Brad Neuberg: Where do you see HTML5, CSS3, SVG, etc. (i.e. the new stuff on the web) going in the next year? How about the next 3 years? Marcin Wichary: They say nothing ages faster than today’s idea of tomorrow, and I’ve never been a good futurist. :) So I’ll give you my wish list instead. 
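Both tricks Marcin describes above are easy to sketch in markup (the URL, sizes, and scale factor here are placeholders, not from the interview):

```html
<!-- Live site thumbnail: a scaled-down iframe with a transparent
     overlay div on top so clicks never reach the framed page. -->
<div style="position: relative; width: 200px; height: 150px; overflow: hidden;">
  <iframe src="https://example.com" style="width: 800px; height: 600px; border: 0;
      -webkit-transform: scale(0.25); transform: scale(0.25); transform-origin: 0 0;"></iframe>
  <div style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;"></div>
</div>

<!-- Blurry text: hide the glyphs, keep only their shadow. -->
<p style="color: transparent; text-shadow: 0 0 5px #333;">blurred text</p>
```

The overlay div works because it sits above the iframe in stacking order and intercepts pointer events, exactly as described.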
Brad Neuberg: For folks that want to do the kinds of cutting edge things you’ve been doing and have done, what advice or resources would you give them? Marcin Wichary: One of the first popular home video game consoles was 1977’s Atari VCS 2600. It was an incredibly simple piece of hardware. It didn’t even have video memory – you literally had to construct pixels just moments before they were handed to the electron gun. It was designed for very specific, trivial games: two players, some bullets and a very sparse background. All the launch games looked like that. But within five years, companies figured out how to make games like Pitfall, which were much, much cooler and more sophisticated. Here’s the kicker: if you were to take those games, go back in time, and show them even to the *creators* of VCS, I bet they would tell you “Naah, it’s impossible to do that. The hardware we just put together won’t ever be able to handle this.” Likewise, if you were to take Google Maps or iPhone Web apps, take your deLorean to 1991 and show them to Tim Berners-Lee, he’d be all like “get the hell out of here.” What I’m trying to say is: Assume nothing is impossible. It’s actually easier this way. So many times people asked me if something is doable in HTML, and my initial instinct was to say “no.” But you look around, ask around, *think* around, and there’s always a way. Something else that took me a while to internalize: you have to accept that with Web development, anything that’s worth anything will be a hack. Not just prototyping; production code as well. That’s hard to swallow when you’re used to proper, clean, sterile programming. I’d go as far as to say if you’re working on something and you never think “what I did here is terrible; hopefully one day there will be a nicer, better way to do it,” you’ve already fallen behind. And eventually that battery of hacks in your sleeve might make you stand above. 
My crude and jaded metaphor of Web development is button mashing when playing video games. Everyone hates button mashers, but working with cutting-edge Web really is flying blind a lot of the time – you’re trying out all sorts of things that sometimes don’t logically make a lot of sense. But they somehow work. If you get used to that mentality and you get familiar with those hacks, you will train your instincts to know which buttons to mash first, and give yourself more buttons as well. Lastly, if you ask a “what if?” question and leave it unanswered, you should be ashamed. :) Brad Neuberg: Thanks Marcin! What kinds of questions do you have for Marcin? Ask them below! Posted by Brad Neuberg at 6:00 am
Teaching week 5: Lectures will cover ideal gas entropy from the multidimensional sphere and thermodynamic fluctuations, and we will then start on chapter 2. The solution of the last set of problems is here: Please look also through the problems we solved during the semester. For preparation for the exam look here: Let us try to solve the exam sets from 2003 and 2008; the problems can be found here. Let us try the exam set from 2009. You should try to use only 3 hours for it. The problems can be found at http://www.uio.no/studier/emner/matnat/fys/FYS4130/v12/exams/x_09.pdf The solutions of the set can be found here.

Wednesday: summary lecture. Thursday: solving the 2003 and 2008 exam sets.

To check if your assignment in March was approved, please see the list.

Exact solutions of the Ising model. Weiss and Landau mean field theory. The problems about magnets we will leave for next week.

17.04.2015 Note: It seems that the time is not good. Please send email to firstname.lastname@example.org with suggestions of a new time (and/or dates). The solution can be found

Due to the snow all means of transport broke down, and I'm stuck at home today. The next lecture will be 8 April and cover mean field theory and random walks. Compulsory assignments may be handed in at 'Ekspedisjonskontoret' by 15:00 on 26/3. Have a good Easter.

will cover models of magnetic systems (Ising model) and mean field theory of magnetism. Please note that the deadline for submission of the assignment on March 26 is 3 PM. The assignment may be handed in at the Reception Office or electronically.

Please finish problem 4 from Problem_set_07. In addition consider The solutions of problems 1-3 of problem_set_07 can be found .. so you can work on the compulsory problem set.

will cover chapters 5.8-6.4. Exercises are devoted to the normal distribution, the 2-dim Fermi gas at low and high temperatures, the electron gas in a magnetic field, and to phase transitions from free Fermi and Bose gases to their boundary states.
The problems can be found here: http://folk.uio.no/larissa/nuclphys/fys4130/problem_set_07.pdf As a help for the midterm exam, please solve problem 1.2 from the Galperin booklet.

This week's lectures include bosons at non-zero chemical potential, Bose-Einstein condensation, and fermions. We will finish chapter 5 in the book (excluding pages 108-113).

The symbol a in problem 1.6 lacks a definition. There should have been a sentence in problem 1.5 reading: 'Plot ln(P(x)) as a function of x^2 and show that there is a linear relation between these two quantities with slope -a.' It is an example of the obligatory exercises.

Please solve http://www.uio.no/studier/emner/matnat/fys/FYS4130/v14/exercise-blog/problem-set-06.pdf Brief solutions of Problem Set 05 can be found Solutions of problem set 04 can be found at the exercise blog 2015.

Problems 4.6, 4.8, 4.9, 4.12, 4.13. Lectures will finish chapter 3 (minus fixed pressure ensembles). We will also discuss Liouville's theorem and non-equilibrium processes/entropy increase. Fermi-Dirac and Bose-Einstein systems. Canonical and Grand Canonical Ensembles. Problems 4.5, 4.19, 4.20, 4.24, 4.25. Solutions are here: http://folk.uio.no/larissa/nuclphys/prob_3_FYS4130.pdf Solutions of Problems 4.3, 4.6, 4.7 from the Yuri Galperin book can be found here: Solutions of Set 01 can be found here: http://folk.uio.no/larissa/nuclphys/prob_1_FYS4130.pdf

Seminar will take place in room FO 262 from 12:15-14:00.
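As a reminder of the week 5 topic, "ideal gas entropy from the multidimensional sphere" is the standard Sackur-Tetrode derivation: the volume of a d-dimensional ball gives the phase-space volume of N free particles with total energy at most E (a 3N-dimensional momentum sphere of radius sqrt(2mE)):

```latex
% Volume of a d-dimensional ball of radius R:
V_d(R) = \frac{\pi^{d/2}}{\Gamma\!\left(\frac{d}{2}+1\right)}\, R^d

% Phase-space volume of N indistinguishable ideal-gas particles in volume V
% with energy at most E (d = 3N momentum dimensions):
\Omega(E) = \frac{V^N}{N!\, h^{3N}} \cdot
            \frac{\pi^{3N/2}}{\Gamma\!\left(\frac{3N}{2}+1\right)}\, (2mE)^{3N/2}

% S = k_B \ln \Omega with Stirling's approximation gives the Sackur-Tetrode result:
S = N k_B \left[ \ln\!\left( \frac{V}{N}
    \left( \frac{4\pi m E}{3 N h^2} \right)^{3/2} \right) + \frac{5}{2} \right]
```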
Your favorite Apple, iPhone, iPad, iOS, Jailbreak, and Cydia site.

Thread: Search function?

10-10-2007, 04:24 AM #1
Search function? I switched from a Treo, and one of the most useful functions I found was the Search feature. Many times, I would search my appointments to find info for a client I had seen many months earlier but never entered into contacts. I was reading the threads on Apache and had an idea. The contacts and calendar databases are SQLite. Using Apache and PHP (or maybe one of the other scripting languages, but I am more familiar with PHP), could a web page be created and hosted on the iPhone that would search and report results from the contacts & calendar DBs? I am sure it would be better to write an app to do this, but I don't possess the skills to do so. Any ideas? Comments?

10-10-2007, 06:53 AM #2
I don't have your answer, but the other thing that my Treo had was the ability to create repeating appointments in the format "every second Monday of the month". Not available on my iPhone.

10-10-2007, 09:18 AM #3
There is no copy and paste, so what you can do with the search results will be limited.

10-11-2007, 07:56 PM #4
Ok everyone, I got it (getting into the calendar through SQL and PHP). Now someone has to either help me make a webapp or wait. I will post a screenshot soon.

10-11-2007, 08:26 PM #5
My idea is to set up a webserver on the iPhone (like lighttpd & PHP), then run the PHP pages from there. I am working on it as well. The idea for me is to be able to look up past appts.

10-12-2007, 10:52 AM #6
Please someone give me the PHP code to convert this integer into this date: Friday, Oct. 12, 2007 10:00 PM. I don't remember the conversion from integer format date and time to a normal date and time. Also, as a note on progress: trying to access the DB straight from PHP kinda didn't let me (some encryption error, did anyone else have this??). I had to dump it, reload it into a DB I made, and run the query there.
I have the query and can read and write. Please help with the conversion of the format above. Thank you. The test value for $start_date is 213933600. I found $start_date = date("l,F,j,Y g:i", $start_date); but it returns Monday,October,11,1976 10:00 (and with the timezone set to Etc/GMT+4, Monday,Oct,11,1976 11:00). Last edited by joejoe123; 10-12-2007 at 12:54 PM. Reason: Addition
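One likely explanation for the 1976 result (an assumption on my part, not confirmed in the thread): the iPhone database stores dates as seconds since 2001-01-01 00:00:00 UTC (Apple's reference date) rather than since the 1970 Unix epoch that PHP's date() expects. Adding the 978307200-second difference before formatting should then give the expected 2007 date. A quick sanity check of the arithmetic, shown in Python since the offset itself is language-independent:

```python
from datetime import datetime, timedelta, timezone

# Seconds between the Unix epoch (1970-01-01) and Apple's reference
# date (2001-01-01), both UTC.
APPLE_EPOCH_OFFSET = 978307200

def apple_to_datetime(seconds_since_2001, utc_offset_hours=0):
    """Interpret an iPhone calendar timestamp and localize it."""
    tz = timezone(timedelta(hours=utc_offset_hours))
    return datetime.fromtimestamp(seconds_since_2001 + APPLE_EPOCH_OFFSET, tz)

# The thread's test value, rendered for US Eastern daylight time (UTC-4):
dt = apple_to_datetime(213933600, utc_offset_hours=-4)
print(dt.strftime("%A, %b. %d, %Y %I:%M %p"))  # Friday, Oct. 12, 2007 10:00 PM
```

In PHP the same idea would be date("l, M. j, Y g:i A", $start_date + 978307200), assuming the stored value really is an Apple-reference-date timestamp.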
Cycrest has some important tips to save you and your organization time and improve productivity.
- Double-click a word to highlight it. (You do not have to drag the mouse across it, in other words.) Double-click and drag your mouse to highlight in one-word segments. Triple-click to highlight a whole paragraph.
- When deleting, if a selection is highlighted, you do not have to delete it first. Just start typing and it will automatically be replaced.
- AutoFormatting: this is when Word automatically creates clickable links, bold type, indented bulleted or numbered lists, and other formatting as you type. Tired of it?
- You can turn these features on and off. In Word 2010 and newer (Windows), open the File menu; click Options, Proofing, AutoCorrect Options, and then AutoFormat Options.
- On a Mac (Word 2011 and newer), open the Tools menu; click AutoCorrect, then AutoFormat As You Type.
- On your keyboard, there is a difference between the Backspace and Delete keys. Press Backspace to delete the typed character to the left of the blinking insertion-point cursor. Pressing Delete, however, removes the character to its right. (On Macs, however, the Backspace key is labeled Delete and deletes characters to the left of the cursor.)
- In Microsoft Word, when you paste in text from another document, you may not want all the boldface, colors, fonts, and other formatting retained. Instead of using the Paste command, open the Edit menu and click Paste Special. Click Unformatted Text. You will get just the text, without all the fancy formatting.
- You can hide all windows, revealing only what is on the computer desktop, with one keystroke: hit the Windows key and "D" simultaneously in Windows, or press F11 on Macs (on recent Mac laptops, Command+F3; Command is the key with the cloverleaf logo). This is perfect to use when you want to look at, or delete, something you have just downloaded to the desktop. Press the same keys again to go back to where you were.
- If you cannot find an obvious command, try clicking with the right-side mouse button. (On Macs with single-button mice, you can Control-click instead.)
- Moving a file into the Trash or the Recycle Bin does not actually delete it. You have to empty the Trash or Recycle Bin. (Once a year, I hear about somebody whose hard drive is full, despite having practically no files. It is because over the years they have put so many gigabytes' worth of stuff in the Recycle Bin and never emptied it.)
- Especially if you are a beginner (or an expert), it is frequently useful to capture the image of what is on the screen -- an error message or diagram, for example.
- In Windows, the PrintScreen key copies the whole screen image, as a graphic, onto your invisible clipboard, so you can paste it into an e-mail message or any other program. If you add the Alt key, you copy only the front window.
- On the Mac, press Command-Shift-3. (Command is the key with the propeller on it, next to the Space Bar.)
- If you press Command-Shift-4 instead, you get a cross-hair cursor; you can draw across just one portion of the screen. Or, if you now tap the Space Bar, you turn the cursor into a little camera icon. You can then click on just one window or toolbar that you want to copy.
- In both cases, you can hold down the Control key to copy the image to the Clipboard instead of leaving a file on the hard drive.
- The Esc key (top left of the keyboard) means "close this" or "cancel this." It can close a menu or dialog box, for example.
- You can duplicate a file icon (instead of moving it) if you press the Alt key as you drag it out of its window.
- You can switch among open programs by pressing Alt+Tab (or Command-Tab on the Mac). On the Mac, the much less known Command-tilde (the ~ key, upper left corner) switches among windows in a single program.
- In Windows, you can click and hold a window by its title bar, then shake it left and right. The other windows will minimize.
gulp cannot find semantic.json during installation of semantic-ui
I'm trying to install semantic-ui using npm and gulp, following this tutorial: http://www.semantic-ui.com/introduction/getting-started.html
I run npm install semantic-ui --save and everything's fine. But then I change into the semantic/ folder and run gulp build, and it says:
cannot find semantic.json. Run "gulp install" to set-up Semantic
The semantic.json file is in the root of my project. I also tried gulp install but it says:
Task 'install' is not in your gulpfile
What should I do?
EDIT: this is my gulpfile.js file:

/*******************************
            Set-up
*******************************/

var
  gulp        = require('gulp-help')(require('gulp')),

  // read user config to know what task to load
  config      = require('./tasks/config/user'),

  // watch changes
  watch       = require('./tasks/watch'),

  // build all files
  build       = require('./tasks/build'),
  buildJS     = require('./tasks/build/javascript'),
  buildCSS    = require('./tasks/build/css'),
  buildAssets = require('./tasks/build/assets'),

  // utility
  clean       = require('./tasks/clean'),
  version     = require('./tasks/version'),

  // docs tasks
  serveDocs   = require('./tasks/docs/serve'),
  buildDocs   = require('./tasks/docs/build'),

  // rtl
  buildRTL    = require('./tasks/rtl/build'),
  watchRTL    = require('./tasks/rtl/watch')
;

/*******************************
             Tasks
*******************************/

gulp.task('default', false, [
  'watch'
]);

gulp.task('watch', 'Watch for site/theme changes', watch);
gulp.task('build', 'Builds all files from source', build);
gulp.task('build-javascript', 'Builds all javascript from source', buildJS);
gulp.task('build-css', 'Builds all css from source', buildCSS);
gulp.task('build-assets', 'Copies all assets from source', buildAssets);

gulp.task('clean', 'Clean dist folder', clean);
gulp.task('version', 'Displays current version of Semantic', version);

/*--------------
      Docs
---------------*/

/* Lets you serve files to a local documentation instance
   https://github.com/Semantic-Org/Semantic-UI-Docs/
*/

gulp.task('serve-docs', 'Serve file changes to SUI Docs', serveDocs);
gulp.task('build-docs', 'Build all files and add to SUI Docs', buildDocs);

/*--------------
      RTL
---------------*/

if(config.rtl) {
  gulp.task('watch-rtl', 'Watch files as RTL', watchRTL);
  gulp.task('build-rtl', 'Build all files as RTL', buildRTL);
}

What are you trying to do? Are you running gulp build from node_modules/semantic? Also add your gulpfile.js so we can assist.
I'm trying to install and use semantic-ui in a project. I'm running gulp build from myproject/semantic
Are you trying to add semantic-ui to your existing project?
Yes, I'm adding it to an existing django project
I've tried to do npm install semantic-ui and I see what you meant. Have you considered using bower?
Can I use bower on windows?
This problem happens when the node_modules folder is located upstream or in a different folder. Let's say you installed semantic-ui globally and the node_modules is located in: /Users/afshin/node_modules/ All you need to do to address this issue is to copy the node_modules into the semantic-ui folder (your semantic-ui project folder). Hope this saves someone from a headache.
You lost me, where do I copy the node_modules from?
@JikkuJose you can find the node_modules in the upstream folders.
I run npm install semantic-ui --save and everything's fine. But then I change into the semantic/ folder and run gulp build ... I've tried to follow your lead and executed npm install semantic-ui. I got this annoying wizard:
Why not bower? Since all you care about is referencing semantic-ui's static files, I suggest using bower. Install bower: npm install -g bower then add semantic-ui: bower install semantic-ui The semantic-ui package includes a dist directory containing a build of js + css ready to use
I tried to use gulp because it has an update command. Can I update my semantic-ui with bower too?
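The workaround above boils down to a directory-layout requirement, which can be made concrete with a throwaway mock (the paths here are made up for the demonstration): gulp's build task appears to read semantic.json from the directory it is started in, and needs the node_modules tree next to it.

```shell
# Hypothetical layout: run gulp from the folder that holds both
# semantic.json and the node_modules it depends on.
mkdir -p /tmp/sui-demo/semantic/node_modules
printf '{}' > /tmp/sui-demo/semantic/semantic.json
cd /tmp/sui-demo/semantic
ls    # shows node_modules and semantic.json side by side
```

If your semantic.json lives in the project root instead, either run gulp from there or move the file next to the gulpfile you are invoking.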
Today’s genetic engineers have many resources at their disposal: an ever-increasing number of massive datasets available online, precise gene-editing tools like CRISPR, and cheap gene-sequencing methods. But the growing number of new technologies doesn’t come with a clear framework to help researchers figure out which genes to target, what experiments to perform, and how to interpret the results. So a team of scientists and engineers at the MIT Media Lab, Harvard’s Wyss Institute for Biologically Inspired Engineering, and Harvard Medical School decided to make one. The team has created an integrated pipeline for performing genetic screening studies that covers every step, from identifying target genes of interest to cloning and screening them quickly and efficiently. The protocol, called Modular Perturbation Screening and Sequencing-based Target Ascertainment, is described in Cell Reports Methods, and the associated open-source algorithms are available on GitHub. Modular Perturbation Screening is a streamlined workflow that allows researchers to identify genes of interest and perform genetic screens without having to puzzle over which tool to use or which experiments to perform to get their desired results. It is fully compatible with many existing databases and systems, and the researchers hope it will benefit many scientists by saving time and improving the quality of results. The project grew out of the frustration of the paper's two co-authors, who were trying to explore the genetic underpinnings of different aspects of biology by combining the strengths of genetic engineering and computational methods. They kept running into problems with the various tools and protocols they were using, problems that are commonplace in science labs. The algorithms used to sift through an organism’s genes to identify those with a significant impact on a given biological process could tell when a gene’s expression pattern changed but didn’t provide any insight into the cause of that change.
When the scientists wanted to test a list of candidate genes in living cells, it wasn’t immediately clear what type of experiment they should run. And many of the tools available for inserting genes into cells and running screens are expensive, inflexible, and time-consuming. So the two began working out what would be required to build an end-to-end platform for genetic screening; the challenge was that it should also work for all of their projects. To help meet the need for computational tools that extract information from, and analyze, the increasingly large datasets generated by next-generation sequencing, the team created two new algorithms. The first algorithm takes standard data about a gene’s expression level and combines it with information about the state of the cell, along with details about which proteins are known to interact with the gene. A high score is given to genes whose activity is correlated with significant, cell-level changes and that are highly connected to other genes. The second algorithm provides more high-level insights by generating networks that represent the dynamic changes in gene expression during cell-type differentiation, and then applying centrality measures. Once the target genes have been identified, the Modular Perturbation Screening protocol moves from the laptop to the lab. Experiments are performed to disrupt those genes in cells and see the effect of the perturbation on the cell. The team systematically evaluated multiple gene-perturbation tools, including complementary DNA (cDNA) and several versions of CRISPR, in human induced pluripotent stem cells, and then created a new tool that allows CRISPR and cDNA to be used within the same cell to unlock synergies between the two methods.
Cheat codes for the web – Browser developer tools for non-developers
Monday, August 8th, 2022 at 9:22 am
Every browser these days comes with built-in developer tools that help people create, test and fix products for the web. You can right-click any website and select `Inspect` to get to them, or press `F12`, `CMD + Shift + I` on Mac, or `Ctrl + Shift + I` on Windows/Linux. These tools are for developers, but they can also help you to fix some annoyances of the web. That’s why I created a collection of tricks on how to make the web less annoying for you by using browser developer tools. I work on these tools as a product manager, and daily I get about 20–30 feedback messages from people who opened them accidentally and are sure they “have been hacked”. I get it – to non-developers these tools look daunting and complex, and all the errors they display can be intimidating and worrying. And that annoys me. Tooling for the web shouldn’t require you to be an expert. On the contrary – the more you use it, the more you should become an expert of the web. This is why I put together this collection with demonstrations for you to see what developer tools can be: your cheat codes for the web. When I look at the web as a whole, and especially at, let’s call them, “fringe content” websites, I am disappointed at how we use this medium. Instead of giving people the content they came for, we smother it in intrusive ads, prevent people from using the context menu, which is full of accessibility-enhancing tools, and make people jump through hoops just to get to some content they found in a search engine. That’s why I wanted to show you how you can use the browser developer tools to work around some of these annoyances. As the browser of choice I use Microsoft Edge, because it comes for free on any Windows machine, is available for all other platforms, and I work on it. You’re free to use whatever you want, and most of the functionality should also be available in other browsers.
Another important reminder is that you are not doing anything illegal here, nor can you get traced by the products you change to your needs. Everything you do in these tools happens on your computer. You do not change the live product, and if you reload it, your changes are gone, too. Here are the tasks and how to do them with browser developer tools:
- Get the mobile version of the current document
- Remove annoying overlays and page elements
- Inspect the uninspectable
- Get back the context menu
- Avoid unwanted redirects
- Take screenshots of web content
- Get a simpler video player
- Check the document in different modes
- Download images
Do you have some other task you’d like to know how to achieve? Do you have a recipe you use yourself? Comment in an issue on the GitHub repository of this post (https://github.com/codepo8/web-cheatcodes) or ping me on Twitter.
How come movies/video games decrease the motivation to do Buddhist practices?
I practised Buddhism a lot during the holidays given due to Covid-19. I listen to dhamma and meditate each and every day. When my college started, I was not ready for the exam. I got stressed. Somehow I found some interesting videos on a video streaming platform, and I watched videos such as movies, gaming videos and funny videos. It helped me to forget the problems. At the beginning, I had a lot of resistance to watching that kind of useless video. But in the end, I lost that resistance. Watching a video became nothing. Even after 2 months have gone by, I am still watching videos. I have passed days without doing anything related to Buddhism. I can't be as mindful as before. What happened to me? How did those videos cause me to lose my interest in Buddhism? How did I lose that resistance? Do you have any personal experiences like this? How can I get back to that previous state?
Meanwhile, many gamers follow Theravada Buddhism ;-)
What happened to me? How did those videos cause me to lose my interest in Buddhism? How did I lose that resistance?
I suspect something you wrote points to the answer – you wrote that college work stressed you out, and these videos “helped me forget the problems.” People respond to stress in their own way, e.g. overeating, indulging in drugs or sensual pleasure, or zoning out on YouTube videos. They’re all easy distractions that are pleasurable in the short term and an escape from your troubles. As you’ve found, watching videos online can become compulsive – there are always endless more videos to watch, endless suggestions and enticements. It may not cause the same obvious physical downsides caused by overeating or drug use, but there is a definite negative effect as it takes over your time and stunts your motivation for other, healthier endeavors.
Internet addiction in general is a very real thing, and perhaps reading a bit about it will give you some insight as to how your mind has been affected by your time online. How can I get back to that previous state? Not to get technical, but there’s no such thing as going back to a previous state, nor should you want to. You’re currently learning a challenging but important lesson – distractions will always exist, and now you know that a weak spot for you is internet videos. Guess what? You’re going to get through this, and then you’re going to encounter the next challenge and overcome that one, and so on. And at each step you’re going to become wiser about pitfalls, how to avoid them, and how to climb out of them. I suggest you start by reading (or re-reading) about the five hindrances and their antidotes. Of the five hindrances to progress – sensory desire, ill will, sloth-torpor, restlessness-worry and doubt – you seem to be plagued by a mix of the last three. They each have their specific antidotes, in this case such aspects as rousing energy, having a schedule/routine for meditation, and developing contentment and trust in the process. Underlying all of this are the constants of Buddhism: being mindful of our thoughts and impulses, investigating them to understand them, and detaching from them as you realize they’re not “you” and they’re just as transient as anything. I believe having a schedule/routine for meditation is critical. As the body and mind get used to it, we just automatically do it and are less likely to skip it in favor of distractions. Keep to the schedule and give yourself time to get in a groove. Don’t beat yourself up if it’s tough at first. I noticed that the election week stress threw me off, and I’m just now getting more focused day by day. Best wishes to you. At many times in our lives, the food given us is given in faith. This is certainly true of monks. It is also often true of householders who have yet to earn a living. 
AN7.72:11.2: Which is better—to have a strong man force your mouth open with a hot iron spike and shove in a red-hot copper ball, burning, blazing, and glowing, that burns your lips, mouth, tongue, throat, and stomach before coming out below dragging your entrails? Or to enjoy alms-food given in faith by well-to-do aristocrats or brahmins or householders?” The Buddha's message here is sobering: AN7.72:14.1: “I declare this to you, mendicants, I announce this to you! It would be better for that unethical man to have a strong man grab him by the head or shoulders and make him sit or lie down on a red-hot iron bed or seat. Why is that? Because that might result in death or deadly pain. But when his body breaks up, after death, it would not cause him to be reborn in a place of loss, a bad place, the underworld, hell. But when such an unethical man enjoys the use of beds and seats given in faith by well-to-do aristocrats or brahmins or householders, that brings him lasting harm and suffering. When his body breaks up, after death, he’s reborn in a place of loss, a bad place, the underworld, hell. Being mindful of the faith others have in us, we bear what has to be borne, turn off the TV, take exams. And if the exams are stressful, meditation can help us let go of anxiety so that we might express our gratitude for that which was given by working or practicing for our own benefit and that of others.
Could it be because those entertainments are one of the five hindrances?
Simple answer. But I think this is the best answer. Thanks :)
This answer was flagged as low quality. Is it a comment or an answer? If it's an answer, maybe you can explain which hindrances they refer to and why. And any other details. Thanks.
I remember watching a YouTube talk by Ven. Dhammavuddho who stated that the five hindrances obsess and enslave a person, not just during meditation but also at other times.
Young householder, it can happen in regard to asking and answering questions for the sake of entertainment too! The mind, not free from desire for sensuality, or from the even greater desire for becoming and gaining a stand, needs, if not inclined toward jhana, an entertainment, a livelihood. It is good to seek out a "hobby", kammatthana, something given, causing no harm to others or oneself, and dedicated not toward the low and equal but upwardly, toward the Gems. Giving not at the proper time, toward those without metta, sila, and with wrong view, one's sacrifices bear few to no fruits, and even bind one down: the way of sacrifices by the "compassionate" fools... for it's difficult to lower one's stand and serve the more sublime, yet one feeds one's defilements when gaining the notion "look, I help!" No sila, no notions of generosity, no desire to renounce... nothing but wrong concentration for the sake of compensation, for the sake of investment toward a stand, toward home and house. Having gained some reputation, an approval, fought another battle, decay... on toward another game, question, quest of conquering, with improper attention, another windmill monster in run-of-the-mill fashion. One would be much wiser to leave the gamers' and players' domains and seek to make sacrifices far away from the drug-addicted, dedicated toward something worthy of gifts, for there is no increase of happiness to be expected from investing in what's dedicated to the world. Encouragement to avoid private, commercial social media: no sila, chasing after sensual pleasures, slandering around, no concentration, no exam, loss in this world and the next... (3) "What are the six channels for dissipating wealth which he does not pursue? (a) indulgence in intoxicants which cause infatuation and heedlessness; (b) sauntering in streets at unseemly hours; (c) frequenting theatrical shows; (d) indulgence in gambling which causes heedlessness; (e) association with evil companions; (f) the habit of idleness. "Free" social media, fb, google, exchange... incl.
their inhabitants, are (e). All duties and advice on good friends for the young householder are here: The Layperson's Code of Discipline. Care for your duties in your relations and don't follow the fools who believe themselves to be monks while holding on to a house. The food you gain isn't given in faith but as a trade which asks for compensation. Otherwise you fool yourself doubly, and end up as a lost beggar who still clings to sensuality and coupling in his old age... standing behind the walls, feeding on leftovers of degenerated social-monks' shares... running around and citing texts like a small girl dreaming of being a princess in wonderland. [Note that this is not given for stacks, exchange and other world-binding entertainments or to maintain a master-pet relation, but to escape from this wheel]
How was Viswanathan Anand?
I am fairly new to chess. I don't know anything. By the time I am taking an interest in chess, it is already proclaimed that this is the time of Carlsen (except that yui person). I wish to know about Viswanathan Anand. For one, he was the predecessor, and for two, I am an Indian, so I have some sort of subconscious obligation to know about him. I want to know from you guys, the chess community. (I can get facts on Wikipedia, but I can't understand the environment and energy of his era, and neither will successive generations know about Carlsen.) How was the rise and fall of V. Anand? Why isn't he good now? Doesn't he have more experience, which should make him a better player?
Vishy has been playing top-class chess for over two decades now, and although he's getting a bit old and could be said to be past his prime, he's still rated 9th in the world; talking about "the fall" of Vishy therefore seems strange IMO.
Anand is still in the top 10. What are you talking about??
@Bad_Bishop but shouldn't experience make a chess player better? It's not like tennis, where you need physical fitness.
@SmallChess he has been playing really badly
@user154547 Bad? He has played at a 2750+ level, and that's bad??
@SmallChess of course I am not comparing him to your level or mine. I am talking about his peers. His recent games are getting worse. Why? Doesn't experience make you better?
Like you said, you know nothing. Playing games of chess at his level for several hours at a time is very taxing for the body, even if most of it isn't as visible as in other sports. And "really bad".. how would you of all people know?
@Annatar sorry if I come across as rude. I was just curious. No bad feelings. *sparks *sprinkles *smiles
No bad feelings from me either, don't worry. You are just making a fool out of yourself here, that's all.
Like a guy stumbling into a bar when the TV shows a game of [insert sport here], who sees that the one team he even knows the name of has a worse score, and starts asking everyone for their reason why the team became "so bad". I mean, they were famous enough that he heard of them, shouldn't they win against a nameless something? Right? Oh, wait, the nameless team happens to be even more famous now to everyone who is even remotely interested in the sport? And you are just watching the finals? Damn..
@Annatar it's actually not my view. Being Indian, I have held him in high regard since I was a child (not interested in chess at that time, but the news was a thing)
@Annatar I asked because I have heard a lot of criticism of him recently.
Here are some revelations. Firstly, chess is a sport. It's not called that as a formality; it actually is a sport. It takes both physical and mental endurance to play the sport at a high level. This seems silly to anyone who has never seriously played a 6-hour game, but it does not to anyone who has been through that experience. Especially if you start dozing off at move 50 and make a blunder that pours half a day of work down the drain. Then you realize going to tournaments and doing this frequently at a much higher level must put a big strain on both body and mind. Your mental faculties and your body both deteriorate, starting when you are around 22 years old. Experience may seem very useful in chess, but opening theory is constantly developing, while the rest of the game relies on pattern recognition and calculation. These skills do not improve beyond a certain point, which a player like Anand reached decades ago. Thus, experience plays a limited role in chess and does not properly compensate for the deterioration of your mind and body. At a certain point you need to put in many hours of practice every week just to keep your skills from deteriorating, all the while your body is deteriorating regardless.
In a game like chess, this is not as noticeable as in the more physical sports of course, but it is nonetheless significant, and thus it is exceptional that Anand has remained in the top 10 for such a tremendously long amount of time. This is the exception, not the rule. So instead of wondering why Anand is falling off, I would wonder why he has been able to maintain such a superb position for such an incredible amount of time. It's the exact opposite of the proposed question, which indeed comes off as a bit ignorant, as people have pointed out, considering that Anand is not just "still good" but rather "still among the world elite, which has been developing and growing constantly while he was getting older". It's amazing, really. The rest of the question is a bit unclear. I have no idea how I could possibly know something about an elite player that cannot be found on the internet, and I don't know what kind of specialty you expect to surround players who sit behind a board and make good moves very consistently. Aside from the few rather strange fellows like Fischer and, to a lesser extent, Kasparov, most chess players are just people like you and me.
Was Anand back in the day as revered and famous as today's chess players? Like Carlsen, etc.
I am too young to have been around during that time, but I would say this is almost certainly the case. An example: in the one book series I used to learn the game, there are some hand drawings of players. Aside from the national stars, Anand is featured along with other world-famous players like Kasparov and Karpov. He is an active ex-world champion, after all. It's hard to rank people like this, but he's certainly incredibly famous and regarded very highly by most of us in the chess community due to his decades of world elite performance.
// Distributed under the MIT License.
// See LICENSE.txt for details.

#pragma once

#include <array>
#include <cstddef>
#include <limits>
#include <optional>
#include <utility>
#include <variant>
#include <vector>

#include "DataStructures/DataVector.hpp"
#include "Options/String.hpp"
#include "Utilities/TMPL.hpp"

/// \cond
namespace PUP {
class er;
}  // namespace PUP
/// \endcond

/*!
 * \ingroup ControlSystemGroup
 * \brief Manages control system timescales
 *
 * The TimescaleTuner adjusts the damping timescale, \f$\tau\f$, of the control
 * system.\n The damping timescale is restricted to
 * `min_timescale`\f$\le\tau\le\f$`max_timescale`
 *
 * The damping time is adjusted according to the following criteria:
 *
 * **Decrease** the timescale by a factor of `decrease_factor` if either \n
 * - the error is too large: \f$|Q| >\f$ `decrease_timescale_threshold`
 *   OR the error is changing quickly: \f$|\dot{Q}|\tau >\f$
 *   `decrease_timescale_threshold`,\n
 *   AND \n
 * - the error is growing: \f$\dot{Q}Q > 0\f$
 *   OR the expected change in \f$Q\f$ is less than half its current value:
 *   \f$|\dot{Q}|\tau < |Q|/2\f$
 *
 * **Increase** the timescale by a factor of `increase_factor` if \n
 * - the error is sufficiently small: \f$|Q|<\f$ `increase_timescale_threshold`
 *   \n AND \n
 * - the expected change in \f$Q\f$ is less than the threshold:
 *   \f$|\dot{Q}|\tau < \f$ `increase_timescale_threshold`
 */
class TimescaleTuner {
 public:
  static constexpr Options::String help{
      "TimescaleTuner: stores and dynamically updates the timescales for each "
      "component of a particular control system."};

  struct InitialTimescales {
    using type = std::variant<double, std::vector<double>>;
    static constexpr Options::String help = {
        "Initial timescales for each function of time. Can either be a single "
        "value which will be used for all components of a function of time, or "
        "a vector of values.
The vector must have the same number of "
        "components as the function of time."};
  };

  struct MinTimescale {
    using type = double;
    static constexpr Options::String help = {"Minimum timescale"};
  };

  struct MaxTimescale {
    using type = double;
    static constexpr Options::String help = {"Maximum timescale"};
  };

  struct DecreaseThreshold {
    using type = double;
    static constexpr Options::String help = {
        "Threshold for decrease of timescale"};
  };

  struct IncreaseThreshold {
    using type = double;
    static constexpr Options::String help = {
        "Threshold for increase of timescale"};
  };

  struct IncreaseFactor {
    using type = double;
    static constexpr Options::String help = {"Factor to increase timescale"};
  };

  struct DecreaseFactor {
    using type = double;
    static constexpr Options::String help = {"Factor to decrease timescale"};
  };

  using options = tmpl::list<InitialTimescales, MaxTimescale, MinTimescale,
                             DecreaseThreshold, IncreaseThreshold,
                             IncreaseFactor, DecreaseFactor>;

  TimescaleTuner(const typename InitialTimescales::type& initial_timescale,
                 double max_timescale, double min_timescale,
                 double decrease_timescale_threshold,
                 double increase_timescale_threshold, double increase_factor,
                 double decrease_factor);

  TimescaleTuner() = default;
  TimescaleTuner(TimescaleTuner&&) = default;
  TimescaleTuner& operator=(TimescaleTuner&&) = default;
  TimescaleTuner(const TimescaleTuner&) = default;
  TimescaleTuner& operator=(const TimescaleTuner&) = default;
  ~TimescaleTuner() = default;

  /// Returns the current timescale for each component of a FunctionOfTime
  const DataVector& current_timescale() const;
  /// Manually sets all timescales to a specified value, unless the value is
  /// outside of the specified minimum and maximum timescale bounds, in which
  /// case it is set to the nearest bounded value.
void set_timescale_if_in_allowable_range(double suggested_timescale); /// The update function responsible for modifying the timescale based on /// the control system errors void update_timescale(const std::array<DataVector, 2>& q_and_dtq); /// Return whether the timescales have been set bool timescales_have_been_set() const { return timescales_have_been_set_; } /// \brief Destructively resize the DataVector of timescales. All previous /// timescale information will be lost. /// \param num_timescales Number of components to resize to. Can be larger or /// smaller than the previous size. Must be greater than 0. /// \param fill_value Optional of what value to use to fill the new /// timescales. `std::nullopt` signifies to use the minimum of the initial /// timescales. Default is `std::nullopt`. void resize_timescales( size_t num_timescales, const std::optional<double>& fill_value = std::nullopt); void pup(PUP::er& p); friend bool operator==(const TimescaleTuner& lhs, const TimescaleTuner& rhs); private: void check_if_timescales_have_been_set() const; DataVector timescale_; bool timescales_have_been_set_{false}; double initial_timescale_{std::numeric_limits<double>::signaling_NaN()}; double max_timescale_; double min_timescale_; double decrease_timescale_threshold_; double increase_timescale_threshold_; double increase_factor_; double decrease_factor_; }; bool operator!=(const TimescaleTuner& lhs, const TimescaleTuner& rhs);
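The decrease/increase criteria in the doc comment above can be sketched for a single component as follows. This is an illustrative, simplified re-implementation under assumed parameter names, not the class's actual member function (which operates on a whole DataVector of components):

```cpp
#include <algorithm>
#include <cmath>

// Sketch of the tuning rule for one component, following the documented
// criteria. All parameter names mirror the option names above; the function
// itself is hypothetical.
double updated_timescale(double tau, double q, double dtq,
                         double decrease_threshold, double increase_threshold,
                         double decrease_factor, double increase_factor,
                         double min_timescale, double max_timescale) {
  // Decrease: error too large OR changing quickly...
  const bool error_large = std::abs(q) > decrease_threshold ||
                           std::abs(dtq) * tau > decrease_threshold;
  // ...AND growing OR expected change less than half the current value.
  const bool error_growing =
      dtq * q > 0.0 || std::abs(dtq) * tau < std::abs(q) / 2.0;
  // Increase: error small AND expected change below the threshold.
  const bool error_small = std::abs(q) < increase_threshold &&
                           std::abs(dtq) * tau < increase_threshold;
  if (error_large && error_growing) {
    tau *= decrease_factor;
  } else if (error_small) {
    tau *= increase_factor;
  }
  // The timescale is always restricted to [min_timescale, max_timescale].
  return std::clamp(tau, min_timescale, max_timescale);
}
```

Note how the clamp at the end realizes the `min_timescale` \f$\le\tau\le\f$ `max_timescale` restriction regardless of which branch fired.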
How to run a Qt application in headless mode (without showing my GUI)

I have a Qt application based on a QApplication, and suppose that my application has a complex GUI (QDialog, QMainWindow...). My application can run in two modes:

- with GUI
- in headless mode

I would like to know how I can launch the application in headless mode (that is to say, without the GUI visible). Given the very basic code below, what argument should I add to allow this?

    int main(int argc, char* argv[]) {
      QApplication app(argc, argv);
      // which option should I add to argv to run in headless mode?
      return app.exec();
    }

Use QCoreApplication instead?

Thank you for your response. I could indeed use it, but some parts of the code need QApplication, as the application must run in both modes, GUI and headless.

Similar to (unanswered) create-a-truly-headless-qapplication-instance

I've never used Qt myself, but perhaps creating a custom QStyle (or QCommonStyle) that renders nothing and setting it with QApplication::setStyle(new CustomStyle); before QApplication app(argc, argv); could be an option?

I have found a workaround; I posted it in the older question. Maybe this one will get deleted as a duplicate.

Which parts of QApplication do you need in headless mode that are not provided by QCoreApplication? If headless operation is an essential requirement for your application, then you might also redesign it to separate the parts needed for headless mode from the parts for the GUI.

There are several options here. Either you need a Qt console application or you need a headless GUI application. You will find truly running a GUI in headless mode rather tricky. This applies in case you need to run the very same app on a Linux system that does not have the GUI libraries installed, like a minimal setup. Without extensive xorg and/or EGL libraries you'll find it impossible. But fear not: you can do it with minimal impact by using either the Qt VNC platform plugin or with the help of Xvfb.
So in short:

Solution 1: Hide it with Qt's VNC plugin

    $ QT_QPA_PLATFORM="vnc" ./my-app

It's the same as:

    $ ./my-app -platform vnc

You'll find that your software has a GUI but is running in headless mode; in order to view the GUI you just connect to it with any vncviewer.

Solution 2: Avoid dependencies with Qt's VNC plugin

The same as the other solution, and you can just hide your GUI by not showing it.

Solution 3: Nullify rendering with the offscreen platform

This is rather similar to VNC, but you'll get totally null output, with no way for GUI interaction:

    $ ./my-app -platform offscreen

Solution 4: Run Xvfb and launch it there

You can run a fake Xorg server and run things over there:

    export DISPLAY=:1
    Xvfb :1 -screen 0 1024x768x16 &
    ./myapp &

From the given solutions I'd prefer the offscreen render, but your Qt build might not have the plugin, or it might ask for xcb or egl libraries. It's your choice.

Rather, run it with xvfb-run ./myapp
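Since QApplication consumes the `-platform` option itself, one way to keep a single binary for both modes is to rewrite argv before constructing the application. A minimal sketch of that argument handling, assuming a hypothetical `--headless` flag of our own (the Qt-specific part is only the final hand-off, shown in the comment):

```cpp
#include <cstring>
#include <string>
#include <vector>

// Build an argument list with "-platform offscreen" appended when our own
// (hypothetical) --headless flag is present. The flag itself is stripped,
// since Qt would not recognize it. The result can then be converted to a
// char* array and passed to QApplication, which parses -platform itself:
//   QApplication app(new_argc, new_argv);
std::vector<std::string> make_args(int argc, char* argv[]) {
  std::vector<std::string> args;
  bool headless = false;
  for (int i = 0; i < argc; ++i) {
    if (std::strcmp(argv[i], "--headless") == 0) {
      headless = true;  // remember the mode, drop the flag
    } else {
      args.emplace_back(argv[i]);
    }
  }
  if (headless) {
    args.emplace_back("-platform");
    args.emplace_back("offscreen");
  }
  return args;
}
```

The same idea works with `"vnc"` in place of `"offscreen"`; note that the storage backing the char* array must outlive the QApplication, which keeps a reference to argc/argv.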
I'm searching for a way to change the loaddev variable of FreeBSD's /boot/loader. There is nothing useful in FreeBSD's man pages or in Google. By default it should be the same as currdev, but I have currdev=disk0s2:, but loaddev=disk0p2:. How to change it to load loader.conf and other...

Is there a way to make FreeBSD's boot loader read (or re-read) the config files (like /boot/loader.conf) after you have escaped from the menu to the boot loader prompt? When I read the sources in /boot/lua it seems like there should be a "read-conf" command, but it doesn't exist when I try from...

It's easy to configure the console keyboard layout, e.g. for German with keymap="de.kbd" in /etc/rc.conf. However, when I escape to the loader prompt during boot, I'm faced with a US layout. Typing pathnames and such becomes painful. Is there some magic to change the keyboard layout at the...

I am experimenting with boot environments and observing some very weird behavior related to installing/removing packages via pkg -c /tmp/path/to/be <command>. Any time I attempt to install/remove packages in a non-current boot environment using this incantation and then immediately bectl...

There's a FreeBSD recommendation to use /boot/loader.conf for nvidia (with the latest driver for NVIDIA). Is this use of loader.conf appropriate (with FreeBSD 13.0-RELEASE), or somewhat archaic?

I just did a freebsd-update fetch, freebsd-update install and reboot from 13.0-RELEASE to 13.0-RELEASE-p2 and got greeted with this: Just yesterday everything was fine, zpool status had all four hard disks ONLINE... what can I do? :/

I have installed FreeBSD with a lot of help from the community, but ran into another serious issue after installing and adding nvidia modules into the kernel + modifying my /etc/rc.conf and /boot/loader.conf to launch gnome3 by default exactly as it was said. And now I am having...

What I'm considering doing is using a generic two-port M.2 drive (SATA and NVMe) as my root and swap device.
Two of my systems on the network I'm constructing do not support UEFI (HP Microserver G7 N54L). The M.2 SATA SSD is going to be connected to the internal systemboard SATA port.

I have a box with FreeBSD 11.1-STABLE r326098 amd64. It has a ZFS root on GELI-encrypted providers: scan: none requested NAME STATE READ WRITE CKSUM bootpool ONLINE 0 0 0...

Can someone clarify how to use a removable flash drive with an encryption key with the new full disk encryption process? The new approach is to encrypt /boot altogether with the root filesystem. So, as I understand, initial decryption is performed by the EFI loader. Is there a way to pass a keyfile to the EFI loader...

"getting pretty good at the openboot..." --famous last words. So... I've tried to boot the ISO files burned to cdrom on my sparc64 in several ways. No dice. (ISOs burned from Linux with Brasero & wodim.) As far as I can tell, the T4-1 Niagara seems to be supported (or at least not...

I've posted earlier about my issues with booting my systems. When the system boots up, it shows a list of BIOS drives, and then a spinner starts for about two or three seconds, after which it gets stuck. I waited 30 or so minutes without luck; it didn't move. When booting from ISO (via IPMI — it's...

I found this interesting line in /boot/defaults/loader.conf: acpi_video_load="NO" # Load the ACPI video extension driver What does it do?

Does anybody have a list of how these logos look? loader_logo="orbbw" # Desired logo: orbbw, orb, fbsdbw, beastiebw, beastie, none...

By default FreeBSD puts kernel messages (white color) on the first console. I know I can disable them with boot_mute in the /boot/loader.conf file, but is there a way to move them to a second console instead of disabling them? ... or to redirect them to some file instead of disabling them...

Is there an alternate boot loader? BTX gets going enough to print its version (1.02) and maintain a blinking cursor, but beyond that it doesn't actually do anything.
I was searching for a solution and it seems that this was not an issue with an earlier version of BTX. It seems that I would need...

Dear FreeBSD Support Team: Last year I upgraded the hardware in this Samsung NP300V4A by adding a 500 GB SSD and 12 GB RAM. So I decided to install FreeBSD 10.3; in that attempt I had problems installing it from the USB due to display errors at the moment of entering option 1 on the splash screen of...

I'm having some trouble booting FreeBSD. I have experienced similar things before and have subsequently given up on installing BSD. With the stable release of FreeBSD 11, I thought I'd give it another try, but to no avail. I will describe my actions step by step. I grabbed the amd64 memstick...

I'm a Dvorak keyboard user and I currently use a blank keyboard. I have set the Dvorak keyboard map in rc.conf, but I haven't found a way to set it in loader.conf. Is the loader hardcoded to QWERTY US? I ask because I'm having trouble typing the GELI passphrase and experimenting with boot...

I attempted to change /boot/defaults/loader.conf and it doesn't work. I searched on Google and found solutions. The kernel boot process doesn't work. If this has ever happened to you, please help. Thank you very much to all,
IoT in Practice: The Intelligent Chair (part 1: architectural overview)

Some time ago we were looking for an easy-to-understand IoT example which we could use to demonstrate the basic capabilities and functionalities of IoT. This was the genesis of our so-called "intelligent chair". The intelligence of the chair should be as follows: the chair is to detect whether somebody sits on it or not. That's all. Well, this "intelligence" is rather modest, but nevertheless it covers all aspects of an IoT device. Any IoT device has to deal with the following five components:

- Sensors
- Connectivity
- Costs
- Power consumption
- Logic

We designed our chair to use a simple pressure-sensitive resistor (Sensors) to detect whether somebody (or something like a heavy bag) sits on it. The communication should be established wirelessly (Connectivity). Since a lot of intelligent chairs may be needed, the technology for transmitting had to be low cost. Furthermore, we aimed at a most independent design for our chair. That's why we chose to implement a mesh network using MySensors (mysensors.org). This is not only ideal from an independence viewpoint but also from a financial perspective (Costs). The necessary components are rather cheap and can operate on low power (Power Consumption).

In our mesh network each node (intelligent chair) tries to reach a central gateway to send information about its state: occupied or empty (Logic). However, if the gateway is out of reach, a node can make use of a so-called repeater node to forward its state to the gateway. Repeater nodes are essentially regular nodes. The only difference is that repeater nodes must not go into sleep mode to save energy.

To integrate the intelligent chair with other components we built a simple backend application (IoT-Controller) which is used as a bridge from the wireless protocol of MySensors to TCP/IP.
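The node logic described above boils down to mapping the force-sensing resistor's ADC reading to an occupied/empty state. A minimal sketch of that decision, with assumed threshold values (a real node would tune them against actual FSR readings on the DevDuino's 10-bit ADC, and would send a MySensors message only when the state changes):

```cpp
// Illustrative occupancy logic for one chair node. The thresholds are
// assumptions, not measured values. Hysteresis keeps a wobbling reading
// near the threshold from flooding the gateway with state changes.
class OccupancyDetector {
 public:
  // Returns the current state after feeding in one ADC reading (0..1023).
  bool update(int adc_reading) {
    if (occupied_ && adc_reading < kLowThreshold) {
      occupied_ = false;  // weight removed: switch back to "empty"
    } else if (!occupied_ && adc_reading > kHighThreshold) {
      occupied_ = true;  // weight applied: switch to "occupied"
    }
    return occupied_;
  }

 private:
  static constexpr int kLowThreshold = 300;   // assumed release threshold
  static constexpr int kHighThreshold = 500;  // assumed press threshold
  bool occupied_ = false;
};
```

On the actual node, a state change returned by `update` would trigger one radio message to the gateway (or to a repeater node if the gateway is out of reach), after which the node can go back to sleep.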
From a user interface point of view we wanted to demonstrate two aspects:

- Real-time information
- Analytical information

For the real-time information we simply built a SAP UI5 client application which reflects the current state of the chairs. However, at the time we built our intelligent chair there was no server websocket technology available in SAP HANA (luckily this changed with SPS11 of HANA XS). That's why we added some websocket functionality to our IoT-Controller (using node.js). With this logic, the IoT-Controller now also acts as a simple websocket server.

For the analytical part we decided to use the IoT Service of HCP. Our IoT-Gateway updates the HCP IoT tables with every change of state. This facilitates very simple reporting with the HCP toolset. In our demo case we built a simple Fiori-based application which showed some long-term reports of the data collected.

The overall architecture of all involved components looks like this:

From a technical viewpoint we used the following components to build the system:

- Force sensing resistor (FSR): Interlink Electronics 400 series
- Logic board: DevDuino v2 (ATmega 328, Arduino compatible)
- Communication: nRF24L01 module (part of DevDuino)
- Battery powered (part of DevDuino)
- Gateway: Arduino Uno R3 with nRF24L01 module and Ethernet Shield
- IoT Controller: Raspberry Pi with WiFi extension

This concludes Part 1 of this blog series. If further elaboration is of interest, we are happy to continue this series with the specific implementation details. Please let us know if this is of any interest.
It has been news today that AOL is firing people from Netscape, mainly from research and development. There is no doubt that this is a result of the AOL vs. Microsoft settlement 2 months ago, where MS settled the antitrust suit brought by AOL Time Warner, paying the latter $750 million. But is this really a bad thing? This is how business works; if you have money you can buy your competitor out. Microsoft doesn't want future development on fat web clients because that would kill their office application. Ok, this is a bad thing for those people at Netscape who got laid off, but I believe MS did not choose to lay off the bad ones, and those who are now writing their CVs can write there that "MS paid AOL $750M to get me fired" ;-)

Hehe… go to Google, put "weapons of mass destruction" into the search query and then press "I'm Feeling Lucky".

A new version of Windows is far, far away. I guess we need to stick with XP at least ~3 more years. Hmm… This sounds good for Linux on desktops. Full article at USA Today.

I just read a nice article about The Missing Future of software developers. Very interesting reading.

Joel on Software talks about how programmers hate user interface programming…

I would just love it if someone could write, once and for all, an operating system which would work simultaneously with the web. When I wanted to participate in a meeting, I could just click the meeting on a webpage and my computer, knowing that it is indeed a form of appointment, would pick up all the right information and understand it enough to send it to all the right applications. Not to all Microsoft applications, but to all devices. I'm so tired of typing calendar events into my Nokia, Outlook at the office and Mozilla Calendar at home. I have had this exact same problem for years and there haven't been any improvements. Actually, I think it has got worse than it was a few years ago.
A few years ago Microsoft was promising that in the future all devices would work with Windows, but today they say that all Microsoft products will work together. :(

I just found a nice app called SharpMT. Looks like a nice little app for posting blogs to Movable Type. Let's see if this appears on my page :)

I just updated Movable Type to version 2.64 and am just testing that everything works.

The Dalai Lama was in my town today: Frederiksberg, Denmark. That's kinda amazing. I did not know that he was here until I saw a bunch of people walking on my street wearing robes instead of the spring collection from H&M. :) Too bad, I would have loved to hear what he had to say. Especially criticism of the invaders…

"Over the last few days, several people have written in with the news that there will be no further standalone versions of Microsoft Internet Explorer. Future enhancements to the browser will only be delivered in the form of operating system upgrades. The news was confirmed by Internet Explorer Program Manager Brian Countryman in a May 7th online chat discussing the changes made to Internet Explorer for Windows Server 2003. He said: "As part of the OS, IE will continue to evolve, but there will be no future standalone installations. IE6 SP1 is the final standalone installation… Legacy OSes have reached their zenith with the addition of IE 6 SP1. Further improvements to IE will require enhancements to the underlying OS." The next consumer version of Windows, codenamed Longhorn, is due in 2005."

So does this mean that MS is finally starting to close Windows into one big package? It seems that MS has moved its focus from 3rd-party developers to managers and end users by providing an out-of-the-box package. There won't be an Internet-based WYSIWYG environment in IE; if you want to publish an article on the net, buy the new Windows and Office with FrontPage. If I were a 3rd-party developer I would be worried… Hey, wait a minute, I am a 3rd-party developer… damn…

It is nice to start the morning with nice ads.
- Can I try the applications before I make any purchase?
- I try to download an application, but my browser shows weird text.
- How can I get the DEMOS? There are no direct links.
- I have purchased a license, how do I use it?
- I enter the license key but nothing happens… now what?
- Are the applications standalone only?
- Where are the videos demonstrating the applications?
- How do I make a purchase?
- I have a coupon, where do I redeem it?
- I have a license but it says "no activations left", why?

Can I try the applications before I make any purchase?
On every product's page, there are links to download the applications for Mac and Windows. These links download a limited version of the application. All features work, though some are a bit limited. Only saving is completely disabled.

I try to download an application, but my browser shows weird text.
This happens when your browser tries to show the file instead of downloading it. Right-click on the link and select "Save link as…".

How can I get the DEMOS? There are no direct links.
There are buttons for the demos on every product page. Clicking one will open a panel with the versions of the demo to select from. Enter your email and the links for the selected demos will be sent to you.

I have purchased a license, how do I use it?
The license keys are used to unlock the limited versions to fully functional ones.
– Open the downloaded application (there are links on every product's page for Mac and Windows).
– Enter the license key from your receipt in the "serial" field.
– Press "Unlock" and wait. Voila! Enjoy the fully unlocked application!

I enter the license key but nothing happens… now what?
There are two reasons you might get stuck at the "Loading…" screen:
- When entering a license key, make sure you have internet access. The key is validated online to create your unique license.
- If you have older versions of the apps on Windows, the unlocking uses Java. Check if you have Java | Download Java. If you run 64-bit Windows, download the 64-bit version of Java.

Are the applications standalone only?
For now, all applications are standalone only.

Where are the videos demonstrating the applications?
The videos are being made while you read this! If you by any chance have made a video demonstrating one of our applications, please contact us and let us know!

How do I make a purchase?
Place products in your cart by clicking the "Purchase…" button on a product's page. Every purchase is made with PayPal. If you do not have a PayPal account, you can use your credit or debit card through the PayPal checkout.

I have a coupon, where do I redeem it?
Coupons can be redeemed at the checkout. Enter them in the "Have a discount code?" field.

I have a license but it says "no activations left", why?
Every license key can be used a limited number of times.

Here is the F.A.Q. page. If you have a problem or question, this is the first place to look! Can't find your question here? Feel free to contact us!