User Defined Packages

Definition of a user-defined package: user-defined packages are those developed by a Java programmer and supplied as part of a project to handle common, project-specific requirements. If we want to make a class or interface available to many Java programmers, then such classes and interfaces must be placed in a package; in other words, the classes and interfaces of a package are meant for common access. Whatever guidelines Sun Microsystems followed for the development of the predefined packages, the same guidelines are followed by us for the development of user-defined packages.

Creating a user-defined package is nothing but creating the package names as directories/folders on the current working machine.

For example, in java.awt.event, java is the root directory, awt is a subdirectory of the java directory, and event is a subdirectory of the awt directory.
Similarly, in the statement p1.p2.p3.*;, p1 is called the directory, p2 the subdirectory, and p3 the sub-subdirectory on our current working machine.

Syntax for creating a package:

package pack1[.pack2[.pack3 ... [.packn]]];

In the above syntax:
1) package is the keyword used for developing a user-defined package.
2) pack1, pack2 ... packn are valid Java identifiers and are treated as the names of the user-defined packages.
3) pack2, pack3 ... packn represent the names of the subpackages, and their specification is optional.
4) pack1 represents the upper (outer) package, and its specification is mandatory.

Guidelines or steps for development of a user-defined package:

Sun Microsystems has prescribed the following guidelines for the development of user-defined packages.
1) Choose an appropriate package for placing the common classes and interfaces, and ensure that the package statement is the first statement of the source file.
2) Choose an appropriate class name / interface name, and ensure that its modifier is public.
3) The modifier of the constructor of the class which is present in the package must be public.
4) The modifier of the methods of the class / interface must be public (this rule is optional in the case of interfaces, because every method of an interface is implicitly public).
5) Whichever class name / interface name is present in the package, that class name / interface name must be given as the file name with the extension .java.
6) A single source file in the package should contain either a class definition or an interface definition, but not both.

// Demo.java — create a package pack1 and place a class called Demo in it
package pack1;

public class Demo {
    public Demo() {
        System.out.println("user defined package");
    }
    public void display() {
        System.out.println("user defined display method");
    }
}

// Test.java — an interface placed in the same package
package pack1;

public interface Test {
    public void show();
}

Compiling package classes and interfaces:

For compiling package classes and interfaces, we use: javac -d .
Demo.java (and likewise javac -d . Test.java). Then, under the pack1 package directory, we can find Demo.class and Test.class.
using System;
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Dane;
using ServerLogic;

namespace LogicUnitTests
{
    [TestClass]
    public class MenuManagerUnitTest
    {
        [TestMethod]
        public void AddDishToMenuTest()
        {
            MenuManager menuManager = new MenuManager();
            List<Dish> menu = menuManager.GetMenu();
            Assert.AreEqual(0, menu.Count, "Wrong dishes number");
            List<Ingredient> ingredients = new List<Ingredient>();
            menuManager.AddDishToMenu("testDish", "Description", ingredients, CategoryDTG.alcohol, 15.67);
            menuManager.AddDishToMenu("testDish2", "Description", ingredients, CategoryDTG.dinner, 20.75);
            menu = menuManager.GetMenu();
            Assert.AreEqual(2, menu.Count, "Wrong dishes number");
            Assert.AreEqual("testDish", menu[0].Name, "Wrong dish name");
            Assert.AreEqual(Category.dinner, menu[1].Category, "Wrong dish category");
        }

        [TestMethod]
        public void GetDishByNameTest()
        {
            MenuManager menuManager = new MenuManager();
            List<Dish> menu = menuManager.GetMenu();
            List<Ingredient> ingredients = new List<Ingredient>();
            menuManager.AddDishToMenu("testDish", "Description", ingredients, CategoryDTG.alcohol, 15.67);
            menuManager.AddDishToMenu("testDish2", "Description", ingredients, CategoryDTG.dinner, 20.75);
            menu = menuManager.GetMenu();
            Assert.AreEqual(1, menuManager.GetDishByName("testDish2").Id, "Wrong dish id");
            Assert.AreEqual(0, menuManager.GetDishByName("testDish").Id, "Wrong dish id");
        }
    }
}
Chemeketa Community College GEO 143 Online Activity 4 Web Site

Part I. (10 pts)
- Watch the video linked below. This is a documentary about the Columbia River basalt. There is a YouTube link, as well as an mp4 download link.
- Watch the video and consider the different aspects of the Columbia River Basalt Group eruptions. Think about the igneous processes presented in Chapter 5 of your textbook.
- After you watch the videos, write a 2-page paper and email it to the instructor. Use the same format as is listed for the other papers (in the syllabus). Write about three important facts that you learned from the video. The deadline to submit this entire activity is 5/26/2015 at 6 PM.
- The table below lists the areal extent (square km), the volume (cubic km), the volume percent (the percent of the total CRB volume that the flow comprises), the estimated number of flows, the average volume per flow (cubic km), and the isotopic age (Ma; based upon radioactive half-lives) for the Columbia River Basalt Group units. I provide two grids for you to plot these data on. One is a linear-linear plot (both the horizontal and vertical scales are linear) and one is a log-linear plot (the horizontal scale is linear and the vertical scale is logarithmic). Print out the grid sheets. We will be plotting only the CRBGs that have age data in the table. This table is also found in the Activity 4 pdf linked below.
- (2.5 pts) Use the linear-linear grid. Plot the Volume (cubic km) on the vertical axis and the Age (Ma) on the horizontal axis. Use points as your plotting style. Label the axes, and label each point with the CRBG it stands for.
- (2.5 pts) Use the log-linear grid. Plot the Volume (cubic km) on the vertical axis and the Age (Ma) on the horizontal axis. Use points as your plotting style. Label the axes, and label each point with the CRBG it represents.
- (5 pts) In the space given below the grids, briefly discuss the advantages and/or disadvantages of both the linear-linear and log-linear plots. Turn in these plots at our next class meeting.

Either watch the online video or download the file on your own. The file is large, but it is in better resolution. You may also want to download the digital presentation (linked above) and look at it while you watch the video (if the video is difficult to read, the digital presentation has higher resolution and may be easier to read).
- Here is the video that discusses the Columbia River Basalt Group. ~260 MB mp4 file

Here are embedded videos of the above mp4 files. Columbia River Basalt Group Video:
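To see why the log-linear grid is useful here, note that flood basalt unit volumes can span several orders of magnitude, and equal ratios become equal spacings on a log axis. A quick numeric check (the volume values below are made-up placeholders, not the table's data):

```python
import math

# Hypothetical volumes spanning four orders of magnitude (cubic km)
volumes = [100_000, 10_000, 1_000, 100]

# On a linear axis, the three smallest values crowd into a tenth of the scale
linear_span = [v / max(volumes) for v in volumes]

# On a log axis, equal ratios plot with equal spacing
log_vals = [math.log10(v) for v in volumes]
gaps = [a - b for a, b in zip(log_vals, log_vals[1:])]

print(linear_span)  # most points squeezed toward zero
print(gaps)         # each gap is one decade
```

This is the trade-off to discuss: the linear plot preserves absolute differences but hides the small units, while the log plot resolves all units at the cost of visually compressing the large ones.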
How to link curves

First of all, let me make it clear that I don't know much about programming. So after I got that out of the way, thanks for reading my question. What I currently want to cram into my little C# program is the following:

Draw a line from pA to pX
Draw a curve from pX to pY
Draw a curve from pY to pZ
Draw a line from pZ to pD

My problem with this is the following: how on earth do I "switch" from a line to a curve, to another curve, and then back to a line in C#? I'd be really happy if anyone could help me with this. Greetings from Belgium, -g2609

Which framework? WinForms, WPF, the web? Show the code you've written so far, and describe the problems with it.

In general, I only need the x and y coordinates between points. It's for controlling a robot.

A line is just an edge case of a curve. It does not matter if you connect line-line, curve-line or curve-curve. Anyway, we need to know a bit more about input and expected output to help here. Here is a recent answer showing how a graphics library can be used to get a list of points from a path (which in turn may consist of a sequence of lines/curves, which are just drawn one after the other).

Ok, didn't know that. So basically I will tell the robot the start position, the end position and "weight points", and how many steps it should take from start to end. As output, I need xy coordinates for each step.

Don't forget the Z coordinate, it's not Discworld :)

Thanks for thinking about that; however, for what I'm doing I only need xy, not z.

Ok. I figured that was what "pZ" might mean in "... curve from pY to pZ".

No, pY and pZ are both points, both on the same plane but with different x and y coordinates. Any idea how to go from line to curve?

The real question is in the 1st comment. You may need to study GraphicsPath with both the PathPoints and the PathTypes arrays.

Seems you want to provide a smooth connection of line segments and curves.
Note that Bezier curves at their endpoints have direction (tangents) toward their control points. So just put the control points on the continuation of the straight segments. The distance from the endpoints to the control points is responsible for the curvature; try values like distXY / 3 to start. For a curve-curve connection you have to define some rule, for example a tangent direction (and magnitude again). If you need a smooth curve chain, consider interpolation splines: this approach calculates the cubic curve parameters for all the curves and provides continuity.

Pseudocode for line A-X, cubic Bezier X-Y, line Y-Z:

VecAX = X - A
uAX = (VecAX.X / VecAX.Length, VecAX.Y / VecAX.Length)
VecYZ = Z - Y
uYZ = (VecYZ.X / VecYZ.Length, VecYZ.Y / VecYZ.Length)
curveXY.P0 = X
curveXY.P1 = X + uAX * VecAX.Length / 3
curveXY.P2 = Y - uYZ * VecYZ.Length / 3
curveXY.P3 = Y

Sounds reasonable. Thanks for the help mate
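The tangent-matching idea above can be sketched end to end. This is a minimal Python version (point names follow the thread; the distXY / 3 rule of thumb and the `connect` helper are illustrative choices, not a fixed recipe):

```python
import math

def unit(v):
    """Return v normalized to unit length."""
    n = math.hypot(v[0], v[1])
    return (v[0] / n, v[1] / n)

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1]."""
    u = 1.0 - t
    return tuple(
        u**3 * p0[i] + 3*u**2*t * p1[i] + 3*u*t**2 * p2[i] + t**3 * p3[i]
        for i in range(2)
    )

def connect(A, X, Y, Z, steps=10):
    """Chain: line A-X, cubic Bezier X-Y, line Y-Z.
    The Bezier control points lie on the continuations of the two lines,
    so the curve leaves X along A-X and arrives at Y along Y-Z.
    Returns the sampled xy points of the Bezier segment."""
    uAX = unit((X[0] - A[0], X[1] - A[1]))      # incoming line direction
    uYZ = unit((Z[0] - Y[0], Z[1] - Y[1]))      # outgoing line direction
    d = math.hypot(Y[0] - X[0], Y[1] - X[1]) / 3.0   # distXY / 3 rule
    P1 = (X[0] + uAX[0] * d, X[1] + uAX[1] * d)      # continue the A-X line
    P2 = (Y[0] - uYZ[0] * d, Y[1] - uYZ[1] * d)      # back along the Y-Z line
    return [cubic_bezier(X, P1, P2, Y, i / steps) for i in range(steps + 1)]

pts = connect(A=(0, 0), X=(10, 0), Y=(20, 10), Z=(30, 10))
# the sampled segment starts exactly at X and ends exactly at Y
```

Feeding the robot is then just iterating over `pts` for the curved part and linearly interpolating the straight parts with the same step count.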
Add FMD Clues to transactions Is your feature request related to a problem? Please describe. Currently, we include FMD keys in the key hierarchy and in addresses, but don't include Clues in transactions, so it's not possible to use FMD to detect them. At this stage, supporting FMD as an accelerated scanning mechanism doesn't help us, because we don't have enough transaction volume that the scanning is actually a bottleneck, and this will continue to be the case for some time. However, we want to be forward-compatible enough to be able to implement detection later, when it's useful to do so. Including detection keys in addresses is the first step. The next step is to include Clues in transactions. The original idea was to attach an FMD Clue to each OutputBody, since each OutputBody contains a new NotePayload to be scanned, and to thereby filter the scanning to be done. But actually, this doesn't work, and would wreck privacy. The problem is that each transaction contains a group of outputs, but the FMD detection false positives are all independent, occurring with probability $p$. This gives a distinguisher between true and false positives that isn't part of the original FMD game. Consider a transaction with two change outputs: if a detector uses the correct detection key, they'll detect both with probability 1, but if a detector uses the wrong detection key, they'll detect both with probability $p^2$. So now, instead of having cover traffic occurring at false positive rate $p$, the false positive rate drops to $p^2 \ll p$, just by an accident of transaction construction. To fix this, we should attach the clues to the transaction, rather than the output, so that there is one clue per recipient clue key per transaction. This adds a new problem -- we don't want to leak the number of distinct recipients -- but we can fix that by requiring that there are always as many clues as there are outputs, and padding the list of clues with clues addressed to dummy clue keys.
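A quick way to see the distinguisher is to simulate the per-output scheme directly (a sketch; the rate `p` and trial count are arbitrary):

```python
import random

random.seed(1)
p = 0.05         # per-clue false positive rate
trials = 200_000

# Per-output clues: for a two-output transaction, a wrong detection key
# must false-positive on BOTH independent clues for the transaction to
# look detected, while the right key always matches both.
both = sum(
    1 for _ in range(trials)
    if random.random() < p and random.random() < p
)

rate = both / trials
print(p * p)   # expected rate for two independent clues
print(rate)    # observed rate: close to p**2, far below p
```

With clues attached per-transaction instead, a wrong key matches with probability exactly `p`, which is the cover-traffic rate the FMD game assumes.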
Describe the solution you'd like

- [x] Add an fmd_precision_bits: u8 field to the ChainParams structure, defaulting to 1 - https://github.com/penumbra-zone/penumbra/pull/1217
- [ ] Add a fmd_clues: Vec<Clue> to TransactionBody
- [ ] Add a consensus rule to the shielded pool component enforcing that tx.fmd_clues.len() == tx.outputs.len()
- [ ] Figure out how to have dummy clue keys in the TransactionPlan
- [ ] Add clue creation to the transaction build method.

There are still some open questions about how exactly we want to include clues in the transaction plan. We probably want the signer to be able to see who will get clues, and consider them authorizing data, but they may not be able to derive the clues deterministically, or they might have difficulty doing so.

After a bit of reflection, I think the original suggestion I made to put the FMD bits in the ChainParams was a mistake: https://github.com/penumbra-zone/penumbra/pull/1217#issuecomment-1197377796

Currently, we don't have any chain parameters that are set by the chain; the chain parameters are all high-level config options. Since the idea is for the FMD precision to be auto-adjusted according to transaction volume, it's likely that we'll have some parameterization, e.g., length and decay for an EWMA of message volume, target anonymity set, etc., and those parameters might need to be adjusted. I think it would make more sense to put those parameters (whatever we decide them to be) into the ChainParams, and then have the bit sizes actually used for FMD be derived from them. Otherwise, we'll end up having the chain parameters be a mix of configuration options and configuration-derived data. Instead, we should put the FMD parameters in a separate payload (also in the CompactBlock).
We like to use external tools for our policy migration, with storage in a version control system (Git). The way to go is using the GMU; a basic sample is in DocOps: Version Control Example - CA API Gateway - 9.2 - CA Technologies Documentation. What we are facing now is how to deal with dependency objects. The clearest example is XML schemas from the Global Resources. The GMU only facilitates exporting a folder, service or policy. The schemas must be migrated within a bundle including a dependency on a specific XML schema. What I see here is:
1. Exporting a service built from a WSDL importing several schemas creates a bundle including all resources referred to within this WSDL (this is ok, we have all references).
2. Exporting a service with a Validate XML assertion referring to an XML schema from Global Resources does include this resource. However, it does not include the imported schemas within this referred XML schema resource (this is not ok).
3. Theoretically the XML schemas could be reused, and the current version on a system should be deployed in a controlled manner depending on the Git branch.
The above will result in a problem migrating these kinds of dependency objects. Sometimes we are missing deeper nested referenced schemas (runtime issues!). This can only be found by analyzing and validating the schema for missing dependencies. Also, we would like to migrate the schemas separately, independent of the services. Migrating a service only needs to fail if the referred objects are missing on the target gateway. It seems we need a mix of GMU and Restman calls here? Does any of you have tips and best practices on how to deal with this specific issue and with other dependencies in general?

Have you found a solution for that? I recently came across the same issue. It's still the same for API Gateway 9.3. I would like to see GMU be able to recognize includes/imports in XML/XSD files as a dependency and include them in the export bundle.

No, not really.
We had to set up all schemas and dependent schemas to be unique, so all referrals from the WSDL were independent. The includes are still not migrated as part of a bundle for a specific service (or folder, as we do this). For this we migrate the schemas with Restman and group them based on import location (which is our GitHub). We need to take care of this in advance before migrating services and have this as a dependency in our build tooling. In our service bundle we map the schema dependency to Existing, to be sure the schema is imported at least once before deploying the service.

I've walked through the scenario that you outlined and found that, as you mention, if a schema file has an import statement, the export will only include the first level without the additional XSD files. The resources are available through Restman (https://gw.support.local:8443/restman/1.0/resources) and can be pulled and pushed between gateways. Each item will need to be extracted at the l7:ResourceDocument element to be inserted individually into the destination Gateway cluster. I've also created an Idea to allow for more granular export controls through the GMU tool: Enable Gateway Migration Utility (GMU) export function to be more granular.
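The missing nested references can be found mechanically by walking each schema's xs:import / xs:include statements before migrating. A sketch of such a check (file layout and names are hypothetical; it only resolves schemaLocation values that are local files):

```python
import xml.etree.ElementTree as ET
from pathlib import Path

XS = "{http://www.w3.org/2001/XMLSchema}"

def schema_deps(path, seen=None):
    """Recursively collect the schemaLocation of every xs:import /
    xs:include reachable from the given XSD file."""
    seen = set() if seen is None else seen
    root = ET.parse(path).getroot()
    for tag in ("import", "include"):
        for el in root.findall(f"{XS}{tag}"):
            loc = el.get("schemaLocation")
            if loc and loc not in seen:
                seen.add(loc)
                nested = Path(path).parent / loc
                if nested.exists():      # recurse into local files only
                    schema_deps(nested, seen)
    return seen
```

Running something like this over the Global Resources export and diffing the result against the bundle contents would show exactly which nested schemas the GMU bundle is missing, before they surface as runtime issues.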
Event Sourcing is a pattern for data storage where, instead of storing the current state of an entity, all past changes to that state are stored. With Axon Framework, you can implement Event Sourcing with minimal boilerplate code, giving you the benefits of Event Sourcing without the hassle. Tiered Storage Cookbook Apr 04, 2023 By understanding and using common tiered storage strategies, one can improve the speed and cost-effectiveness of an event store. High Availability Deployments with Axon Server Mar 21, 2023 Learn some of the trade-offs you should consider, as well as some starting points you can use for your Axon Server Enterprise cluster. Tiered storage capabilities in Axon Server Mar 17, 2023 In this blog post, we'll look at how to use Axon Server's tiered storage to find a good balance between speed and cost using real-world examples. Upgrading to Axon Framework 4.7 - Automated Mar 01, 2023 An update of the previous Axon Framework 4.7 blog, explaining how you can benefit from OpenRewrite recipes to automate your migration to 4.7. Axon Framework 4.7 ready to use Feb 02, 2023 Axon Framework 4.7 release announcement. MongoDB Transactions with Axon Framework Feb 01, 2023 A blog about using MongoDB Transactions with Axon Framework. Upgrading to Axon Framework 4.7 Jan 26, 2023 Unusually, upgrading to Axon Framework 4.7.x may impact your application. For that, we want to extend our apologies, as this differs from our desired approach towards minor releases. This blog describes several upgrade scenarios, like what you need to do when you want to move to Jakarta or stay on Javax. Each scenario provides a list of actions to guide you in upgrading to 4.7. Optimizing Event Processor Performance Jan 11, 2023 Ever wondered how to run your event processors with ultimate performance? This blog will dive into how you can tune your event processors in the best ways possible. Jan 10, 2023 Repositories are essential components in Axon Framework. However, they mostly remain behind the scenes, and developers don't need to interact with them directly. Usually, the framework can configure, instantiate and use the correct repository based on how the developer has constructed the domain components. That convenience sometimes leads to misunderstandings, caused by the many meanings of the term "repository" and the assumptions developers make when they hear or read it. How dumb do you want your pipes? Dec 09, 2022 A comparison of Apache Kafka and Axon Server with respect to how they work as message pipes. It goes through the format used and how sending messages works, as well as how messages are routed and how the solutions scale. September 28th, Amsterdam Join us for the AxonIQ Conference 2023, where the developer community attends to get inspired by curated talks and networking. September 27th, Amsterdam The event to collaborate, discuss, and share knowledge about techniques, tools, and practices for building complex, distributed event-driven applications.
Microsoft Switzerland Academic Team launched the You Make IT Smart robotics campaign in October 2008. As a part of the campaign, we promised to give away up to 300 LEGO MINDSTORMS NXT sets to full-time students and faculty members of Swiss educational institutions to enable hands-on experience in embedded development with Microsoft Robotics Developer Studio and robotics hardware. Over the past months, hundreds of students have been participating in various campaign activities, such as the Robotics Simulation Online Exercise, Imagine Cup and academic events. As the latest step in the You Make IT Smart campaign, we organized the Robotics Expo, parallel to the Swiss Imagine Cup '09 Finals, in the Holiday Inn Bern on Friday, 8th May 2009. The idea of the Expo was to give students a chance to show what kind of innovative ideas they have been able to realize with their selected hardware and Microsoft Robotics Developer Studio. The jury of the Robotics Expo and the Swiss Imagine Cup '09 finals consisted of:
• Prof. Dr. Torsten Braun, University of Bern
• Martin Frieden, gibb - Gewerblich-Industrielle Berufsschule Bern
• Vance Carter, EducaTec AG
• Milan Kubicek, Lead Microsoft Student Partner Bern
• Dr. Marc-Alain Steinemann, Lead Academic Relations, Microsoft Switzerland
Here's a short overview of the projects presented at the event:
1st Place: Alexandru Rusu (EPFL), Bogdan Stroe (EPFL), Dante Stroe (EPFL). The winning team had developed a remote-controlled robot using the inclination sensor of a mobile phone, with wireless video transmission to the PC. As a next step, the team had also planned video transmission to the mobile phone. The hardware used included LEGO MINDSTORMS NXT, a mobile phone (Nokia) and a wireless camera. The project was implemented with VPL, C# and Python technologies. The judges liked the nice mixture of technologies and the good realization of the project; the robot performed perfectly in the demo.
2nd Place: Neeraj Adsul (ETHZ), Pradyumna Ayyalasomayajula (EPFL), Neetha Ramanan (Unine). The team had built a robot that was able to scan books, with a robot arm turning the pages. The project used LEGO MINDSTORMS NXT, a webcam and a self-built robot arm. The implementation was done with VPL.
3rd Place: Alexander Suhl (Hochschule Luzern), Marco Wyrsch (Hochschule Luzern), Meier Stephan (Hochschule Luzern), Philippe Schnyder (Hochschule Luzern). The team had developed a remote-controlled robot using the inclination sensor of a mobile phone. The project was built with LEGO MINDSTORMS NXT, a mobile phone (Windows Mobile) and VPL.
4th Place: Bruno Barbieri (University of Geneva), François Amato (University of Geneva). The team built a plotter using the LEGO MINDSTORMS NXT brick, Legos and a marker pen. The implementation was done with VPL.
Congratulations to all the winners and competitors! If you are interested in starting to work on robotics on your own or with your class, we still have some LEGO MINDSTORMS NXT sets in distribution via the Robotics Simulation Online Exercise for students and for professors and teachers.
Academic Audience Manager, Microsoft Switzerland
How to speed up data transfer between nodes in my Elasticsearch cluster

Usually my ES cluster has three nodes, one primary and two replicas. With every new deployment we move ES data to the new ES cluster by changing the Elasticsearch configuration and adding the three new nodes:

discovery.zen.ping.unicast.hosts: ["HOSTNAME1", "HOSTNAME2", "HOSTNAME3", "NEW_HOSTNAME4", "NEW_HOSTNAME5", "NEW_HOSTNAME6"]

Data is replicated and split between nodes, and then we exclude the old nodes with the API call:

"cluster.routing.allocation.exclude._ip" : "IP_HOSTNAME1,IP_HOSTNAME2,IP_HOSTNAME3"

Finally, ALL data is moved onto the new nodes, the old ES cluster is destroyed, and the deployment is done. The issue is that data is growing fast and the process is taking ever longer to replicate and move data between ES nodes. We wait about 1 hour and 30 minutes for ~200GB of data to be moved. Is there some fine tuning for ES to speed up data transfer between nodes? We have this process because the AMI/OS for the ES nodes needs to be updated with the new security implementations every month.

I managed to reduce the time by half, setting indices.recovery.max_bytes_per_sec to 200mb, depending on the instance resources. For more advanced settings on cluster transfer, please check the indices recovery settings documentation.

If every shard in the cluster is replicated, I'd suggest just adding new nodes based on the new AMI to the cluster, and then removing the old nodes. Note that you should wait for all shards to be assigned before removing any node (i.e. the cluster state is "green"). Remove nodes one by one, and wait for the cluster state to change to green in between. This procedure should be easy to automate. If not all shards have replicas, you can either: set replicas=1 for these indices, and after finishing "refreshing" the nodes, set replicas=0 again.
Or tell the cluster to stop routing shards to the nodes being removed:

PUT /_cluster/settings
{
  "transient" : {
    "cluster.routing.allocation.exclude._ip" : "<ip of the node>"
  }
}

For more information, see: https://www.elastic.co/guide/en/elasticsearch/reference/7.14/modules-cluster.html#cluster-shard-allocation-filtering

I think this last method is the best, whether or not you have replica shards.

Yes, from the replication point of view this is the procedure, but I thought there was an API call or some workaround to increase the replication data transfer rate when adding new nodes and removing the old ones.

Sorry, can't help you with that.
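For reference, the two settings calls discussed in this thread can be combined into transient cluster settings. A sketch of the request bodies (the IPs and the 200mb value are the examples from the question; both are PUT bodies for /_cluster/settings):

```python
import json

# Raise the per-node shard recovery/relocation throughput cap
# (indices.recovery.max_bytes_per_sec defaults to 40mb)
speed_up = {
    "transient": {"indices.recovery.max_bytes_per_sec": "200mb"}
}

# Stop allocating shards to the nodes being drained
drain_old = {
    "transient": {
        "cluster.routing.allocation.exclude._ip":
            "IP_HOSTNAME1,IP_HOSTNAME2,IP_HOSTNAME3"
    }
}

print(json.dumps(speed_up))
print(json.dumps(drain_old))
```

Remember to clear the transient settings (set them to null) once the migration finishes, so the higher throughput cap doesn't affect normal recovery behaviour.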
Today I was debugging an issue wherein a ghost row was getting inserted into the grid at level 2 even though the user had not entered any value in the level 2 grid. There are many instances where you find the ghost row issue in PeopleSoft. This one was particular to sequence numbering, which is a fairly common case across the product. So I will explain a bit about the ghost row issue with this particular case. The page has a structure up to level 3, and my level 2 additional key is a sequence number which is auto-populated by the system. Now when the user goes into add mode, enters data only up to level 1 (assuming he is adding only one row, which is presented by default) and saves the page, then everything seems fine. The data up to level 1 is inserted into the database, as the user has filled in only up to level 1. Now if he tries to add another row at level 1 and enters data only in level 1, or if he just clicks the + button on level 2 of the first level 1 row, then a ghost row at level 2 gets inserted into the database even though the user has not entered any data into the level 2 rows. The reason for the ghost row is the sequencing logic written in the RowInsert event. Since the additional key for level 2 is a sequence number which is populated automatically by the system, whenever the sequencing logic executes and assigns a value from the RowInsert event, the system automatically marks that particular row as changed and will mark it for database insert at save time. In most places where auto sequencing is used, the sequence number field is read-only and the user cannot delete its value, thus forcing the system to insert the row into the database, making the case more complicated. PeopleBooks clearly states that if you change the value of any field in the RowInit or RowInsert events, the system will mark the row as changed and it will be considered during save processing.
But in most cases, you are forced to write the logic in the RowInsert event because that is the right place to display the next sequence number whenever the user clicks the plus button. Now back to the issue: why was there no ghost row inserted for my first default row at level 1? The reason is pretty straightforward: the RowInsert event fires only when the user adds a row; for default rows that event is ignored. This must be a well-known situation, because PeopleTools provides a solution for the problem as well. There is a delivered property of the Rowset class called ChangeOnInit. All you need to do is set that property to False during component load processing or during the RowInit processing of the higher level (in my case, level 1). Once this property is set to False, the component processor will no longer mark the row as changed whenever a field value is programmatically updated in the RowInit or RowInsert events, so that row will not be considered for database insert unless the user updates a value in some field. This is a pretty handy property which prevents programmatic defaults from being inserted into the database when they are not actually supposed to be inserted. Thus, this one line of code takes away the headache. You can see the sample code below, which addresses the given ghost row situation.

rem This code should be written in the RowInit of the primary record one level up;
rem In this case the code is written at level 1 to resolve the ghost row issue at level 2;
GetRow().GetRowset(Scroll.LEVEL2_REC).ChangeOnInit = False;
This tutorial is heavily based on Rupert's article here. However, the steps in this article differ on some points, and there are some steps that I had to figure out using other sources.

Generating the OpenStreetMap tiles database

We're going to use the Geo::OSM::Tiles Perl module to download OSM tiles for a specific region. Download, compile, and install Geo-OSM-Tiles 0.04. In Terminal:

$ wget http://search.cpan.org/CPAN/authors/id/R/RO/ROTKRAUT/Geo-OSM-Tiles-0.04.tar.gz
$ tar -xf Geo-OSM-Tiles-0.04.tar.gz
$ cd Geo-OSM-Tiles-0.04
$ perl Makefile.PL # make sure there are no errors/warnings
$ make
$ make test
$ make install # you might have to use sudo

If you get warnings like "Warning: prerequisite YAML 0 not found.", install the missing Perl libraries first before continuing.

Determine the region you want to download. You can use OSM: go to http://openstreetmap.org and select Export from the top menu. You should be able to see 4 fields specifying the selected region coordinates. Click on Manually select a different area to define your own region.

Run downloadosmtiles.pl with the coordinates and your desired zoom levels to download the tiles:

$ downloadosmtiles.pl --lat=0.871:1.79 --long=103.342:104.3 --zoom=0:15 --destdir=/your/tiles/folder

When specifying values for --lat and --long, enter the lowest value first. Call downloadosmtiles.pl --help to see more options. The destination folder should now contain image files organized into zoom/x/y subdirectories.

Download and run map2sqlite to convert the tile set to a sqlite database. You can download the source here and compile it in Xcode, or you can download the compiled binary here. We'll use the compiled binary in this example.

$ wget http://shiki.me/blog/assets/posts/2012/03/map2sqlite-bin.zip
$ tar -xf map2sqlite-bin.zip
$ ./map2sqlite -db /your/mymap.sqlite -mapdir /your/tiles/folder

Using the offline map in Route-Me

For a quick example, we'll use a sample project included in route-me.
- Download and extract the latest route-me library from GitHub: https://github.com/route-me/route-me
- Open the sample project at samples/SimpleMap/SimpleMap.xcodeproj. Select the SimpleMap scheme and test to make sure it runs before we do anything to it.
- Add the map database you just created to the project's resources. Make sure that it is also added to the Copy Bundle Resources build phase.
- In MapViewViewController.m, add an import for RMDBMapSource.h, then add this to the bottom of the view setup code:

// Use the bundled database as our map source
RMDBMapSource *mapSrc = [[[RMDBMapSource alloc] initWithPath:@"mymap.sqlite"] autorelease];
[[[RMMapContents alloc] initWithView:mapView tilesource:mapSrc] autorelease];

// Constrain our map so the user can only browse through our exported map tiles
[mapView setConstraintsSW:CLLocationCoordinate2DMake(mapSrc.bottomRightOfCoverage.latitude, mapSrc.topLeftOfCoverage.longitude)
                       NE:CLLocationCoordinate2DMake(mapSrc.topLeftOfCoverage.latitude, mapSrc.bottomRightOfCoverage.longitude)];

// Move to the center of our exported map
[mapView moveToLatLong:mapSrc.centerOfCoverage];

The first 2 lines instruct the map view (RMMapView) to use the offline map as the map source. The -setConstraintsSW:NE: call is not necessary, but I think it's a good idea to only show the map region that we have exported; without the constraints, the user would see empty gray spaces for regions that we don't have map tiles for. And we're done!
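The tile folders that downloadosmtiles.pl produces follow the standard slippy-map zoom/x/y scheme, so it can be handy to know which tile a given coordinate falls in. A sketch of the standard conversion formula:

```python
import math

def latlon_to_tile(lat_deg, lon_deg, zoom):
    """Convert WGS84 coordinates to slippy-map tile numbers at a zoom level."""
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat_deg)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# e.g. a point inside the bounding box used above, at zoom 15
print(latlon_to_tile(1.3, 103.8, 15))
```

The resulting (x, y) pair corresponds to the path zoom/x/y.png under the tiles destination folder, which is also how the map2sqlite database keys its rows.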
Researchers working with nonlinear programming often claim "the world is nonlinear", indicating that real applications require nonlinear modeling. The same is true for other areas, such as multi-objective programming (there are always several goals in a real application), stochastic programming (all data is uncertain and therefore stochastic models should be used), and so forth. In this spirit we claim: the world is multilevel. In many decision processes there is a hierarchy of decision makers, and decisions are made at different levels in this hierarchy. One way to handle such hierarchies is to focus on one level and include other levels' behaviors as assumptions. Multilevel programming is the research area that focuses on the whole hierarchy structure. In terms of modeling, the constraint domain associated with a multilevel programming problem is implicitly determined by a series of optimization problems which must be solved in a predetermined sequence. If only two levels are considered, we have one leader (associated with the upper level) and one follower (associated with the lower level).

Table of Contents

Preface.
1. Congested O-D Trip Demand Adjustment Problem: Bilevel Programming Formulation and Optimality Conditions; Yang Chen, M. Florian.
2. Determining Tax Credits for Converting Nonfood Crops to Biofuels: An Application of Bilevel Programming; J.F. Bard, et al.
3. Multilevel Optimization Methods in Mechanics; P.D. Panagiotopoulos, et al.
4. Optimal Structural Design in Nonsmooth Mechanics; G.E. Stavroulakis, H. Günzel.
5. Optimizing the Operations of an Aluminium Smelter Using Non-Linear Bi-Level Programming; M.G. Nicholls.
6. Complexity Issues in Bilevel Linear Programming; Xiaotie Deng.
7. The Computational Complexity of Multi-Level Bottleneck Programming Problems; T. Dudás, et al.
8. On the Linear Maxmin and Related Programming Problems; C. Audet, et al.
9. Piecewise Sequential Quadratic Programming for Mathematical Programs with Nonlinear Complementarity Constraints; Zhi-Quan Luo, et al.
10. A New Branch and Bound Method for Bilevel Linear Programs; Hoang Tuy, S. Ghannadan.
11. A Penalty Method for Linear Bilevel Programming Problems; M.A. Amouzegar, K. Moshirvaziri.
12. An Implicit Function Approach to Bilevel Programming Problems; S. Dempe.
13. Bilevel Linear Programming, Multiobjective Programming, and Monotonic Reverse Convex Programming; Hoang Tuy.
14. Existence of Solutions to Generalized Bilevel Programming Problem; M.B. Lignola, J. Morgan.
15. Application of Topological Degree Theory to Complementarity Problems; V.A. Bulavsky, et al.
16. Optimality and Duality in Parametric Convex Lexicographic Programming; C.A. Floudas, S. Zlobec.
Index.
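For readers new to the area, the leader-follower structure described in the overview above can be written as a generic bilevel program. This is the standard textbook formulation, not a formula copied from the book, and the symbols (F, G for the leader, f, g for the follower) are generic:

\[
\begin{array}{ll}
\min\limits_{x \in X} & F(x, y) \\[2pt]
\text{s.t.} & G(x, y) \le 0, \\[2pt]
 & y \in \operatorname*{arg\,min}\limits_{y' \in Y} \{\, f(x, y') : g(x, y') \le 0 \,\}.
\end{array}
\]

The leader chooses x first; the follower then solves its own optimization problem given x, which is what makes the leader's feasible region implicitly defined by a lower-level optimization problem.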
SDP1 uses CBC mode and thus should negotiate a strongly unique IV ("initial value") for each datagram to prevent same-plaintext packets from appearing as the same ciphertext. The use of the context IV ("CIV") is inadequate by itself to generate a unique datagram IV ("DIV"), as it is fixed across all datagrams within the context. Hence the first block in the plaintext is also the effective IV. This is organised by creating a Pad that includes sufficient contents for the purpose of making the IV unique. In this manner, the first block encrypted is the DIV, and it uses the CIV as its IV. The result provides the next IV for the remaining payload part of the datagram. The requirement that the DIV must be unique is easy to specify, but substantially difficult to guarantee in practical implementations. For that reason, we describe here (at least one) suggested layout that is designed to give a highly reliable unique DIV under regular field circumstances. Implementations MAY follow layouts specified here. An additional benefit of these layouts may be the provision of useful statistical and debugging data. Whichever layout is followed, implementations are responsible for negotiating the Pad within the context, if the receiving end needs to understand it. It would appear that the construction of putting a random block before the payload part of a message, in order to meet the uniqueness requirements of an IV, has been used before. In Kerberos V, it is called a "confounder". In SSH 2, an SSH_MSG_IGNORE message with random data in it is prepended to the plaintext. The suggested Pad is constructed of the following elements:

Nonce Sequence Number. This is a Compact Integer (see Appendix 3) that increments with each new packet sent in the relevant direction.

Time. This is a Compact Integer of the time as modified by the time base kept by the context.

Random.
Remaining bytes are random and pad the Byte Array up to the length required to make the combined plaintext length a multiple of the encryption algorithm's block size (16 bytes).

The sequence number of the packets sent is included as a Nonce in the first packet. The sequence number is sent as a Compact Integer. It SHOULD start from a base number that is determined from the context, and increment with each successive packet. The value zero is reserved. It is permitted to reuse numbers by resetting the counter, although this should be limited to circumstances such as where a node is restarted. (This results from the difficulty of guaranteeing sequence-number recording across a crash.) In this case, an implementation must ensure that the Time or the Randoms ensure uniqueness. The sequence number is not intended for retries or for unique packet identification and is not reliable. Specifically, it is permitted for the sequence number to leave gaps, and it is permitted to reuse numbers.

To assist uniqueness over restarts, a Time is included. The Time is the number of milliseconds since a time base as obtained from the context, stored as a Compact Integer. When combined with a time base taken from the context, the resultant time may indicate the GMT/UTC when the packet was first created. The time base should be before now, so that any time so expressed is positive. Also, a rollover and/or expiry decision is reached in 49 days or so with this method.

Other ideas... The following are possible:
- a millisecond difference from the timestamp in the context (generally expected to be a GMT timestamp).
- a UNIX seconds-since-1970 4-byte unsigned integer.
- a WIN32 8-byte 100ns time?
- a Java 8-byte milliseconds time?
To Be Determined...

The Pad is a Byte Array that is expanded out to fill the concatenated stream to a boundary of the AES cipher block size (16 bytes). The random padding is extended as required in order to make the Pad the right length.
If the earlier elements require 8 bytes, then the random bytes will be at least 7 bytes (and one for the Pad length). See the Security Notes for variations.

In order to support this layout, an implementation may be assisted by the following additional features available from the context:

The time base. This is a set number of milliseconds that applies to the beginning of the context. Each successive packet calculates its time as the number of milliseconds from this time base. The time base MAY be UTC Unix time, but implementations should cope with alternates.

The first sequence number in the context. This MAY be 1 and must not be zero.

The additional information (sequence number, timestamp) in the Pad can be used for: client-side context re-initialisation, debugging and traces, and statistical tracking of packet losses. A client implementation MAY monitor the information and initiate an out-of-band context switch when any high water marks are exceeded. A receiving node MAY retain the information from the last packet received for the purposes of monitoring packet losses, round trip times, etc. As packets are unordered, there will be no reliability to these measurements; reliable protocols must be implemented on top of SDP1.

In construction of the DIV, these are used:

Sequence Count. Include a count of datagrams as a nonce in the first block. This guarantees uniqueness as long as the count can be kept, but this becomes a difficult problem if a restart of the processes occurs. Further, it is predictable by an attacker. Including the count has the advantage that the receiver can generate statistics and warnings on any dropped packets (but this is unreliable, and not to be used for protocol considerations).

Time. A time stamp is guaranteed to be unique as long as the time does not get reset, and as long as no more than one packet per unit time is sent. By itself, time as a nonce is unreliable, and is predictable by an attacker.

Random data.
Use a random block as the first block of the packet. This guarantees randomness and uniqueness, but also means that the protocol is strongly dependent on the source of random data. In practice, while random sources are well understood, they are often subject to misconfiguration, slowness and, worst of all, unreliability. To achieve a practically good IV, Pad1 is constructed as an array of all three elements. The sequence count and the Time work together to provide a guarantee of uniqueness even in the event of restarts and losses of secure storage of the sequence count (we do not consider the case where time itself is reset!). The inclusion of some random data helps to make the IV unguessable. But the inclusion of the unique count/Time also means that the protocol is not dependent on the quality of that randomness; implementations should use what quality they have available and not hold back when faced with a poor-quality source of entropy. In essence, the protocol should deliver good security if two out of three of the elements are good, and should even deliver security if only one of the elements is functioning. This makes for a strongly practical protocol in the face of administration, engineering and configuration difficulties.

Nicolas Williams, posts to the Cryptography list, 25-27 April 2007. Phil Rogaway's comments on Nonces.
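A sketch of the suggested Pad construction, in Python. The Compact Integer wire format is defined in Appendix 3 and not reproduced here, so a simple length-prefixed stand-in is used; treat compact_int, the field order, and the always-add-randomness choice as illustrative assumptions, not the normative encoding:

```python
import os
import time

BLOCK = 16  # AES block size in bytes

def compact_int(n):
    # Stand-in for the spec's Compact Integer (Appendix 3, not reproduced
    # here): big-endian bytes prefixed by a one-byte length. Illustrative only.
    body = n.to_bytes(max(1, (n.bit_length() + 7) // 8), "big")
    return bytes([len(body)]) + body

def make_pad(seq, time_base_ms, payload):
    """Build a Pad of sequence-number nonce, context-relative time, and
    random filler so that len(pad + payload) is a multiple of BLOCK."""
    now_ms = int(time.time() * 1000)
    head = compact_int(seq) + compact_int(now_ms - time_base_ms)
    filler = (-len(head) - len(payload)) % BLOCK
    if filler == 0:
        filler = BLOCK  # design choice in this sketch: always carry some randomness
    return head + os.urandom(filler)
```

Even when sequence number and time repeat (say, after a crash with a reset counter and a coarse clock), the random filler keeps the resulting first block, and hence the DIV, unique with high probability, which is the two-out-of-three property the text argues for.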
World's First DIY HSM

Last week, Prof. Dr. Björn Scheuermann and I published our first joint paper on Hardware Security Modules. In our paper, we introduce Inertial Hardware Security Modules (IHSMs), a new way of building high-security HSMs from basic components. I think the technology we demonstrate in our paper might allow some neat applications where a civil organization deploys a service that no one, not even they themselves, can snoop on. Anyone can build an IHSM without needing any fancy equipment, which makes me optimistic that maybe the ideas of the Cypherpunk movement aren't obsolete after all, despite even the word "crypto" having been co-opted by radical capitalist environmental destructionists. An IHSM is basically an ultra-secure enclosure for something like a server or a Raspberry Pi that even someone with unlimited resources would have a really hard time cracking without destroying all data stored in it. The principle of an IHSM is the same as that of a normal HSM. You have a payload that contains really secret data. There's really no way to prevent an attacker with physical access to the thing from opening it, given enough time and abrasive discs for their angle grinder. So what you do instead is make it self-destruct its secrets within microseconds of anyone tampering with it. Usually, such HSMs are used for storing credit card PINs and other financial data. They're expensive as fuck, all the while having about the same processing speed as a smartphone. Traditional HSMs use printed or lithographically patterned conductive foils for their security mesh. These foils are not an off-the-shelf component and are made in a completely custom manufacturing process. To create your own, you would have to re-engineer that entire process and probably spend some serious money on production machines. Inertial HSMs take the concept of traditional HSMs, but replace the usual tamper detection mesh with a few security mesh PCBs.
These PCBs are coarser than traditional meshes by orders of magnitude, and would alone not even be close to enough to keep out even a moderately motivated attacker. IHSMs fix this issue by spinning the entire tamper detection mesh at very high speed. To tamper with the mesh, an attacker would have to stop it. This, in turn, can be easily detected by the mesh's alarm circuitry using a simple accelerometer as a rotation sensor. In our paper, we have shown a working prototype of the core concepts one needs to build such an IHSM. To build an IHSM you only need a basic electronics lab. I built the prototype in our paper at home during one of Germany's COVID lockdowns. You can have a look at our code and CAD on my git. What is missing right now is an integration of all of these fragments into something cohesive that an interested person with the right tools could go out and build. We are planning to release this sort of documentation at some point, but right now we are focusing our effort on the next iteration of the design instead. Stay tuned for updates ;)
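As a toy illustration of the rotation-monitoring idea (my own sketch, not code from the paper; in a real IHSM this runs on the mesh's alarm circuitry, and every number below is made up): an accelerometer mounted on the spinning mesh sees a large, steady centripetal acceleration of ω²r, so a sudden drop away from that expected value can be treated as tampering.

```python
import math

def centripetal_accel(rpm, radius_m):
    """Expected steady acceleration (m/s^2) seen by an accelerometer
    mounted at radius_m metres from the axis of a rotor spinning at rpm."""
    omega = rpm * 2 * math.pi / 60.0  # angular velocity in rad/s
    return omega ** 2 * radius_m

def tamper_detected(measured_accel, rpm, radius_m, tolerance=0.5):
    """Flag tampering when the measured acceleration deviates from the
    expected centripetal value by more than a fractional tolerance."""
    expected = centripetal_accel(rpm, radius_m)
    return abs(measured_accel - expected) > tolerance * expected

# Hypothetical rotor at 1000 rpm with the sensor 5 cm from the axis:
print(centripetal_accel(1000, 0.05))  # roughly 548 m/s^2, i.e. about 56 g
```

The point of the sketch is only that the signal is enormous and cheap to measure: an attacker who stops or even significantly slows the rotor changes the reading by tens of g, which is trivial to detect before they can reach the payload.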
My first beamerposter, Portuguese language not working

I'm trying to make a poster based on the example on this site: http://www-i6.informatik.rwth-aachen.de/~dreuw/latexbeamerposter.php and using this theme: http://www-i6.informatik.rwth-aachen.de/~dreuw/download/beamerthemeIcy.sty

When I try changing the title to something with Portuguese accents (with [portuguese]{babel}), the output is this: one blank page and some gibberish. Again, I only changed the title of the example posted on that website. It's my first poster and I'm kinda lost...

Forgot to mention: I'm using TeX Live 2010 under Arch Linux.

Example: .tex file http://pastebin.com/HeQe02S2 and .sty theme file http://pastebin.com/mHNa0Dg4. This combination isn't working so well for me. Also: I need to use the beta symbol throughout the text, but I can't make it sans serif (helvet).

Please add a minimal working example illustrating the problem. Also, add any error messages that you get after compilation of the example code.

Adding \usepackage[utf8]{inputenc} fixed the gibberish, but I still don't have hyphenation support, and thus proper line breaking.

@Santiago: change the input encoding line to \usepackage[utf8]{inputenc}. That should fix the problem. (Unless you're getting errors from missing images etc. from the original example.)

Yes, that fixed it, but hyphenation isn't working and, as a consequence, line breaking is ugly (the columns aren't aligned).

@Santiago: it's hard to guess what the problem might be with no actual code (the code you posted shows no hyphenation problems); as I suggested before, post a minimal working example illustrating your hyphenation issues.

OK, I added both the .tex and .sty files in a simple way. I think the culprit is the theme file. Making $\beta$ NOT italic and Helvetica is starting to be my most frustrating experience with LaTeX so far...
I have Latin characters now (except in the footnote, which is set in the .sty file); only good line breaking (maybe there's no such thing in beamer?) and sans-serif (Helvetica) math are left to fix.

@Santiago: You are most likely to find solutions here by asking specific questions and providing a small example inline that demonstrates your problem. The font problem is unrelated to the hyphenation problem and would be best addressed separately. At this point it is no longer clear which of your problems with hyphenation still exist.

Have you added the line \usepackage[T1]{fontenc}? It is necessary for hyphenation to work.

Besides loading babel with the portuguese option, you should also add the following to your preamble: \usepackage[utf8]{inputenc}, which fixes the issue with non-ASCII characters being displayed as "gibberish" (original comment by Alan Munn), and \usepackage[T1]{fontenc}, which is necessary for the hyphenation to work (original comment by Mateus Araújo).

When you compile your LaTeX, what appears in the output regarding babel and hyphenation? In my case this is what I get: Babel <v3.8l> and hyphenation patterns for english, dumylang, nohyphenation, catalan, croatian, ukenglish, usenglishmax, galician, spanish, loaded. I have written documents with beamer and LaTeX in Spanish and English, and a beamerposter in English, and I've had no problems so far related to hyphenation.
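Collecting the fixes suggested in this thread, a minimal preamble for a UTF-8 encoded Portuguese document would be (the comments summarize which symptom each line addresses):

\usepackage[utf8]{inputenc}    % non-ASCII input characters (fixes the "gibberish")
\usepackage[T1]{fontenc}       % 8-bit font encoding, needed for hyphenation to work
\usepackage[portuguese]{babel} % Portuguese hyphenation patterns and captions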
Python is a popular programming language with many applications. Most of you are already aware of machine learning and web development, two fields in which Python is used. Python is a high-level, dynamically typed language and one of the most popular general-purpose programming languages. It is also object-oriented and interpreted: its source code is compiled to bytecode, which is then interpreted. CPython, the reference implementation, converts Python code to bytecode before executing it. Python also provides modules and packages that help with code reuse. Python is open-source software: it can be downloaded for free and used in applications, and its source code can be accessed and modified.

When we wish to run a piece of code only if a certain condition is met, we must make a decision. Python uses the if…else statement to make decisions. The program evaluates the test expression and executes the statement only if the result is True; the statement is not executed if the test expression returns False. In Python, indentation marks the body of the if statement, and the first unindented line after the indented block signals the end of the body. Python treats non-zero values as True, while None and 0 are interpreted as False.

Nested if else in Python

In real life, there are times when we must make choices, and based on those choices, we determine what to do next. Programming encounters similar scenarios where we must make choices and then execute the next block of code based on those choices. Python's decision-making statements are used to do this. It is possible to nest one if…elif…else expression inside another; nesting is the term used in computer programming for this. These statements can be nested inside one another in any number of ways.
The level of nesting can only be determined by indentation. If possible, we should avoid deep nesting, because it can be confusing.

The syntax of nested if…else in Python is:

if condition1:
    # Executes when condition1 is true
    if condition2:
        # Executes when condition2 is true
    # inner if block ends here
# outer if block ends here

Let us look at an example of nested if…else in Python:

# Python program to demonstrate
# a nested if statement
num = 20
if num >= 0:
    if num == 0:
        print("Zero")
    else:
        print("Positive Number")

Output – Positive Number

Flowchart for if else

When using a conditional statement, the program chooses whether to execute a specific code block based on the input and the conditions. Like any other fully featured programming language, Python allows a variety of ways to make decisions; one of the most popular approaches is to use the if…else statement. An if statement is used to determine whether the given condition is true or false; the code block below it runs only when the condition is satisfied. The if…else statement adds a second code block that runs when the condition is not met. We shall examine this statement type and an example of it in this post. The flow chart for the if…else statement is as follows:

Condition → if block (when the condition is true)
          → else block (when the condition is false)

As you can see in the flowchart above, the condition in an if…else statement creates two paths for the program to follow. If the condition is not met, the program executes the else block instead of the code directly below the if. On the other hand, if the condition is satisfied, the program runs the if block and then continues past the end of the if…else statement.
The syntax for the if…else statement is as follows:

if condition:
    # statements to execute when the condition is met
else:
    # statements to execute when the condition is not met

- if, elif, else syntax in Python

elif is short for "else if". It allows us to check a series of expressions: if a condition is False, the condition of the next elif block is examined, and so on. If every condition is False, the else body is executed. Only one of the blocks in an if…elif…else chain is executed. An if statement can have only one else block, but it may have more than one elif block.

- if, else syntax in Python

The if…else statement evaluates the test expression and runs the if statement's body when the test condition is True. If the condition is False, the else clause's body runs instead. Indentation separates the blocks.

Decision-making is one of the fundamental tenets of programming. To become proficient in programming, you must be able to construct appropriate conditional statements, and it's also crucial to practice them frequently. If you are familiar with conditional expressions like if, if…else, and nested if, you can use them to make decisions in your programs and obtain logically sound results.
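A short example tying together the if, elif, and else forms described above (the grading thresholds are arbitrary values chosen for illustration):

```python
score = 72

if score >= 90:
    grade = "A"
elif score >= 75:
    grade = "B"
elif score >= 60:
    grade = "C"   # this branch runs, because 72 >= 60 and the earlier tests failed
else:
    grade = "F"

print(grade)  # → C
```

Note that the conditions are checked top to bottom and exactly one branch runs, which is why the thresholds can overlap without ambiguity.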
January 21, 2020 - 09:02 Pyridium | Cod Accepted Fast Delivery Looking for a pyridium? Not a problem! Buy pyridium online ==> http://newcenturyera.com/med/pyridium ---- Guaranteed Worldwide Shipping Discreet Package Low Prices 24/7/365 Customer Support 100% Satisfaction Guaranteed.
Michigan has multiple pharmacies that currently employ 1,000 people. You can check the details with the company name, type of drug, price, expiry date along with the dosages. It is probably the simplest way of buying medicine. Now, before you start having suspicious thoughts concerning the effectiveness of generic drugs, you ought to read what the experts have to say. This means that those who drop out of school early and quickly obtain a GED may still not be eligible to work as a New Jersey pharmacy technician. Yet a job in pharmaceuticals is quickly becoming one of the hottest jobs in America, and the road to success can be both simple and convenient. The tech will enter orders, verify orders, and process requests for insurance and patient information, among other things. This is where a large number of problems are encountered, but it is important to remember that it is almost never the pharmacy's fault when a claim has been rejected. In the case of pharmacy specialist jobs, dispensing the wrong prescription is the most widely documented error. There are two ways to become a pharmacy technician: getting certified or registered by your state. Some study pre-pharmacy for as little as a day or two and can still pass the exam. If possible, try to use the same pharmacy as much as possible. A reputable company such as Canada Drug Center won't ever divulge your personal data to anyone else. Progression and cancer risk factors can therefore be safely assumed for similar conditions existing in NSCLC. Homeopathy medicine has been around for many years.
how to measure dissolved oxygen?

How do you measure dissolved oxygen reliably in a culture of cells? This page has info on one sensor: http://water.me.vccs.edu/concepts/domeasure.html

How would you know that this sensor is working - meaning that only dissolved oxygen triggers the electrical change in the sensor, and not other atoms? And how would you calibrate the sensor to translate changes in electric signal to the change in number of dissolved oxygen atoms? Also, how would one get pure dissolved oxygen in varying amounts as a 'positive control' for the sensor?

To start, the figure and explanation in the link are almost comically misleading. Potassium chloride solution does not "attract" oxygen. The idea of oxygen flowing through the wire is pretty silly, and

"Where the probe joins the wire, oxygen mingles with the electricity. Oxygen is not very ionized, meaning that it does not have a negative charge as electricity does, so the oxygen dilutes the current at the electrode beyond the probe."

is just plain ridiculous. This one is better. The way this type of probe works is by amperometric reduction of $\ce{O2}$, the first example being the Clark electrode. Essentially, an electrolyte is separated from the measurement solution by a porous film of poly(tetrafluoroethylene) (Teflon). These pores are small enough that only small substances like oxygen can diffuse through. On the electrolyte side, there are two electrodes in one of two possible configurations: In the original Clark-type electrode, the working electrode is platinum, which is chemically inert but catalyzes the reduction of oxygen. The counter electrode is an AgCl-coated Ag electrode that acts as a reference for the measurement (its potential is supposed to remain constant). To reduce the oxygen, an external potential must be applied across the two electrodes (normally using a potentiostat).
The galvanic cell–type electrode has one electrode made of silver (where the oxygen reduction occurs) and one made of an easily oxidized metal like lead. It works the same way as the Clark electrode, but instead of applying an external potential, the current required to reduce the oxygen is supplied by the oxidation of the lead electrode. Any oxygen that reaches the working electrode is reduced and produces a current that is measured to produce the analytical signal. The amount of current produced depends on how much oxygen diffuses through the membrane, which depends on the concentration of dissolved oxygen in the test solution. Now, for your questions: how would you know that this sensor is working - meaning that only dissolved oxygen triggers the electrical change in the sensor, and not other atoms? This one is a bit tricky. You're right that oxygen isn't the only thing that could produce a signal—amperometry isn't a very selective technique on its own. You can control the redox reactions that occur at the working electrode, to a limited degree, by changing the potential you apply, i.e. different substances require different potentials to be reduced and won't be reduced if an insufficient potential is applied. In the case of this specific type of electrode, selectivity comes mainly from the PTFE membrane. Since it limits diffusion to only a few very small molecules, that greatly increases the selectivity of the electrode, though depending on the solution you're trying to measure, there still may be interferences. and how would you calibrate the sensor to translate changes in electric signal to the change in number of dissolved oxygen atoms? 
In principle, you could calculate the number of reduction reactions happening at the surface from Faraday's laws, but turning this into a measure of the exact concentration in solution requires a lot of thought about how the transport of oxygen to the electrode occurs, from diffusion from the electrolyte to the electrode surface to transport from the test solution into the electrolyte. In practice, the assumption that the measured current is directly proportional to the concentration in solution is made (and is valid under many conditions), and a calibration is performed using a standard solution.

also, how would one get pure dissolved oxygen in varying amounts as 'positive control' for the sensor?

Calibration solutions seem to be commonly made using tables of oxygen saturation at different temperatures. It's also possible to measure dissolved oxygen through other means, such as titration. Solutions can be made up, titrated, and then the value used to calibrate an electrode. Now, for the measurement specifically in cell cultures, a membrane electrode may not be the best bet, at least not without sample pre-treatment. The PTFE membrane is quite susceptible to adsorption of proteins and the like, which will probably affect the electrode's response. It may be necessary to somehow remove these contaminants (without affecting the dissolved oxygen concentration). Other options that might be viable are colourimetric measurements or titration, though there are problems with these, as well.
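To make the single-point calibration concrete: assuming the probe current is directly proportional to the dissolved-oxygen concentration, one reference reading in a solution of known concentration fixes the proportionality constant. All numbers below are illustrative; the saturation concentration would come from a table for your temperature, pressure and salinity.

```python
def make_calibration(i_ref, c_ref):
    """Return a function converting probe current to DO concentration,
    assuming the current is directly proportional to concentration."""
    k = c_ref / i_ref  # (mg/L) per unit current
    return lambda i: k * i

# Hypothetical reading of 40 nA in air-saturated water at 25 degrees C,
# where saturation tables give roughly 8.24 mg/L dissolved O2:
to_conc = make_calibration(40.0, 8.24)
print(to_conc(20.0))  # a later 20 nA reading maps to 4.12 mg/L
```

A two-point calibration (adding a zero-oxygen reading, e.g. in a sulfite solution) would additionally correct for any residual current at zero concentration, which real probes often have.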
OK so this is a Monday question: for a long time I've just wanted Emojis inside Vim, so what are the considerations to make this work?

Why you want everything in UTF-8

First of all, just to demystify: an emoji to a computer is just another Unicode character. Unicode is a huge character set that is designed to include basically every character ever invented by people. I can remember distinctly someone (was it Brian? I've forgotten his name) barging into my office explaining how important it was to move from Code Pages, where Windows would switch from one character set to another, to just having a 32-bit Unicode number and no switching. Before Unicode, you had to know the code page and the encoding (think of it like segmented memory), but with Unicode, you could have just one character set that was, of course, huge. The trick is to use a variable-length encoding called UTF-8 so that characters that are frequently used only need 1 byte. Pretty clever, as long as you have a fast computer.

Having a font that displays all Unicode characters

That's a long way of saying that you definitely want to use UTF-8 on your website and everything else. So for instance, if your browser is UTF-8, you can see me say "Thank You" in English and Chinese 谢谢 without any problems. Now of course, we have a new set of characters called Emojis, and the solution is to give them their own Unicode points, so now I can say thank you in Emoji too! 🙏🏻 And I can even change the color of the hands because the variants are new Unicode as well (which is appropriate on MLK day!) 🙏🏿. The good news is that most modern fonts support this, so for instance the FiraCode Nerd Font Mono that I use with iTerm2 works perfectly, as does the default Mac font, San Francisco.

Entering Emojis, the Gitmoji and the real Emoji

Of course, as a programmer it is pretty inconvenient to have representations like this, because you either have to know the Unicode number to enter something or you have to enter the Emoji data entry mode.
Instead, there are the simple GitHub shortcodes: the idea is that if you enter :boom: :heart: :smile: then they should automatically translate into the equivalent emojis. On a Mac, you turn on the Emoji Viewer and then scroll up to reveal the Search box and type these in. So how to enter them on a Mac? Well, to enter things into Vim, you need a font (as discussed) that displays them, and then an easier way than opening up the Emoji Viewer is the shortcut Ctrl-Command-Space, which gives you all the emoji power (including the strange Mac characters for Option and Command). You can set all this up with the Keyboard pane in System Preferences. Note that this gets you the Character Viewer; you then hit expand on the upper right, which turns it from an emoji viewer into a complete character set viewer. So for instance, to display the Command key in macOS you have to enable Technical Symbols to see it (whew!). After you click on that tiny symbol on the upper left, there is another nearly invisible gear icon on the upper right (it is greyed out and very small); click on it and a huge number of additional lists open up. You want Technical Symbols, although while I was there I turned on Windows Dingbats for sentimental reasons. So it turns out that the Command key is actually a Place of Interest sign ⌘, and there's the Option key ⌥, whose symbol's meaning I have no idea of. Otherwise, you have to know the Unicode points for these keys, which actually exist as characters. They are hard to get to, but you can just copy them from websites like this one: ⌘ Command (or Cmd), called Place of Interest in Technical Symbols; ⇧ Shift, called Upwards White Arrow in Technical Symbols; ⌥ Option (or Alt), called Option Key in Technical Symbols; ⌃ Control (or Ctrl), called Up Arrowhead in Technical Symbols. Entering in Vim as markdown or Unicode Now with Vim, you can use the native Mac entry system, but of course there are plugins to help you.
The first lets you do quick direct entry of emojis with vim-emoji-complete, where you use <C-X><C-E> to do the eXtra Enter. Or, with my new-found love of Unicode, you could write this as ⌃X and ⌃E. The second is vim-emoji, which lets you type :boom: and has autocomplete as well, which you get with <C-X><C-U>: just type the : and then you can select the proper GitHub Markdown emoji. This Markdown is used in lots of other places, like Slack for instance.
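The shortcode-to-emoji translation those plugins perform is conceptually just a table lookup; here is a tiny Python sketch of the idea (only three mappings shown, the real GitHub table has hundreds):

```python
import re

# A minimal sketch of GitHub-style shortcode expansion.
SHORTCODES = {
    "boom": "\U0001F4A5",        # :boom:  -> collision symbol
    "heart": "\u2764\uFE0F",     # :heart: -> heavy black heart + emoji presentation
    "smile": "\U0001F604",       # :smile: -> smiling face with open mouth and smiling eyes
}

def expand_shortcodes(text):
    # Unknown shortcodes are left untouched, as GitHub does.
    return re.sub(r":([a-z0-9_+-]+):",
                  lambda m: SHORTCODES.get(m.group(1), m.group(0)),
                  text)
```

A completion plugin does essentially the same lookup, just keyed off the partial text after the colon.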
OPCFW_CODE
RADIUS Authentication Process Overview Applies To: Windows Server 2008, Windows Server 2008 R2 This section contains overview information about the Remote Authentication Dial-In User Service (RADIUS) authentication process, from when a RADIUS client sends an Access-Request message to a RADIUS server to when the RADIUS server sends the Access-Accept or Access-Reject message. In addition, detail about how Network Policy Server (NPS) processes Access-Request messages under different configurations is provided. In this section The RADIUS authentication process begins when a user attempts to access a network by using a computer or other device, such as a personal digital assistant (PDA), through a network access server (NAS) that is configured as a RADIUS client to a RADIUS server. For example, when the user sends credentials by using Challenge Handshake Authentication Protocol (CHAP), the RADIUS client creates a RADIUS Access-Request message containing such attributes as the Port ID the user is accessing and the results of the CHAP authentication process (the user name, the challenge string, and the response of the access client). The RADIUS Access-Request message is sent from the RADIUS client to the RADIUS server. If a response is not received within a specific length of time, the request is re-sent. The RADIUS client can also be configured to forward requests to an alternate server or servers in the event that the primary server is unreachable. An alternate server can be used either after a specified number of non-responses from the primary server, or the RADIUS client can take turns sending the connection request to both the alternate and primary RADIUS server. If you are using the Routing and Remote Access service (RRAS), you can add and prioritize multiple RADIUS servers using a scoring mechanism. If a primary RADIUS server does not respond within three seconds, RRAS automatically switches to the RADIUS server with the next highest score. 
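As a concrete aside, the CHAP computation the access client performs is defined in RFC 1994 as an MD5 hash over the CHAP identifier, the shared secret, and the challenge; the server recomputes it from its own copy of the secret and compares. A minimal sketch with made-up inputs:

```python
import hashlib

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    # RFC 1994: Response = MD5(Identifier || secret || Challenge)
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()
```

The RADIUS Access-Request then carries the user name, the challenge, and this 16-byte response as attributes, which is what lets the server verify the credentials without the cleartext password crossing the wire.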
After the RADIUS server receives the request, it validates the sending RADIUS client. Validation occurs by verifying that the RADIUS Access-Request message is sent from a RADIUS client that is configured on the RADIUS server. If the Access-Request message is sent by a valid RADIUS client, and if the Message-Authenticator attribute is required for the RADIUS client, the value of the Message-Authenticator attribute is verified by using the RADIUS shared secret. If the Message-Authenticator attribute is either missing or contains an incorrect value, the RADIUS message is silently discarded — that is, the message is discarded without logging an event in the Event Log or making an entry in the NPS accounting log. For more information, see Incoming RADIUS Message Validation. If the RADIUS client is valid, the RADIUS server consults a database of users to find the user whose name matches the User-Name attribute in the connection request. The user account contains a list of requirements that must be met to allow access for the user. This list of requirements can include verification of the password, and it can also specify whether the user is allowed access. If any condition of authentication or authorization is not met, the RADIUS server sends a RADIUS Access-Reject message in response, indicating that this user request is not valid. If all conditions are met, the list of configuration settings for the user is placed into a RADIUS Access-Accept message that is sent back to the RADIUS client. These settings include a list of RADIUS attributes and all necessary values to deliver the desired service. For Serial Line Internet Protocol (SLIP) and Point-to-Point Protocol (PPP) service types, this can include values such as encryption types, the Class attribute, Maximum Transmission Unit (MTU), and the desired compression and packet filter identifiers. 
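For reference, the Message-Authenticator attribute mentioned above is an HMAC-MD5 over the RADIUS packet, keyed with the shared secret and computed with the attribute's own value zeroed out (RFC 2869). A hedged sketch of the check, not NPS's actual code:

```python
import hmac
import hashlib

def compute_message_authenticator(packet_with_zeroed_mac: bytes,
                                  shared_secret: bytes) -> bytes:
    # RFC 2869: HMAC-MD5 over the whole packet, with the 16-byte
    # Message-Authenticator value replaced by zeros before hashing.
    return hmac.new(shared_secret, packet_with_zeroed_mac, hashlib.md5).digest()

def verify_message_authenticator(packet_with_zeroed_mac: bytes,
                                 received_mac: bytes,
                                 shared_secret: bytes) -> bool:
    expected = compute_message_authenticator(packet_with_zeroed_mac, shared_secret)
    return hmac.compare_digest(expected, received_mac)
```

A verification failure here is exactly the case the documentation describes as a silent discard: the packet is dropped without any event-log or accounting-log entry.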
For more details about how NPS processes connection requests under different configurations, see Access-Request Message Processing.
OPCFW_CODE
import numpy as np

class AntennaArray:
    def __init__(self, n_antennas=16, lambda_ratio=0.5):
        self.n_antennas = n_antennas
        self.lambda_ratio = lambda_ratio
        self.antenna_index = np.arange(n_antennas)
        self.ang_const = 2 * lambda_ratio * np.pi

    # Angle transformations (abs from -pi to pi, rel from -2*pi*lambda_ratio to 2*pi*lambda_ratio)
    def ang_abs2rel(self, ang):
        # NumPy's inverse sine is np.arcsin (np.asin does not exist)
        return np.arcsin(np.array(ang) / (2 * self.lambda_ratio * np.pi))

    def ang_abs2rel_2(self, ang):
        my_ang = ang
        output = 0
        while my_ang > np.pi / 2:
            output += 2 * self.ang_const
            my_ang -= np.pi
        while my_ang < -np.pi / 2:
            output -= 2 * self.ang_const
            my_ang += np.pi
        return output + self.ang_const * np.sin(my_ang)

    def ang_rel2abs_2(self, ang):
        my_ang = ang
        output = 0
        while my_ang > self.ang_const:
            output += np.pi
            my_ang -= 2 * self.ang_const
        while my_ang < -self.ang_const:
            output -= np.pi
            my_ang += 2 * self.ang_const
        return output + np.arcsin(my_ang / self.ang_const)

    # Steering function
    def bp_steer(self, b, ang):
        # the original multiplied every weight by the same constant factor and
        # never used the antenna index; a per-antenna phase ramp is what a
        # steering vector requires
        return [b_ii * np.exp(1j * ang * ii) for ii, b_ii in enumerate(b)]

    # Beam-pattern creation (relative angle)
    def bp_sinc(self, width):
        half_antennas = (self.n_antennas - 1) / 2
        return [np.sinc((width / (2 * np.pi)) * (ii - half_antennas))
                for ii in range(self.n_antennas)]

    # Array response
    def set_ang_domain_rel(self, x):
        self.ang_domain_rel = x
        self.response_domain_rel = np.exp(
            1j * self.antenna_index[:, np.newaxis] * np.array(x)[np.newaxis, :])

    def array_response_rel(self, bp):
        return np.dot(bp, self.response_domain_rel)

    def set_ang_domain_abs(self, x):
        # the original assigned to ang_domain_rel here, silently overwriting
        # the relative-angle domain
        self.ang_domain_abs = x
        self.response_domain_abs = np.exp(
            1j * self.ang_const * self.antenna_index[:, np.newaxis]
            * np.sin(np.array(x))[np.newaxis, :])

    def array_response_abs(self, bp):
        return np.dot(bp, self.response_domain_abs)
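As a sanity check on the relative-angle response above (response_domain_rel is exp(1j * n * ang)), the same computation can be written standalone; with uniform unit weights, the response magnitude at relative angle 0 should equal the number of antennas:

```python
import numpy as np

def array_factor(weights, ang_rel):
    # Uniform linear array response at relative angle(s) ang_rel:
    # sum_n w_n * exp(1j * n * ang_rel) -- the same inner product that
    # set_ang_domain_rel / array_response_rel evaluate.
    n = np.arange(len(weights))
    ang = np.atleast_1d(np.asarray(ang_rel, dtype=float))
    return np.asarray(weights) @ np.exp(1j * n[:, None] * ang[None, :])
```

At broadside (relative angle 0) every exponential is 1, so all contributions add in phase, which is the peak of the beam pattern.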
STACK_EDU
Icon order changed again in Patch 10.1.5 (Fractures In Time) Hi there, I hope you have recovered and you're well for now, Shushuda. Just wanted to report that the latest minor patch 10.1.5 brought some new minimap icons and it scrambled the blip map once again. I see a new type of exclamation / question mark with a pink background; maybe that mixed up the order. Would you take a look if you have some time, please? Thank you and good health to you! Hello! Sadly, I'm not recovered. I'm actually even worse than I was. I'm on a break from WoW because of that. Thanks for the report! I will fix this soon, maybe this weekend. I won't test it, but it's just a texture, so it should be fine. Thank you for the good wishes! How can it be that you're even worse, Weronika? Have you been in the hospital for half a year? I thought medical treatment is above world average in Germany. You're in Germany afaik, or do you live in Austria? Österreich is even better, I heard. If you lived here in Hungary it would be normal to circulate in the health care system for years, but .... Anyway, I'm pretty glad you're looking into this new bug and I still wish you a speedy recovery! After resizing the minimap, it makes the quest icons on the minimap really large. I went into Mappy options and clicked on smaller nodes, and it reduced the size of the quest icons down to a more normal size, but changed them to what voxxel reported. Maybe add an option to change icon sizes, if possible, when you are feeling better. I'm not at the hospital, but I'm practically home bound. I have occipital neuralgia, and regular treatment such as steroid nerve blocks makes it worse instead of better. Other treatments, such as physical therapy, stretching and exercise, started helping a bit only very recently. The stretching and exercise part I was unable to do for half a year due to worsening of symptoms, but now I can do like 3 kinds of stretches without becoming worse, so progress I guess.
I'm in Poland and I do everything privately because the queues for public health care are horrible. Thank you for the good wishes. Thankfully I didn't cancel the sub, so I can always log in and do some debugging for the addon. As for the resizing, if I remember correctly, both Blizzard's native option and addons use the same function - SetScale(). This enlarges everything at once - border, minimap bg, icons. Changing the icon size in Mappy is done by manually creating a texture map with smaller icons in Photoshop and replacing the in-game texture map with this addon's one. I think I remember researching whether this can be done dynamically and didn't find an answer. I think Mappy's scale option still works the same as the one baked into Edit Mode, but I will recheck. I will fix the textures this weekend; it should make the icons look normal with all Mappy options, sizes, blinking and whatnot. Keep in mind those are for gathering nodes only. Other icons will stay the same size. I could reduce them all, but that'd require quite a bit of manual editing in Photoshop. It is something doable tho. I read up on occipital neuralgia and they say it can be very painful and long lasting. There are quite a lot of patients who were able to ease the pain with intelligent neck massager devices, such as this one: https://youtu.be/Mpo7K3P0Vls . You probably already tried it, but I wanted to share it with you either way. I'm looking forward to the new version with the fixed icon map so I can test it right off. Hi there, Any update on this? Just adding that this is still, unfortunately, an issue. Any news on a forthcoming update would be very much appreciated - but I understand if it may not be coming soon. Hey, sorry it took so long. I've uploaded the new release just now and will upload it to Curseforge in the next 30 minutes or so. Hi Weronika! Thank you! I hope you're doing well. I hope the next major patch 10.2, which is around the corner (Nov 7), won't change the blip order once again.
I'll try to test it on the PTR. Oh, it definitely will. Fixing the addon did make me want to set up WoW again and play, so I guess I will be fixing it a lot faster this time, hahah! As for my health, I'll be honest - it's worse than it was. I don't know what to do, so I might as well try to occupy myself with the new patch. Such is life. I'll preface this by saying that I am by no means an experienced programmer, but if you'd like - at some point in the future, if you walk me through your process for updating/fixing this issue, I could start submitting PRs to handle this for times when you're otherwise drowning in real life obligations. Ah, the textures themselves are quite easy, it's purely Photoshop work. It's just time consuming and tedious. I really appreciate the offer! The 10.2 patch should be fine, I will be taking a 2 week PTO around the same time so I should be able to fix it much quicker this time. Here's what I do to fix the textures. I use two programs for it, plus a graphics program to modify the textures (I have Photoshop so that's what I use): Extract the current minimap blip texture with the wow.export program. It's called ObjectIconsAtlas.blp. Convert the texture from .blp to .png with the BLPNG Converter program. Modify the texture based on the ones that got outdated. There are a few texture files with very specific names (case sensitive) that are used by the addon; they all need to be replaced with updated ones - all based on that one single ObjectIconsAtlas. They're all possible option combinations for modifying those blip icons. I try to keep them pixel perfect, as in align them exactly the same. After creating those new files, save them as .png into the Artwork folder in the addon. Use the BLPNG Converter program to convert them all into .blp. Put those .blp files into the Textures folder in the addon. Done. The newest commit which replaces these textures should make this explanation a bit clearer.
As for Lua errors (so far so good but you never know), that's just Lua programming combined with traversing through Wowpedia to figure out Blizz-specific functions and whatnot.
GITHUB_ARCHIVE
Tests for template tags with pytest-xdist and pytest-cov break view tests using the template tags See https://github.com/pytest-dev/pytest-cov/issues/285 Here's a repository that reproduces the behaviour (see README): https://github.com/TauPan/pytest-django-xdist-cov-bug (which also reproduces #36) General description: 1.) You have a template tag library "foo". 2.) You have a view using a template that uses that library via load foo. 3.) Being a thorough tester, you decide you need a test for the template tag library, which means you have to from app.templatetags import foo in your test. 4.) And of course you need to test the view using the template tag. 5.) And maybe you have to test the template tag before the view (not sure if this is relevant), e.g. pytest discovery puts it before the view test. 6.) And since you have many tests, you run pytest --cov -n 2, which results in an error like the following: django.template.exceptions.TemplateSyntaxError: 'foo' is not a registered tag library. The error only appears if both -n and --cov are used. There are two workarounds at this point: Move the business logic for custom template tags and filters into a separate module and test it separately. Explicitly import the template tag library as proposed in https://github.com/pytest-dev/pytest-cov/issues/285#issuecomment-489419338 However the django documentation at https://docs.djangoproject.com/en/2.2/howto/custom-template-tags/ (or 1.11 or any version) does not mention that templatetags libraries need to be imported anywhere. I'm not sure if any of the relevant modules mentions that (pytest-cov, pytest-xdist, pytest-django or django_coverage_plugin).
Since production code runs without those imports (and --cov and -n2 work fine on their own as well), I suspect there's still a bug somewhere and importing those modules explicitly is just a workaround, with the advantage that it's simpler than my initial workaround of moving the business code of the template tags and filters out of the tag library module and testing it separately. So my take would be that discovery of template tag libraries should not depend on the presence of --cov and -n. I was able to get the same result using your test repository and confirmed #64 fixes the issue. Would you please confirm on your end too? Hi folks, I will look at this ASAP, but don't expect me to get around to this or #64 until at least this weekend. Thank you for your patience. Andrew What's the status of this issue? I'm running into this issue using Django 2.2.17 and pytest with plugins: celery-4.4.6, sugar-0.9.4, xdist-2.1.0, cov-2.10.1, django-4.1.0, Faker-4.17.1, forked-1.3.0, subtests-0.3.2 and using django_coverage_plugin. I'm sorry this repo has been so quiet. We don't have an active maintainer at the moment. This is fixed in 6622791. It's a bit late, but I can confirm that this fixes the problems for me. The reason I needed some time to validate this was that I saw new errors while running coverage concurrently, which turned out to be problems in my test setup. (I had concurrent tests using the same directory, and after introducing some setup code to fix that, I can run coverage concurrently without issues.) Thanks for the fix!
GITHUB_ARCHIVE
vscode: gracefully handle cancellation errors This PR fixes a fleet of errors that rapidly populate the Developer Tools console. Sorry guys, I just created a bug myself and realized that ... IMO, all new FIXMEs are worth fixing in the current PR, if they are not blocked by some conceptual issues. Also, there's another sendRequestWithRetry usage in highlithting.ts that we might want to handle the same way. Should we even move the handling into sendRequestWithRetry, changing its return type to something that reflects the cancellation possibility? Otherwise looks good. I thought about amending the return type of sendRequestWithRetry(), but we cannot use null | R here since null may be a valid R value. The other option would be undefined | R, but one would be confused about why undefined was chosen here (it's because it is not a valid JSON value and was chosen to signal the cancellation)... Ideally, there should be something like Rust's Option<T> so that it supports nesting of optionality, which you cannot represent with null | T (i.e. Option<Option<T>> is not representable with it). And we could create something like type Cancellable<T> = Cancelled | NotCancelled<T>; interface Cancelled { isCancelled: true; } interface NotCancelled<T> { isCancelled: false; value: T; } But this may be too much overhead; catching an error and checking for isCancellationError(err) seems simpler, or doesn't it? As for highlighting, this is not required since we don't cancel it anyway (and should we do it anyway? that is a separate topic that we shouldn't even discuss in anticipation of the semantic highlighting API). Other FIXMEs will be resolved after #3261 is landed... I bet that logError function they use skips cancellations. Is it public?
In editors/code/src/inlay_hints.ts https://github.com/rust-analyzer/rust-analyzer/pull/3277#discussion_r382996856 : client, 'rust-analyzer/inlayHints', request, tokenSource.token, ); } catch (err) { if (!isCancellationError(err)) { // FIXME: log the error throw err; } assert(tokenSource.token.isCancellationRequested); return null; @matklad, but cancellation errors are not actually errors, they don't even deserve a line in our logs... I agree on catching all errors, since otherwise they will propagate and result in unhandled promise rejections, but cancellations are too pervasive and will flood the error logs otherwise. Hmm, let me see... Yes, you are right, though I am not a fan of blindly repeating others' code... https://github.com/microsoft/vscode-languageserver-node/blob/f425af9de46a0187adb78ec8a46b9b2ce80c5412/client/src/client.ts#L3470-L3477 What I noticed is that the first requests for inlayHints return a 0-length array of hints (that is, before the workspace loaded message appears); after that, for some time, requests don't get a response at all (they await a response for an indefinite amount of time). Only when you make some changes to the source code that cancel these requests, in about 30 seconds, do you get an expected response with inlay hints. I guess this problem is not connected with this PR at all (see #3057). bors r+
GITHUB_ARCHIVE
Vagrant is so popular among developers and DevOps engineers because they can continue using their existing development tools (e.g. editors, browsers, debuggers, etc.) on their local system. For example, developers can sync files from a guest machine to the local system, use their favourite editor to edit those files and finally sync them back to the guest machine. Similarly, if they have created a web application on the VM, they can access and test that application from their local system's web browser. In this guide, we will see how to configure networking in Vagrant to provide access to the guest machine from the local host system. Configure Networking In Vagrant Vagrant offers the following three network options: - Port forwarding - Private network (host-only network) - Public network (bridged network) 1. Configure port forwarding By default, we can access Vagrant VMs via SSH using the vagrant ssh command. When we access a VM via SSH, Vagrant forwards port 22 from the guest machine to an open port on the host machine. This is called port forwarding. Vagrant handles this port forwarding automatically, without any user intervention. You can also forward a specific port of your choice. For example, if you forward port 80 in the guest machine to port 8080 on the host machine, you can then access the webserver by navigating to http://localhost:8080 on your host machine. Port forwarding is configured in the "Vagrantfile". Go to your Vagrant project directory and open the Vagrantfile in your favourite editor. Find the following line:

Vagrant.configure("2") do |config|
  [...]
  # config.vm.network "forwarded_port", guest: 80, host: 8080
end

Uncomment it and define what ports to forward where. In this example, I am forwarding port 80 in the guest to port 8080 on the host.

Vagrant.configure("2") do |config|
  [...]
  config.vm.network "forwarded_port", guest: 80, host: 8080
end

Now restart the Vagrant machine with the updated Vagrantfile: $ vagrant reload --provision You will see that the port forwarding is configured in the output:

==> default: Halting domain…
==> default: Starting domain.
==> default: Waiting for domain to get an IP address…
==> default: Waiting for SSH to become available…
==> default: Creating shared folders metadata…
==> default: Forwarding ports…
==> default: 80 (guest) => 8080 (host) (adapter eth0)
==> default: Rsyncing folder: /home/sk/myvagrants/ => /vagrant

You can also destroy the VM and re-run it with the updated Vagrantfile: $ vagrant destroy <VM-name> $ vagrant up Now SSH into the guest machine with the command: $ vagrant ssh Install the Apache webserver in it. If the VM is Deb-based, run: $ sudo apt install apache2 If it is a RHEL-based system, run this: $ sudo yum install httpd Start the Apache service: $ sudo systemctl enable --now httpd Now open the web browser on your host system and navigate to http://localhost:8080. You will see the Apache test page in your browser. Even though we access the web server with the URL http://localhost:8080 on the host system, it is not served by a local webserver. The actual website (i.e. the Apache test page) is being served from the guest virtual machine, and all actual network data is being sent to the guest. 1.1. What if port 8080 is being used by another application? In our previous example, we forwarded port 80 from the guest to port 8080 on the host. In other words, traffic sent to port 8080 is actually forwarded to port 80 on the guest machine. What if some other application is currently using port 8080? Port collisions happen when running multiple VMs; you may unknowingly forward an already-used port. No worries! Vagrant has built-in support for detecting port collisions.
If the port is already being used by another application, Vagrant will report it in the output, so you can either free up the port or use another one. Vagrant is also smart enough to find and correct port collisions automatically. If it finds that a port collides with another, it will auto-correct by using some other unused port for you. To enable auto-correction, add the extra option auto_correct: true to the port forwarding definition in your Vagrantfile:

config.vm.network "forwarded_port", guest: 80, host: 8080, auto_correct: true

By default, Vagrant will choose the auto-correction port from the range between port 2200 and port 2250. You can also choose your own custom range by defining the following line in the Vagrantfile:

config.vm.usable_port_range = (2200..2250)

Restart the Vagrant machine for the changes to take effect. 1.2. Change network protocol By default, Vagrant uses the TCP protocol for port forwarding. You can, however, use the UDP protocol if you want to forward UDP packets. To forward a UDP port, add the following definition to your Vagrantfile:

config.vm.network "forwarded_port", guest: 80, host: 8080, protocol: "udp"

Restart the Vagrant VM to apply the changes. 2. Configure private network In private or host-only networking, a network connection is created between the host system and the VMs on the host system. The private network is created using a virtual network adapter that is visible to the host operating system. Other systems on the network can't communicate with the VMs; all network operations happen within the host system. The private network can also function as a DHCP server and has its own subnet. Each VM in the private network will get an IP address from this IP space, so we can access the VMs directly by IP from the host system. To configure private or host-only networking in Vagrant with a static IP address, open the Vagrantfile, find the following line and uncomment it.
config.vm.network "private_network", ip: "192.168.121.60"

Here, 192.168.121.60 is the IP address of the VM. Replace this with your own IP. Restart the VM for the changes to take effect. If you want to get an IP address automatically from DHCP, modify the private network definition like below:

config.vm.network "private_network", type: "dhcp"

A random IP address will be assigned to the VM. In order to find the IP of a VM, you need to SSH into it using the vagrant ssh command and then find its IP address with a command such as ip addr. 3. Configure public network In public or bridged networking, all the VMs will be on the same network as your host machine. Each VM will receive its own IP address from a DHCP server (if one is available on the local network). So all VMs will act like just another physical system on the network, and they can communicate with any system on the network. To configure public or bridged networking, edit the Vagrantfile, find the following line and uncomment it:

config.vm.network "public_network"

Save and close the file. Restart the VM to apply the changes. The VM will automatically get an IP address. If you want to set a static IP, just modify the network definition like below:

config.vm.network "public_network", ip: "192.168.121.61"

4. Set hostname You can define the hostname using the config.vm.hostname setting in the Vagrantfile. Edit the Vagrantfile in your preferred editor and add/modify the following line:

config.vm.hostname = "myhost.ostechnix.example"

Save and close the file. The above definition will add a myhost.ostechnix.example line in the /etc/hosts file. Restart the Vagrant VM for the changes to take effect. Verify that the hostname has changed: [vagrant@myhost ~]$ hostname -f myhost.ostechnix.example You can also directly check the contents of the /etc/hosts file: $ cat /etc/hosts 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 127.0.1.1 myhost.ostechnix.example myhost 5. Enable multiple network options Each network option has its own upsides and downsides.
For some reason, you might want to configure all of the network options on a single VM. If so, Vagrant has the ability to enable multiple network options. All you have to do is define the network options one by one in the Vagrantfile like below:

config.vm.hostname = "myhost.ostechnix.example"
config.vm.network "forwarded_port", guest: 80, host: 8080, auto_correct: true
config.vm.network "private_network", ip: "192.168.121.60"

When you create a new VM with this Vagrantfile, Vagrant will create a VM with the following network details: - set the hostname to myhost.ostechnix.example - configure port forwarding - configure a private network with the static IP 192.168.121.60 In addition to configuring multiple types of networks, we can also define multiple networks of the same type. For example, you can define multiple host-only networks with different IP addresses like below:

config.vm.network "private_network", ip: "192.168.121.60"
config.vm.network "private_network", ip: "192.168.121.61"

Hope this helps. At this stage, you should have a basic idea of the Vagrant networking types and how to configure networking in Vagrant. There is more to learn; I recommend looking into the official Vagrant documentation for more detailed configuration.
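Conceptually, the auto_correct behaviour described in section 1.1 boils down to scanning the usable port range for a port nothing is bound to. A rough Python sketch of that idea (Vagrant itself is written in Ruby; this is not its actual implementation):

```python
import socket

def find_free_port(start=2200, end=2250):
    # Walk the usable port range and return the first port we can bind,
    # mimicking what auto_correct does conceptually.
    for port in range(start, end + 1):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind(("127.0.0.1", port))
            except OSError:
                continue  # port in use, try the next one
            return port
    raise RuntimeError("no free port in range %d-%d" % (start, end))
```

The default 2200-2250 range here matches the range Vagrant uses unless you override config.vm.usable_port_range.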
OPCFW_CODE
The independent component analysis (ICA) is a popular technique adopted to approach the so-called blind source separation (BSS) problem, i.e., the problem of recovering and separating the original sources that generate the observed data. However, the independence condition is not easy to impose, and it is often necessary to introduce some approximations. Here we provide a MATLAB code which implements a modified variational Bayesian ICA (vbICA) method for the analysis of GNSS time series. The vbICA method models the probability density function (pdf) of each source signal using a mixture of Gaussian distributions, allowing for more flexibility in the description of the pdf of the sources with respect to standard ICA, and giving a more reliable estimate of them. In particular, this method allows the recovery of multiple sources of ground deformation even in the presence of missing data. This material is based on the original work of Choudrey (2002) and Choudrey and Roberts (2003), subsequently adapted by Gualandi et al. (2016) and Serpelloni et al. (2018) for the study of GNSS position time series. Steps to reproduce Put your GNSS data in the Data folder. They must be in the same format as the one you can find there. Go to the Scenarios/casestudy/case1/dataset folder. In stn_list there is a file with the paths of all the GNSS time series you want to use, which is referenced by the file data_input_file.txt. Make sure that all the paths are correct. Now consider the scen_parameters.m file in the parameter_files folder. In this file you can choose the initial/final epoch, the number of components you want to use for the decomposition (options.scen.N), and the missing-data thresholds for time series and epochs. options.scen.select.r and options.scen.select.origin allow you to consider only GNSS stations within a radius of length options.scen.select.r (in km) centered at options.scen.select.origin.
If you choose a very large radius, you will take all the GPS stations into account. You can also take a look at the other files in the parameter_files folder. Once you have set the initial epochs and the number of independent components you want, you can run the code (ICAIM_code/ICA_driver_clean.m). If everything goes right, it will generate a matfile called all.mat in ICAIM_code/Scenarios/casestudy/case1/matfiles. In the ICA variable you can find the temporal evolution of each IC (ICA.V), the spatial distribution (ICA.U) and the weights (ICA.S). ICA.U has three times as many rows as there are GPS stations because, if for example you have 2 GPS stations ABCD and EFGH, the rows of U are ordered (ABCD_east, ABCD_north, ABCD_up, EFGH_east, EFGH_north, EFGH_up). The first column of ICA.U is associated with IC1, the second column of ICA.U is associated with IC2, etc. Please note that this code was developed with an old version of MATLAB (2014b).
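The ICA.U row ordering described above is easy to unpack programmatically. Here is a small illustrative sketch in Python (the package itself is MATLAB; the function below is not part of it):

```python
def split_spatial_component(u_column, stations):
    """Unpack one column of ICA.U (one IC) into per-station east/north/up
    values, given rows ordered (st1_east, st1_north, st1_up, st2_east, ...)."""
    assert len(u_column) == 3 * len(stations)
    out = {}
    for i, st in enumerate(stations):
        east, north, up = u_column[3 * i : 3 * i + 3]
        out[st] = {"east": east, "north": north, "up": up}
    return out
```

The same slicing logic applies per column, since each column of ICA.U corresponds to one independent component.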
Clarification on how pipe() and dup2() work in C

I am writing a simple shell that handles piping. I have working code, but I don't quite understand how it all works under the hood. Here is a modified code snippet I need help understanding (I removed error checking to shorten it):

int fd[2];
pipe(fd);
if (fork()) {
    /* parent code */
    close(fd[1]);
    dup2(fd[0], 0);
    /* call to execve() here */
} else {
    /* child code */
    close(fd[0]);
    dup2(fd[1], 1);
}

I have guesses for my questions, but that's all they are - guesses. Here are the questions I have:

Where is the blocking performed? In all the example code I've seen, read() and write() provide the blocking, but I didn't need to use them here. I just copy STDIN to point at the read end of the pipe and STDOUT to point at the write end of the pipe. What I'm guessing is happening is that STDIN is doing the blocking after dup2(fd[0], 0) is executed. Is this correct?

From what I understand, there is a descriptor table for each running process that points to the open files in the file table. What happens when a process redirects STDIN, STDOUT, or STDERR? Are these file descriptors shared across all processes' descriptor tables? Or are there copies for each process? Does redirecting one cause changes to be reflected among all of them?

After a call to pipe() and then a subsequent call to fork(), there are 4 "ends" of the pipe open: a read and a write end accessed by the parent, and a read and a write end accessed by the child. In my code, I close the parent's write end and the child's read end. However, I don't close the remaining two ends after I'm done with the pipe. The code works fine, so I assume that some sort of implicit closing is done, but that's all guesswork. Should I be adding explicit calls to close the remaining two ends, like this?
int fd[2];
pipe(fd);
if (fork()) {
    /* parent code */
    close(fd[1]);
    dup2(fd[0], 0);
    /* call to execve() here */
    close(fd[0]);
} else {
    /* child code */
    close(fd[0]);
    dup2(fd[1], 1);
    close(fd[1]);
}

This is more of a conceptual question about how the piping process works. There is the read end of the pipe, referred to by the file handle fd[0], and the write end of the pipe, referred to by the file handle fd[1]. The pipe itself is just an abstraction represented by a byte stream. The file handles represent open files, correct? So does that mean that somewhere in the system, there is a file (pointed at by fd[1]) that has all the information we want to send down the pipe written to it? And that after pushing that information through the byte stream, there is a file (pointed at by fd[0]) that has all that information written to it as well, thus creating the abstraction of a pipe?

execve doesn't return unless there is an error; call close(fd[0]) before execve.

Ah, good point - I forgot about that.

Nothing in the code you've provided blocks. fork, dup2, and close all operate immediately. The code does not pause execution anywhere in the lines you've printed. If you're observing any waiting or hanging, it's elsewhere in your code (e.g. in a call to waitpid or select or read).

Each process has its own file descriptor table. The file objects are global between all processes (and a file in the file system may be open multiple times, with different file objects representing it), but the file descriptors are per-process, a way for each process to reference the file objects. So a file descriptor like "1" or "2" only has meaning in your process -- "file number 1" and "file number 2" probably mean something different to another process. But it's possible for processes to reference the same file object (although each might have a different number for it).
So, technically, that's why there are two sets of flags you can set: the file descriptor flags that aren't shared between processes (FD_CLOEXEC), and the file object flags (such as O_NONBLOCK) that are shared even between processes.

Unless you do something weird like freopen on stdin/stdout/stderr (rare), they're just synonyms for fds 0, 1, 2. When you want to write raw bytes, call write with the file descriptor number; if you want to write pretty strings, call fprintf with stdout/stderr -- they go to the same place.

No implicit closing is done; you're just getting away with it. Yes, you should close file descriptors when you're done with them -- technically, I'd write if (fd[0] != 0) close(fd[0]); just to make sure!

Nope, there's nothing written to disk. It's a memory-backed file, which means that the buffer doesn't get stored on disk. When you write to a "regular" file on the disk, the written data is stored by the kernel in a buffer, and then passed on to the disk as soon as possible to commit. When you write to a pipe, it goes to a kernel-managed buffer just the same, but it won't normally go to disk. It just sits there until it's read by the reading end of the pipe, at which point the kernel discards it rather than saving it. The pipe has a read and a write end, so written data always goes at the end of the buffer, and data that's read out gets taken from the head of the buffer and then removed. So, there's a strict ordering to the flow, just like in a physical pipe: the water drops that go in one end first come out first from the other end. If the tap at the far end is closed (process not reading) then you can't push (write) more data into your end of the pipe. If the data isn't being written and the pipe empties, you have to wait when reading until more data comes through.

Okay, I understand the first 3 answers. I'm still confused on the 4th one, however. It sounds to me that calling this a pipe is a little misleading.
It seems akin to Person A dumping water in a bucket, and Person B filling up their cup from that same bucket that Person A dumped water into. Ends on the pipe are just abstractions - there are no ends, just designated memory-backed files to push data to and pull data from the same source. Is this a correct understanding? Also, where is the buffer located if not on the disk? Memory? The cache?

Actually, I lied about fully understanding #3. How come I can get away with not closing all the ends of the pipe? It seems that I should be in a perpetual state of blocking. Once the data is read from the buffer, is it removed? Because if so, then shouldn't the kernel just go back to blocking since it's an empty buffer? How can execution continue without closing everything? How am I, as you put it, "getting away with it"?

Answers 2, 3, 4 expanded. Why shouldn't execution continue without closing everything? On which line of your code would you expect execution to pause? The processor just carries on executing, line by line; execution won't stop until the kernel pauses you when you make a system call it can't reply to immediately (e.g. you say "give me some data" and there isn't any, so it pauses you until it can reply with some data).

First of all, you usually call execve or one of its sister calls in the child process, not in the parent. Remember that a parent knows who its child is, but not vice versa. Underneath, a pipe is really a buffer handled by the operating system in such a way that it is guaranteed that an attempt to write to it blocks if the buffer is full and that a read from it blocks if there is nothing to read. This is where the blocking you experience comes from. In the good old days, when buffers were small and computers were slow, you could actually rely on the reading process being awoken intermittently, even for smallish amounts of data, say in the order of tens of kilobytes. Now in many cases the reading process gets its input in a single shot.

Gotcha.
I didn't think about that for execve. Also, thanks for the clarification on where the blocking comes from - that was really stumping me! It was pretty much the only thing Nicholas Wilson left out of his answer. Also, does that mean that read() and write() don't actually do any blocking themselves? Is it just the kernel that blocks reading from an empty buffer and writing to a full buffer?

read() and write() are part of the kernel, so yours is sort of a moot question. In any case, that's the standard way of working for those functions: when you write to a disk file, your process is likely to produce output more quickly than the disk is able to take. In an analogous way, in an interactive, text-based program, when you read user input your process is made to wait until the user enters some characters, possibly until return is pressed.
An algorithm specialist is a computer scientist who researches and designs algorithms for academic and real-world applications. Algorithms are sequences of instructions that perform different types of tasks, and they can be categorized by how long they take to execute. A person who researches algorithms spends a great amount of time trying to find ways to substitute faster-running sequences of instructions for sequences that make an algorithm complicated.

How Algorithms Are Analyzed

The slowest algorithms require an exponential number of steps in relation to the number of input values. The fastest algorithms can be executed in some constant number of steps, and they aren't affected by the number of input values. In algorithm design, the number of input values is represented by the variable n, and sometimes additional variables are used for algorithms whose running times depend on the sizes of more than one set of input values. Exponential algorithms typically run in some order of O(C^n) time complexity, where C is a constant and n is the variable number of input values. For example, a simplistic brute-force algorithm for finding a password has to try every combination of n characters drawn from 256 possible character values, so it runs in O(256^n) time. One of the most important areas of algorithm research is the problem of P versus NP, or polynomial-time algorithms versus nondeterministic polynomial-time algorithms. There is a $1 million prize being offered to anyone who can prove whether or not P equals NP. This prize has been on offer for over a decade, and so far, none of the smartest computer scientists have figured out a way to prove it. If it can be proved that they're the same, then problems such as the brute-force password crack could be solved in polynomial time complexity, or O(n^C), where C is some constant.

Computer Science Research Work Environment

Becoming an algorithm specialist usually requires a doctoral degree.
These scientists design programs to do sophisticated work, such as automated financial trading, artificial intelligence, data mining, physics simulations and quantum computing. Computer researchers can be employed by universities or by companies that invest in algorithm technology, such as IBM and Google. A famous example of the work of these scientists is the Watson computer program that competed on Jeopardy in 2011. IBM developed Watson for the purpose of playing against humans on Jeopardy, and it succeeded in beating two former first-place Jeopardy champions. Algorithm researchers are also employed by banks and investment funds to create automated trading software that reduces the risk of making hundreds or thousands of trades for mutual funds and other types of investments. According to the U.S. Bureau of Labor Statistics, these researchers earned a median annual salary of $102,190 in 2012, and the industry was expected to grow by 15 percent over the next ten years. In the development of computer technology, algorithm research is just as important as hardware innovation, because faster algorithms allow existing hardware to work more efficiently; in some areas, gains from better algorithms have outpaced gains from better hardware. Researching algorithms takes a great love of mathematics and quantitative problem solving. Getting a PhD usually takes six to eight years, and a computer science PhD is one of the most difficult degrees to get. If you have a passion for discrete mathematics and want to make breakthroughs in computer science, consider becoming an algorithm specialist.
Operations Manager – Extending UNIX/Linux Monitoring with MP Authoring – Part IV

March 27, 2011

In Part III of this series, I walked through the creation of data sources, a discovery, and a rule for discovering dynamically-named log files, and implemented an alert-generating rule for log file monitoring. In this post, I will continue to expand this Management Pack to implement performance collection rules, using WSMan Invoke methods to collect numerical performance data from a shell command.

Using Shell Commands to Collect Performance Data

Whether it is system performance data from the /proc or /sys file systems, or application performance metrics in other locations, performance data for UNIX and Linux systems can often be found in flat files. In this example Management Pack, I wanted to demonstrate using a WSMan Invoke module with the script provider to gather a numeric value from a file and publish the data as performance data. In many cases, this would be slightly more complex than is represented in this example (e.g. if the performance metric value should be the delta between data points in the file over time), but this example should provide the framework for using the contents of a file to drive performance collection rules. The root of these workflows is a shell command using cat to read the file, which could be piped to grep, awk, and sed to filter for specific lines and columns. Additionally, if the performance data (e.g. hardware temperature or fan speed, current application user or connection count) that you are looking for is not stored in a file, but is available in the output of a utility command, the same method can be used by substituting the utility command for cat.

Collecting Performance Data from a File

In this example, the MyApp application stores three performance metrics in flat files in the subdirectory ./perf. I have built three rules that cat these files and map the values to performance data.
The three rules are functionally identical, so I will only describe one of them.

Performance Collection Rule: MyApp.Monitoring.Rule.CollectMyAppMem

This rule collects the value from the ./perf/mem file in the application directory, which represents the current memory used by the application in KB. The rule targets the MyApp.Monitoring.MyApp application class.

Rule Data Source: The rule uses the MyApp.Monitoring.ShellCommandMonitoring data source, described in Post I of this series, with the configuration:

<Interval>300</Interval>
<TargetSystem>$Target/Host/Property[Type="MicrosoftUnixLibrary!Microsoft.Unix.Computer"]/NetworkName$</TargetSystem>
<ShellCommand>cat $Target/Property[Type="MyApp.Monitoring.MyApp"]/InstallPath$/perf/mem</ShellCommand>
<Timeout>120</Timeout>

Notice that the ShellCommand is our cat command:

cat $Target/Property[Type="MyApp.Monitoring.MyApp"]/InstallPath$/perf/mem

Rule Condition Detection: A System.Performance.DataGenericMapper is used as the condition detection module to map the StdOut to performance data, with the configuration:

<ObjectName>$Target/Property[Type="MyApp.Monitoring.MyApp"]/Name$</ObjectName>
<CounterName>Memory used (KB)</CounterName>
<InstanceName>Total</InstanceName>
<Value>$Data///*[local-name()="StdOut"]$</Value>

The rule defines two write actions: Microsoft.SystemCenter.CollectPerformanceData and Microsoft.SystemCenter.DataWarehouse.PublishPerformanceData, to collect the data and publish it to the DW. These require no configuration. The end result is that the scheduled rule grabs the value from the text file, maps it to performance data, and collects and publishes the performance data to the OM and DW databases. This mechanism can be used for nearly any numerical performance metric that is accessible (in a timely fashion) from a shell command pipeline or script.
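The cat-plus-pipeline idea behind the ShellCommand can be tried out directly in a shell. This sketch is illustrative only: apart from the ./perf/mem layout taken from the example MP, the file contents and the multi-line stats file are invented here to show where grep and awk would come in.

```shell
#!/bin/sh
# Work in a throwaway directory so nothing on the host is touched.
dir=$(mktemp -d)
mkdir -p "$dir/perf"

# Simple case: the file holds just the value, so cat alone suffices
# (this mirrors the ShellCommand in the rule's data source).
echo "123456" > "$dir/perf/mem"
mem=$(cat "$dir/perf/mem")

# If the metric instead sat on a labelled line in a multi-line file,
# a grep/awk pipeline filters the right line and column:
printf 'cpu 17\nmem 123456\n' > "$dir/perf/stats"
mem2=$(grep '^mem' "$dir/perf/stats" | awk '{print $2}')

echo "$mem $mem2"
rm -rf "$dir"
```

Whatever the pipeline prints on StdOut is what the DataGenericMapper picks up as the performance value, so the command should emit exactly one number and nothing else.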
Full Staking Node

This is a quick guide on how to set up a full Vaporum Coin staking node utilizing all of the 64 segID addresses possible.

Requirements to run the node

Enterprise-class (recommended) Linux server that is able to run 24/7. The following setup was tested on Ubuntu 18.04 server but may work on other versions. Minimum specifications should be 4 cores, 120 GB HDD, and a minimum of 4 GB RAM (8 GB recommended).

Security and backup: this staking node will be holding coins, so make sure it is secure and has proper backups.

Coins: since you are staking, you will need a decent number of coins for this setup. Coins are distributed across 64 segID addresses, so you should have enough coins to distribute evenly to these 64 addresses.

Command line (CLI) experience.

We recommend you start with a small amount of your coins to ensure the staking node is operational and functioning as it should before large amounts of coins are moved over. The following assumes your server is operational.

Install Vaporum Coin Daemon

Log into the server terminal using SSH, or start a terminal on the machine.

Install needed dependencies:

sudo apt-get install build-essential pkg-config libc6-dev m4 g++-multilib autoconf libtool ncurses-dev unzip git python python-zmq zlib1g-dev wget libcurl4-gnutls-dev bsdmainutils automake curl libsodium-dev

Clone and Build Vaporum Coin

# Clone the VaporumCoin repo
git clone https://github.com/VaporumCoin/vaporum.git --single-branch
# Change master branch to another branch you wish to compile
# Change -j4 to specify the number of cores to use. ex: -j2
# This can take some time.

Setup Vaporum Coin

Change directory to vaporumcoin.

Start a new screen for the vaporumcoin daemon:

screen -S vaporum

The Vaporum coin daemon will start up and begin syncing. To exit this screen use Ctrl + A then D. You can return to this screen at any time by typing screen -r vaporum.

Type ./vaporumcoin-cli getnewaddress to get a new vaporum address.
You fund your Vaporum staking node by sending funds to this address. Check to see if vaporum is fully synced by typing ./vaporumcoin-cli getinfo; when the node is fully synced it will read true under sync. You will also use the command above to check the balance of the staking node.

Install needed dependencies:

sudo apt-get install python3-dev
sudo apt-get install python3 libgnutls28-dev libssl-dev
sudo apt-get install python3-pip
pip3 install setuptools
pip3 install wheel
pip3 install base58 slick-bitcoinrpc

Clone pos64staker and enter the directory:

git clone https://github.com/KMDLabs/pos64staker

Run the genaddresses script to create your 64 addresses. It will ask you to specify which chain you want to stake with. Type VPRM at the prompt.

Please specify chain:VPRM

This will then create a list.json file in the current directory (/home/pos64staker). THIS FILE CONTAINS YOUR PRIVATE KEYS ***DO NOT SHARE***

Copy the list.json file to the directory vaporumcoin is located in:

cp list.json ~/vaporumcoin/src/list.json

Open the list.json file so you can retrieve your pubkey. Copy/paste the pubkey of the first segID to a txt file so you can use it later. Use Ctrl + X to exit the file.

Distribute your vaporum to all 64 addresses

Specify VPRM as your chain and decide on the size and amount of UTXOs to send. Try to use the whole balance you sent to the vaporum node.

Please specify chain:VPRM
Please specify the size of UTXOs:9
Please specify the amount of UTXOs to send to each segID:2

Please review what is being asked here: the above example will send 1152 coins in total. It will send 18 coins in 2 UTXOs (9 each) to each of the 64 addresses (segIDs) - ((9*2)*64)=1152. It will give an error if you enter amounts greater than your balance. It will tell you how much is available for each segID.

Change directory to vaporum/src:

cd && cd vaporum/src

Stop the running vaporum coin daemon.
Resume the vaporum coin screen:

screen -r vaporum

Restart the vaporumcoin chain with the -pubkey and -blocknotify parameters:

./vaporumcoind -pubkey=<pubkey_from_list.json> '-blocknotify=/<path_to>/pos64staker/staker.py %s VPRM'

Make sure to replace <pubkey_from_list.json> with the pubkey you copied earlier from the list.json file. Make sure to replace <path_to> in -blocknotify= with the correct path to the staker.py script file. Example: /home/$USER/pos64staker/staker.py

Use Ctrl + A then D to exit the screen.

./vaporumcoin-cli setgenerate true 0

Your Vaporum coin staking node should be finished and staking. To verify you are staking, check the staking info; you are looking for "staking": true. After some time your balance will begin to grow. As blocks are staked and you earn more coins, pos64staker will distribute the new coins to each of your 64 segID addresses.

Note: If you stop vaporumcoin at all for updates, reboots, etc., simply follow the steps from Launch Vaporumcoin to start staking again.

If you'd like to withdraw funds from the staking node without messing up the 64 segID distribution, do the following:

cd && cd pos64staker

Run ./withdraw.py; it will walk you through the withdrawal steps. If running ./withdraw.py returns Permission denied, run chmod +x withdraw.py to fix the permission.
XLCubed Excel Edition brings the full power and flexibility of OLAP reporting to Excel. The strengths of Excel as a familiar and powerful calculation and modelling environment are retained, while the risk of Excel as a data store is removed. Connecting to Microsoft Analysis Services cubes (the BI component of SQL Server), SQL databases and more, XLCubed provides slice-and-dice analytics, free-format asymmetric reporting, and a rich environment for interactive, data-connected Excel dashboards. XLCubed provides an easy to use yet powerful analytics environment. With XLCubed Grids, users can explore and understand their data through slice and dice, drill through the available hierarchies, and quickly specify level- or descendant-based selections for dynamic reporting. User-defined calculations are simple to add, but OLAP-aware, and respect drill-downs and hierarchy repositioning. Report sections can be linked together by hierarchy, and the selection criteria can use the XLCubed dialogs, native Excel drop-downs, or direct user entry. Any number can be quickly broken down into its constituent parts, and in-grid visualisation is available within a fully interactive environment. Read our blog post on XLCubed Analytical Applications.

For some reports, layout is key, and there are precise corporate templates which must be adhered to. Asymmetric reporting is also a requirement in many environments. XLCubed's formula model means that it's possible to create a data-connected report of any shape and layout achievable in native Excel. The key formulae are simple in concept and are created through intuitive dialogs, or by converting an XLCubed Grid. Despite the free-format layout, the reports still offer drill down, drill through, and number decomposition (breakout). Excel is a highly effective environment for dashboard reporting.
The user has fine-grained control over the positioning, layout and sizing of charts and tables, and the ability to easily calculate additional metrics using Excel itself. The primary issue has always been that of Excel as an isolated data island, or 'Spreadmart'. XLCubed maintains an active connection to the data, taking away the maintenance burden associated with most traditional Excel dashboards, but retains the full flexibility of layout available in native Excel. The in-cell charting available with MicroCharts, a licensed component of XLCubed Excel Edition, significantly extends what is achievable in terms of data visualization, and is ideally suited to dashboard reporting. Read our blog post on XLCubed Dashboards.

Want to Know More? For more information on how XLCubed Excel Edition could benefit your company, fill out the enquiry form or contact us on 02 9672 4222 for a FREE no obligation chat. What have you got to lose!
[Note: For additional information, including embedded checks for understanding and teacher directions, refer to the lesson here: Whole Lesson (with comments) or the entire lesson in PDF form here: Whole Lesson [PDF]]

I like this lesson mainly because it's a ton of fun. While it is fun (obviously), I also like it because I feel it isn't purely activity-based - a large number of students have self-reported that this lesson really helped them "get" the rock cycle and the types of rocks. The materials are listed below, but the lesson itself revolves around a mini-lab where students get to actually model the stages of the rock cycle in creating rock types with some tasty ingredients. Keep in mind, you may need slightly more time than an hour (or split it over two days), and you may want to allocate enough time for clean up at the end!

Needed Materials (per group - I have groups of 4 students):

Students come in silently and complete the Do Now. After time expires (anywhere from 2-4 minutes, depending on the type of Do Now and number of questions), we collectively go over the responses (usually involving a series of cold calls and/or volunteers) before asking a student to read the objective to start the lesson. As a note, the Do Now serves a few purposes: general review of the previous day's material, re-activation of student knowledge to get them back into "student mode" and get them thinking about science, an efficient and established routine for entering the classroom, and a strategy for reviewing material students have struggled with.

In this section, students are first given some time to collectively work on a problem together. The first section in the Rock Cycle Lab serves as a brief review of the rock cycle through a problem that students tackle together. They're given a few minutes to read and write out their responses before the mini-lab is introduced.
The mini-lab itself consists of a few different ingredients with which students will model the steps of the rock cycle. It's fun, engaging, and the activity is actually fairly good at elucidating the steps of the rock cycle in real time (in the past few times this has been taught, many students have referenced this lesson as one that helps them remember the steps). This is also a way for students to demonstrate the steps in real time. Much of the temporal challenge with the rock cycle and visualizations of it is that it is often too slow - it's hard to see or picture something on a timeframe in the millions of years. This allows students, in the context of just a few minutes, to see how one step transfers into another, and how the method of formation, despite similar "ingredients," ultimately determines the type of rock it will become. Logistically, refer to the Rock Cycle Lab for directions, which are fairly straightforward, but as an aside, it definitely helps to have the ingredients and materials pre-sorted into groups and ready, to both save time and prevent any classroom messes from happening.

After completing the lab, students will continue their group work in answering the Discussion questions in laboratory groups [Note: They should clean up first!]. The Discussion, similar to the previous day's activity (and how they'll be assessed on the state exam), asks them to summarize the steps of the rock cycle as demonstrated in the lab. Some elaborative questions ask students to differentiate between different rock "states," as well as to think about how the model might be improved, or might not fully represent the rock cycle in true fashion. It's essential and important here that students use the correct vocabulary, so I often have them utilize their Earth Science Reference Tables [ESRT] (look on page six for a rock cycle diagram) to facilitate this process.
Since they're still cementing the ideas together in their brains, this visual anchor has been extremely helpful for them during this step. Again, definitely allow enough time for clean up (it helps if one student in the group is the designated 'Materials Manager' or responsible for actually cleaning up the lab area). I think a great way to make sure this happens is to have a hard stop in your lesson - regardless of what is happening, how engaged children are, or how much learning is taking place, everything needs to grind to a halt and clean up needs to begin. Since this lab is a bit messy, absolutely allocate some extra time to allow your room to get back in order (which is especially important for shared classrooms like the one I teach in!). Students take the Exit Ticket (daily assessment) for the day before we go over it together as a class. Before students are dismissed, one or two students are called on to summarize the learning for the day ("What are the three major types of rocks?" or "Tell me about how metamorphic rocks are formed..."). Students also have an 'Exit Ticket Tracker,' which is nothing more than a simple piece of paper with column headings for 'Date,' 'Lesson' (all lessons are titled with Unit.Lesson as a format, like this one, which is 1.6 - 1st Unit, 6th Lesson), and 'Score'. I collect these at the end of each unit as a summative grade. I also periodically collect exit tickets to determine where students are at, and where any content or learning gaps are.
Azure Stack Fundamentals (Series 02)

In this series of documentation, we will try to understand more about the Azure Stack technical design. The internal foundation of Azure Stack is Windows Server 2016 technology, which allows Azure Stack to build cloud-inspired infrastructure:

Azure Resource Manager
Storage Spaces Direct (S2D)

Azure Resource Manager: Azure Resource Manager enables you to work with the resources in your solution as a group. You can deploy, update, or delete all the resources for your solution in a single, coordinated operation. ARM is a consistent management layer that captures resources, dependencies, inputs, and outputs as an idempotent deployment in a JSON file called an ARM template. These templates give you the ability to work with different environments such as testing, staging, and production. The goal is that once the template is designed, it can be run on each Azure-based cloud platform, including Azure Stack. Using ARM, you can manage subscriptions and RBAC, and define the gallery, metric, and usage data, too. There are some terms which you need to be aware of while working with ARM.

Resource: A resource is a manageable item available in Azure. Virtual Machine, Storage, Database, etc. are manageable resources in Azure.
Resource group: A container of resources that fit together within the service.
Resource Provider: A resource provider is a service that can be consumed within Azure.
Resource Manager Template: A resource manager template is the definition of a specific service.
Declarative Syntax: Syntax that lets you state "Here is what I intend to create" without having to write the sequence of programming commands to create it. The Resource Manager template is an example of declarative syntax.

To create your own ARM templates, you can do so in different ways and modify them based on your requirements.
Visual Studio templates
Quick Start templates on GitHub
Azure ARM templates

Reference Link: ARM Overview

VxLAN (Virtual Networking)

Microsoft introduced Software Defined Networking (SDN) and the NVGRE (Network Virtualization using Generic Routing Encapsulation) technology with Windows Server 2012. Hyper-V Network Virtualization supports NVGRE as the mechanism to virtualize IP addresses. In NVGRE, the virtual machine's packet is encapsulated inside another packet. VxLAN comes as the new SDNv2 protocol; it is RFC compliant and is supported by most network hardware vendors by default. The Virtual eXtensible Local Area Network (VxLAN) RFC 7348 protocol has been widely adopted in the marketplace, with support from vendors such as Cisco, Brocade, Arista, Dell, and HP. The VxLAN protocol uses UDP as the transport.

Reference Link: Network Virtualization

Nano Server offers a minimal-footprint, headless version of Windows Server 2016. It completely excludes the graphical user interface, which means that it is quite small, headless, and easy to handle regarding updates and security fixes, but it doesn't provide the GUI expected by customers of Windows Server.

Storage Spaces Direct: Storage Spaces and Scale-Out File Server were technologies that came with Windows Server 2012. The lack of stability in the initial versions and the issues with the underlying hardware made for a bad phase. The general concept was a shared storage setup using JBODs controlled from Windows Server 2012 Storage Spaces servers, and a magic Scale-Out File Server cluster that acted as the single point of contact for storage. With Windows Server 2016, the design is quite different, and the concept relies on a shared-nothing model, even with locally attached storage.

Reference Link: Storage Spaces Direct

In the upcoming articles, we will continue with the technical design. Happy learning, and I look forward to your feedback here.
Markdown is a powerful conversational tool for developers. Have you ever tried to copy text out of a communication app, only to find it doesn't format well in another text editor? Have you ever started to write your documentation in a word processor, migrated to a new platform, and found it doesn't adapt? In this article, we'll learn how to improve our writing skills so we don't fall into those traps.

Developers can use Markdown to share their ideas across multiple applications easily, clearly, and in most cases platform-independently. In this post, I will demonstrate how to communicate effectively with co-workers using Markdown. We can apply these strategies in daily chat using Slack or Discord, or in Git repositories like GitHub, Bitbucket, or GitLab. It is important to be clear and to contribute your ideas using well-written messages.

We will start with some recommendations to use in your Git repositories.

Suggestions

You can suggest a specific change to one line or multiple lines of code when reviewing a pull request using Markdown suggestions. The merge request author can then apply these suggestions with a click. This action generates a commit in the merge request, authored by the user who suggested the changes. In GitHub, to select a multi-line code block, you can either:

- click and hold to the right of a line number, drag, and release the mouse when you've reached the last line of the desired selection; or
- click on a line number, hold Shift, click on a second line number, and click the "+" button to the right of the second line number.

Once you've selected the code block, click the diff icon and edit the text within the suggestion block.

Mermaid

With Mermaid, you can easily create diagrams, sequences, Gantt charts, and more from a textual description.
The purpose of Mermaid is to help with visualizing documentation and to help it catch up with development. The documentation of the textual description can be found at https://mermaid-js.github.io/mermaid. You can try it and find more examples in the Mermaid Live Editor.

Task lists

Task lists in issues, comments, and pull request descriptions are incredibly useful for project coordination and keeping track of important items. A task list item consists of a minus sign (-), followed by a left bracket ([), either a whitespace character or the letter x (in lowercase or uppercase), and then a right bracket (]). When rendered, the task list item is replaced with a semantic checkbox element; in HTML output, this would be an <input type="checkbox"> element. If the character between the brackets is a whitespace character, the checkbox is unchecked; otherwise, the checkbox is checked.

Maps

You can create interactive maps in your Markdown using GeoJSON syntax. GeoJSON is an open standard file format for representing map data. You can find more information in the GeoJSON specification.

Titles and Headers

If you want to create a title or header, add a hashtag (#) at the beginning of the line. As you add more hashtags, the headers become smaller. Three dashes create a horizontal line, which is good for separating different sections of your document.

To emphasize something, add asterisks at the beginning and end of the word or phrase. One set means you are italicizing; two sets mean you are bolding.

To create a blockquote, add a greater-than sign (>) at the beginning of the line.

You can create an ordered list by numbering each item in the list. But instead of using one, two, three, and so on, I have a trick for you: replace them all with ones. Markdown renumbers the list when it renders, so if you change the order of your items the numbering stays correct.

You can add emojis by adding colons at the beginning and end of the emoji name. For example, :alarm_clock::pig::airplane: When Pigs Fly renders as ⏰ 🐷 ✈️ When Pigs Fly.
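Putting the header, emphasis, blockquote, list, task-list, and emoji syntax above together (exact rendering varies slightly by platform):

```markdown
# Title
## Smaller header

---

This is *italicized* and this is **bolded**.

> This is a blockquote.

1. First item
1. Second item (written as "1.", rendered as "2.")
1. Third item

- [x] Write the draft
- [ ] Review the draft

:alarm_clock: :pig: :airplane: When Pigs Fly
```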
To add a link in your text, surround the word or phrase that you want to hyperlink with brackets, then add the link between parentheses.

If you want to add some code to your text, you can add a back-tick at the beginning and end of the code and that will make it look all cody. For block code and syntax highlighting, add three back-ticks at the beginning and end of the code. You can specify what language you are using.

Footnotes allow you to add notes and references without cluttering the body of the document. When you create a footnote, a superscript number with a link appears where you added the footnote reference. Readers can click the link to jump to the content of the footnote at the bottom of the page. To create a footnote reference, add a caret and an identifier inside brackets ([^1]). Identifiers can be numbers or words, but they can't contain spaces or tabs. Identifiers only correlate the footnote reference with the footnote itself; in the output, footnotes are numbered sequentially. Add the footnote itself using another caret and identifier inside brackets, followed by a colon and the text ([^1]: This text is inside a footnote.). You don't have to put footnotes at the end of the document. You can put them anywhere except inside other elements like lists, blockquotes, and tables.

You can build tables to organize information in comments, issues, pull requests, and wikis.

- The first line contains the headers, separated by "pipes" (|).
- The second line separates the headers from the cells:
  - The cells can contain only empty spaces, hyphens, and (optionally) colons for horizontal alignment.
  - Each cell must contain at least one hyphen, but adding more hyphens to a cell does not change the cell's rendering.
  - Any content other than hyphens, whitespace, or colons is not allowed.
- The third, and any following lines, contain the cell values:
  - The cell sizes don't have to match each other.
  - They are flexible but must be separated by pipes (|).
  - You can have blank cells.
- Column widths are calculated dynamically based on the content of the cells.

Additionally, you can choose the alignment of text in columns by adding colons (:) to the sides of the "dash" lines in the second row.

To add an image, add an exclamation mark (!), followed by alt text in brackets, and the path or URL to the image asset in parentheses. You can optionally add a title in quotation marks after the path or URL.

See, Markdown isn't that intimidating. If you have all these ideas at your disposal, you'll be off to the races, writing fast and efficiently to communicate with your colleagues. Because of the reality that we live in today, developers must devise clever ways to communicate. Learning how to communicate better with your colleagues is important to creating a remote-friendly company or organization.
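As a closing sketch of the table and image syntax described above (the cell contents and image path are made-up placeholders):

```markdown
| Name  | Role      | Stars |
| :---- | :-------: | ----: |
| Ana   | Developer |    42 |
| Bruno | Designer  |     7 |

![Team logo](images/logo.png "Optional title")
```

The colons in the second row left-align the first column, center the second, and right-align the third.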
Responsive website design and front end development for a London based actress

Carla-Marie Metcalfe wanted a website to act as a hub for all of the acting collateral a potential agent might want to see. This included photos (headshots and production stills), sound (voice reel), video (show reel, monologues) and contact details. She also wanted the website to be built in a content management system (CMS) that she could easily update with new content when necessary. I was tasked with creating the website design and setting it up in a CMS.

What I did

I began by interviewing Carla-Marie to determine her needs for the project, as well as researching other actors' websites. Common features of those websites included the ability to store and display photos, to play video and sound (such as a voice reel), and to display contact details. From here I was able to create a website architecture and site map for her. Once this stage was signed off I created paper wireframes to test the flow of her website. Once the wireframes were created I began work on determining the look and feel of the website. This included choosing the typography, colours and photographs to be used. The next step was determining how I would turn my visual designs into a working website within a CMS. I explored a variety of options, but decided on WordPress, primarily for its use of themes but also because of its popularity among developers and the abundance of forum resources. I set up the domain using 1and1 and created the database using MySQL in order to install and run WordPress. I decided to find a fairly blank theme that I could then customise using custom HTML and CSS. I was then able to create the other features, such as displaying photos, using a WordPress plugin. To add video and sound, I added widgets from SoundCloud and YouTube. Since launching the website, Carla-Marie has been able to update her website freely using the plugins and widgets.
She has been able to point potential agents to her site to further demonstrate her abilities. Because she has been able to show her headshots, production stills, voice reel and show reel, she has garnered lots of interest from potential agents, and has since been signed by Actorshop, a BAFTA award-winning agency. If I were to do this project again, I would approach it slightly differently. First of all, I would attempt to interview some agents to discover what their needs are from an actor's website and find out what problems they have found with such sites in the past. I would also attempt to validate my wireframes with them before moving on to the visual designs. In terms of the development of the CMS, I would set up a better workflow: a local site, a staging/test site, and a live site. This would mean I could use a tool like GitHub to version control the changes I make. Although it is a small website, this is good practice and does no harm in helping me learn more about website development.
Facebook Developer App Secret

A Facebook app is needed so that your site has permission to retrieve data from Facebook, for example to display share counts next to your Facebook icon in a social media plugin, or to enable Facebook Login. With the app ID, you can send API requests to Facebook for data; the app secret authenticates those requests and can be used to decode encrypted data, so it is a piece of sensitive data and must be treated as such. Facebook for Developers provides tools and information to help you with the app development process, covering AI, business tools, gaming, open source, publishing, social hardware, social integration, and virtual reality, along with global programs to educate and connect developers.

Follow these steps to create a Facebook app and obtain its app ID and app secret:

1. Go to the Facebook for Developers page and log in with your Facebook account.
2. If you are not yet registered as a Facebook developer, follow the steps in the registration wizard to register and gain access to the app development tools. If you are already registered, skip this step.
3. At the right corner of the top navigation bar, click the My Apps menu and select Add New App (on older layouts, click the Apps button in the top title bar and then Create New App).
4. Enter the display name for the app (you can use your website name) and a contact email, then click Create App ID. If prompted to choose a category, you can choose "App for Pages" for a website.
5. You will be redirected to the new app's "Add a Product" dashboard. Click Settings > Basic in the left navigation.
6. On this page, make a note of your app ID and app secret. The app ID is shown directly; to reveal the app secret, click Show next to it and enter your Facebook account password. Both are created automatically with the app.

If you have already integrated Facebook into your app or website through the Facebook for Developers page, you will see it listed under My Apps; scroll down to browse the list or use the search field at the top to find it quickly. No need to follow the remaining steps in that case.

Copy the app ID and app secret and paste them wherever your software requires them, for example into the Facebook App ID and Facebook App Secret options in the social share section of a plugin's settings. The same ID and secret can be used for a Facebook Connect website or an app, and many plugins (auto-posting tools, for example) require both values before they can connect. In my case, I was configuring a Facebook Albums plugin, and it worked flawlessly after pasting the values into the plugin settings.

Because the app secret is sensitive, store it carefully. In ASP.NET Core, the Secret Manager tool stores sensitive data during development; app secrets are kept in a separate location from the project tree and can be associated with a specific project or shared across several projects. When deploying the site, you need to revisit the Facebook Login setup page and register the new public URI. If your app requires the manage_pages and publish_pages permissions, you will need to request them through app review; adding that setting means that, the first time users log in, they will automatically be asked to grant those permissions to your app.
Research:Language switching behavior on Wikipedia Wikipedia is decidedly multilingual. Many concepts have corresponding articles in many languages. While these articles sometimes might be translations (e.g., via Content Translation), oftentimes they contain additional content or varying perspectives on a given topic. Readers can easily access this content via the interlanguage links on the sidebar for a given article, and, while certain readers only ever see the content that exists in their native language, many readers do take advantage of these varying perspectives and view content in multiple languages. Anecdotally through conversations with readers and feedback related to the Universal Language Selector , a variety of reasons for language switching have been noted: reading about a topic in a more comfortable language, looking at how different cultures write about a concept, switching to a language that the reader believes will have more extensive content, and learning a language or testing one's skills. This project focuses on the following question: for what types of articles do readers switch languages? The hope is that by identifying classes of articles where readers often switch, this might indicate that these articles have gaps in content, maybe should be prioritized for content translation or section recommendation, or should be surfaced more strongly as providing additional context to the reader. Article types could be related to categories, content, the structure of the article, etc. The goal of this project was to identify when a reader switched languages for a given article. Theoretically, there are several ways this could be done: - Examine the referer data in the webrequest table. When a page view is associated with one project and contains a referer from another project, this might be evidence of a language switch. 
This method is used for various analyses (evaluation of compact language links; interlanguage navigation table) and while it likely works quite well for generating aggregate numbers, we rule it out for the following reasons:
- With modern browsers and HTTPS, when the domain changes (e.g., from "de.wikipedia.org" to "en.wikipedia.org"), everything but the domain is removed from the referer. It is therefore not easily possible to determine whether the previous page was the same article in another language or a page like the Main Page or a user page.
- The exception is IE browsers, and from these and some manual checking, we can see that only about 60% of switches between languages are for the same article. This method would therefore result in a large number of false positives.
- Record switches via EventLogging on the interlanguage links.
  - This has happened in the past via this schema, but it is currently not active. Future projects could explore reintroducing this logging, but we preferred to work from existing data sources at least at this stage.
- Reconstruct reader sessions and record when multiple projects are viewed in the same session.
  - This approach was not taken for the same reason mentioned above: the presence of two different language projects does not actually indicate that the user chose to read an article in two different languages.
- Reconstruct reader sessions, associate all page views with their Wikidata concepts, and identify when the same Wikidata concept is viewed on multiple projects.
  - This is the approach we took, as described below. It is the most complex, but it is also the most exact and allows us to distinguish between simple co-occurrence of language projects and actual language switches.

The following steps were taken to build the dataset of language switches:
- Collect reader page views across all of the Wikipedia languages and associate each page with its corresponding Wikidata ID.
This will be used for identifying when a reader views the same article in multiple languages.
  - See this Phabricator task for documentation of how this mapping was generated efficiently.
- Associate each page view with a device via a hash of the user-agent and client IP.
  - See this analysis of the appropriateness of this method for reconstructing reader sessions.
- Reconstruct the sessions associated with each device hash for a given day -- i.e., order all page views by device hash and timestamp.
- For each session, determine whether multiple projects were viewed. If so:
  - For each article viewed, determine whether an article with the same Wikidata ID on a different project was viewed at a later point.
  - Record each <from-language>, <to-language> pair for the Wikidata concept.

Policy, Ethics and Human Subjects Research

At this stage, this research is based solely on an analysis of logs. Before any data is publicly released, it will go through a privacy/security review.
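The session-reconstruction steps described above can be sketched as a small in-memory procedure. This is only an illustration of the logic, not the production pipeline; the field names (device, ts, project, qid) are assumptions for the sketch, not the actual webrequest schema.

```python
from collections import defaultdict

def find_language_switches(pageviews):
    """Return (wikidata_id, from_project, to_project) triples.

    pageviews: iterable of dicts with illustrative fields:
      device  - hash of user-agent + client IP (the device identifier)
      ts      - timestamp of the page view
      project - e.g. 'en.wikipedia'
      qid     - Wikidata ID of the viewed page
    """
    # Group page views by device hash to reconstruct sessions.
    sessions = defaultdict(list)
    for pv in pageviews:
        sessions[pv["device"]].append(pv)

    switches = []
    for views in sessions.values():
        # Order the session by timestamp.
        views.sort(key=lambda pv: pv["ts"])
        # A single project in the session means no switch is possible.
        if len({pv["project"] for pv in views}) < 2:
            continue
        # Same Wikidata concept viewed later on a different project?
        for i, a in enumerate(views):
            for b in views[i + 1:]:
                if a["qid"] == b["qid"] and a["project"] != b["project"]:
                    switches.append((a["qid"], a["project"], b["project"]))
    return switches
```

Note how this distinguishes a genuine switch (same qid on two projects) from mere co-occurrence of two language projects in one session, which is exactly why this approach was preferred over the simpler session-based methods.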
There, that was a simple blog post to write. Well, some testers think it's that simple, but there's a more useful approach. To help remember it, use the RIMGEA testing mnemonic coined by Dr. Cem Kaner: Replicate it, Isolate it, Maximize it, Generalize it, Externalize it, And Say it Clearly and Dispassionately.

1. Replicate It
Is the bug reproducible? If it's a typo on the welcome page, that's an easy one. Otherwise, make sure you can reproduce the issue. Is the login screen totally broken, or is it because you were using a user name of "Mr Tester%ForeignChæracters"? Making sure it can be reproduced, with the details nailed down, can stop the ping-pong of "works on my machine" when the developer tries to reproduce your broken login bug using a user name of "Fred".

2. Isolate It
Having reproduced the problem, can you find a way to make it happen in fewer steps? Does it really need 15 steps, or can it be done in 3?

3. Maximize It
Can it be made more serious? "Login screen accepts special characters" doesn't sound like a serious bug, but "system allows user to create names that delete the database" is one that might make people pay more attention.

4. Generalize It
Go on a few steps. "Error message for an invalid image upload is misspelt" sounds trivial. The developer fixes it and makes a new release. But what if you then find that, after dismissing the error message, the system is frozen? If you'd gone on a few more steps when you first found this bug, that cycle would not have been wasted.

5. Externalize It
Does it exist elsewhere? Can it be more general? Try to avoid the cycle where Bug A is fixed on Screen 1, you test the new version, and then find a variation of the bug on Screen 2.

6. And Say It (Clearly and Dispassionately)
Log the bug. If you need info on how to report bugs effectively, read Kaner's "Bug Advocacy: How to Win Friends, and SToMp BUGs".

7. Remember It!
This is my addition.
After working on a range of apps, you’ll soon have a lot of different things to try out. And when the new project has an alert dialog, you’ll remember to scroll the screen first before making the dialog appear to check that it appears on-screen and not off. What steps do you follow when you find a bug? Let me know in the comments.
How to measure the latency of globally load balanced tagging server deployments?

I have several globally distributed tagging server deployments in GCP Cloud Run. The Cloud Run deployments are reachable through serverless network endpoint groups, and the traffic is balanced by a classic HTTPS Application Load Balancer. I now want to measure the RTT from several source locations (e.g. India, China, etc.) to the tagging servers, i.e. the time a client's browser needs to get a result from the tagging server. I used dotcom-monitor.com to measure the latency to the IP of the LB, but this obviously only reflects the response of the LB and does not take into account the time that the backend needs for processing.

Answer:

In this blog written by gauravmadan, he has explained in detail how to calculate RTT for backends deployed on public clouds like GCP. I think this will answer your queries. I hope the response below addresses your query; if you still have questions, reply here.

There are multiple things which need to be considered while calculating round trip time (RTT):
- Application architecture
- Cold start duration
- Server response time

There will be other factors as well, but for now let's concentrate on these, because they are directly related to the issue you are asking about. As you mentioned, monitoring the load balancer IP alone is not going to help us find the RTT properly. To calculate the RTT properly we need to add a few functions to our code which will assist in collecting response timestamps, from which we can calculate the correct RTT. If you already have response logs generated for your Cloud Run deployment, you can compare the difference between the timestamps to calculate the exact RTT.
If you are calculating RTT for fixed locations, you can use GCP's Network Intelligence Center dashboard to collect insights about various geo locations, as mentioned in this blog written by Gauravmadan. The question mentions that you are using a classic HTTPS load balancer instead of a global HTTPS load balancer. The classic load balancer doesn't support geo load balancing, so there is a chance that the latency will be much higher. If you are planning an application with a global presence, it is suggested to use global HTTPS load balancers over the classic HTTPS load balancer.

Thanks for your answer! A few remarks: the Network Intelligence Center is nice, but it is only intended for Compute Engine instances; as I'm running Cloud Run, it seems it is not helpful for me. Also, I do not have control over the tagging servers' code. Second, I'm using the classic HTTPS load balancer, but in Premium Tier, so I'm assuming it works globally with geo load balancing. So what could I do? Maybe deploy a test container with URL routing and then use a monitoring service?
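One way to separate backend processing time from pure network latency, without touching the tagging server code, is to use client-side timing milestones like the ones curl reports (time_connect, time_starttransfer, time_total). The helper below is only a sketch of that arithmetic; the numbers in the example are illustrative, not measurements.

```python
def split_latency(t_connect, t_first_byte, t_total):
    """Split a request's total time into rough components.

    Arguments are cumulative seconds from the start of the request, as
    reported e.g. by:
        curl -o /dev/null -s -w '%{time_connect} %{time_starttransfer} %{time_total}\n' <URL>
    """
    return {
        "connection_setup": t_connect,        # DNS + TCP connect to the LB edge
        "backend": t_first_byte - t_connect,  # approximates LB + Cloud Run processing (TTFB)
        "transfer": t_total - t_first_byte,   # response body download
    }

# Illustrative numbers: connected after 80 ms, first byte after 230 ms, done after 250 ms.
timings = split_latency(0.08, 0.23, 0.25)
```

Running such a probe from monitoring agents in the source regions of interest (India, China, etc.) gives a per-region breakdown, where the "backend" component is the part the LB-only ping test was missing.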
Introduction to packaging Java

Packaging Java libraries and applications in Fedora has been my daily bread for almost a year now. I realized now is the time to share some of my thoughts on the matter and perhaps share a few ideas that upstream developers might find useful when dealing with Linux distributions. This endeavour is going to be split into several posts, because there are several sub-topics I want to write about. Most of this is going to be based on the talk I did at FOSDEM 2011. Originally I was hoping to just post the video, but it seems to be taking more time than I expected.

If you are not entirely familiar with the status of Java on Linux systems, it would be a good idea to first read a great article by Thierry Carrez called The real problem with Java in Linux distros. A short quote from that blog:

The problem is that Java open source upstream projects do not really release code. Their main artifact is a complete binary distribution, a bundle including their compiled code and a set of third-party libraries they rely on.

There is no simple solution, and my suggestions are only mid-term workarounds and ways to make each other's (upstream ↔ downstream) lives easier. Sometimes I am quite terse in my suggestions, but if need be I'll expand them later on.

Part 1: General rules of engagement

Today I am going to focus on general rules that apply to all Java projects wishing to be packaged in Linux distributions:
- Making source releases
- Handling dependencies
- Bugfix releases

For full understanding, here is a short summary of the general requirements for packages to be added to most Linux distributions:
- All packages have to be built from source
- No bundled dependencies are used for building/running
- There is a single version of each library that all packages use

There are a lot of reasons for these rules and they have been flogged to death multiple times in various places. It mostly boils down to severe maintenance and security problems when these rules are not followed.
Making source releases

As I mentioned previously, most Linux distributions rebuild packages from source even when there is an upstream release that is binary compatible. To do this we obviously need sources. Unfortunately quite a few (mostly Maven) projects don't do source release tarballs, and some projects provide source releases without build scripts (build.xml or pom.xml files). The most notable examples are Apache Maven plugins. For each and every update of one of these plugins we have to check out the source from the upstream repository and generate the tarball ourselves. All projects using the Maven build system can make packagers' lives easier simply by adding an assembly snippet to their pom.xml files that creates -project.zip/tar.gz files containing all the files needed to rebuild the package from source. I have no real advice for projects using Ant for now, but I'll summarise them next time.

Handling dependencies

I have a feeling that most Java projects don't spend too much time thinking about dependencies. This should change, so here are a few things to think about when adding new dependencies to your project.

Verify that the dependency isn't provided by the JVM

Often packages contain unnecessary dependencies that are provided by all recent JVMs. Think twice if you really need another XML parser.

Try to pick dependencies from major projects

Major projects (apache-commons libraries, eclipse, etc.) are much more likely to be packaged and supported properly in Linux distributions. If you use some unknown small library, packagers will have to package that first, and this can sometimes lead to such frustrating dependency chains that they will give up before packaging your software.

Do NOT patch your dependencies

Sometimes a project A does almost exactly what you want, but not quite... So you patch it and ship it with your project B as a dependency. This will cause problems for Linux distributions because you have basically forked the original project A.
What you should do instead is work with the developers of project A to add the features you need or fix those pesky bugs.

Bugfix releases

Every software project has bugs, so sooner or later you will have to do a bugfix release. As always, there are certain rules you should try to uphold when doing bugfix releases.

Use correct version numbers

This depends on your versioning scheme. I'll assume you are using standard X.Y.Z versions for your releases. Changes in Z are the smallest released changes of your project. They should mostly contain only bugfixes, plus unobtrusive and simple feature additions if necessary. If you want to add bigger features, you should change the Y part of the version. Bugfix releases have to be backwards compatible at all times. No API changes are allowed.

No changes in dependencies

You should not change dependencies or add new ones in bugfix releases. Even updating a dependency to a new version can cause a massive recursive need for updates or new dependencies. The only time it's acceptable to change or add a dependency in a bugfix release is when the new dependency is required to fix the bug. An excellent example of how NOT to do things was the Apache Maven update from 3.0 to 3.0.1. This update changed the requirement from Aether 1.7 to Aether 1.8. Aether 1.8 had a new dependency on async-http-client, which in turn depends on netty, jetty 7.x and more libraries. So what should have been a simple bugfix update turned into the need for a major update of one package and two new package additions. If this update had contained security fixes, it would have caused serious problems to resolve in a timely manner.

To summarise:
- Create source releases containing build scripts
- Think about your dependencies carefully
- Handle micro releases gracefully

Next time I'll look into some Ant and Maven specifics that are causing problems for packagers and how to resolve them in your projects.
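For reference, the pom.xml snippet mentioned under "Making source releases" can be as simple as an execution of the maven-assembly-plugin with its built-in "project" descriptor, which bundles the whole project tree, build scripts included. This is a sketch; pin the plugin version appropriate to your build rather than relying on the default:

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-assembly-plugin</artifactId>
      <executions>
        <execution>
          <id>make-project-assembly</id>
          <phase>package</phase>
          <goals>
            <goal>single</goal>
          </goals>
          <configuration>
            <descriptorRefs>
              <!-- built-in descriptor: archives the full project directory -->
              <descriptorRef>project</descriptorRef>
            </descriptorRefs>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```

With this in place, `mvn package` should also produce `target/<name>-<version>-project.zip` (and .tar.gz/.tar.bz2) containing everything needed to rebuild the package from source.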
Materials: “Lifesavers” made of paper (prepared in advance), large piece of paper, markers Prepare lifesavers in advance by cutting them out of a large piece of paper (see photograph). Write out the names of different social groups on them: - the army - multinational corporations (e.g. McDonalds, Coca-Cola, etc.) - peace workers - senior citizens - people with disabilities - black people - the media - the government Each social group appears on two lifesavers. An alternative to the lifesavers is to write the names of social groups in big letters on an A4 piece of paper (that participants can tape to their chests using masking tape, sticky tape or a clothes pin). Step 1: Select groups. Participants have to select one of the social groups and take the corresponding lifesaver. Step 2: Describe the task. Describe the task slowly and clearly: “Participants who represent the same group are a pair. Each pair has one vote. Pairs cannot be separated. You are all travelling by plane to a conference. The topic of the conference is conflict and violence in the world and possible responses. At the conference you will be representing the group you have selected. Suddenly, the pilot informs you that due to technical difficulties, everyone has to evacuate the plane within the hour. However, there aren’t enough parachutes for all the passengers. Three pairs will be left without parachutes. The pilot has their own personal parachute and isn’t willing to give it to anyone else. Each pair has the task of writing out the reasons why they should be given a parachute, thus ensuring that they will continue their journey to the conference and their work on dealing with conflict and violence in the world. Decide which one of you two will be the spokesperson. You have five minutes for this task.” Step 3: Presentations. Spokespeople present the reasons why they (their group) should be saved. They have three minutes for their presentation. Step 4: Voting in pairs. 
Pairs have five minutes to discuss who should be given a parachute and to choose five groups to cast their votes for (they cannot vote for themselves). Voting is done in secret: pairs write out a list of five groups on a piece of paper. Step 5: Voting. A list of all the groups that are on the airplane is on the large piece of paper. Trainers tally the votes next to each group’s name. The three pairs with the fewest votes are not given parachutes and have to take off their lifesavers. If there is a draw, the pilot declares that she will not wait around for a decision and that they have five minutes to make up their minds, or else she will catapult herself out of the plane to save her own life and leave everyone else to their fate. Therefore, they have to vote again. Suggested questions for evaluating the exercise: - How did you select which group to represent? How satisfied were you with your choice? How difficult was it to justify your survival? - How did you feel when deciding who to cast a vote for? How did you decide who to cast a vote for? - How do the groups that weren’t chosen feel? What did you get out of/learn from this exercise?
/* This one contains flags used in the error information object.
   The flags are combined together by groups. */
var ForeignCodesFlags = {
    // General 1-bit (0)
    Ok: 0x000000,
    Success: 0x000000,
    Failure: 0x000001,

    // Where 2-bits (1,2)
    Cancel: 0x000002,      // Operation was forcibly cancelled - ignore results and continue if and as possible
    RemoteParty: 0x000000, // Remote party (callee) failed while performing the task
    Transport: 0x000004,   // Transport level problem occurred
    ProxyStub: 0x000006,   // Failure occurred at proxy or stub level (usually a generally unacceptable argument or something like this)

    // Reserved 1-bit (3)

    // General kind 4-bits (4-7) - general character of the error
    General: 0x000000,      // General error (no specific kind can be specified)
    NotFound: 0x000010,     // Something assumed/addressed/searched is not found.
    NotAllowed: 0x000020,   // Operation/task/execution (but NOT argument(s) type/content) is not allowed because of the state of the "where" part.
    Argument: 0x000030,     // Argument(s) cause problems - not acceptable/serializable etc.
    NotAvailable: 0x000040, // A required service/component/protocol is not available at this time or permanently. Only active components; resources are NotFound.
    Pending: 0x000060,      // Reported as an error so that callers not capable of waiting will treat it as an error, while others will know to wait (currently not used).
    AccessDenied: 0x000050, // Access was denied to a resource/service/component at the specified layer.
    Format: 0x000070,       // Format of a structure is incorrect.

    // Specific 16-bits (8-23); meaning may differ depending on Where this happens.
    // Errors 0001-00FF are reserved - define your codes outside this range.
    // The reservation is an attempt to define some set of well-known and frequently needed values.
    // Can be left 0 if unsure what to report.
    NoCode: 0x000000,

    KraftError: function (_success, _layer, _kind, _code) {
        var success = (_success == true);
        var layer = ((typeof _layer == "number") ? _layer :
            ((typeof this[_layer] == "number") ? this[_layer] : 0));
        var kind = ((typeof _kind == "number") ? _kind :
            ((typeof this[_kind] == "number") ? this[_kind] : 0));
        var code = ((typeof _code == "number") ? _code :
            ((typeof this[_code] == "number") ? this[_code] : 0));
        var err = 0x000000 | ((success) ? this.Ok : this.Failure);
        err |= ((layer >= 2 && layer <= 6) ? layer : 0);
        err |= ((kind >= 0x10 && kind <= 0xF0) ? kind : 0);
        err |= ((code >= 0x000100 && code <= 0xFFFF00) ? code : 0);
        return err;
    },
    IsSuccess: function (_err) {
        var err = ((typeof _err == "number") ? _err : parseInt(_err, 10));
        if (typeof err == "number" && !isNaN(err)) {
            return !(err & this.Failure);
        }
        return false;
    },
    IsFailure: function (_err) {
        var err = ((typeof _err == "number") ? _err : parseInt(_err, 10));
        if (typeof err == "number" && !isNaN(err)) {
            return !!(err & this.Failure);
        }
        return true;
    },
    Origin: function (err) { return (err & 0x000006); },
    OriginName: function (err) {
        var o = this.Origin(err);
        for (var k in this) {
            if (this[k] === o) return k;
        }
        return "unknown";
    },
    Kind: function (err) { return (err & 0x0000F0); },
    KindName: function (err) {
        var o = this.Kind(err);
        for (var k in this) {
            if (this[k] === o) return k;
        }
        return "unknown";
    }
};
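To see how the bit groups above compose, here is a minimal, self-contained sketch (using only a few of the flags) that packs and then inspects an error value the same way KraftError does:

```javascript
// Bit layout: bit 0 = failure, bits 1-2 = origin ("where"), bits 4-7 = kind.
var Failure   = 0x000001;
var Transport = 0x000004; // origin: transport-level problem
var NotFound  = 0x000010; // kind: something was not found

// Pack a failed, transport-level "not found" error.
var err = Failure | Transport | NotFound; // 0x15

console.log(!!(err & Failure));  // true -> it is a failure
console.log(err & 0x000006);     // 4    -> origin is Transport
console.log(err & 0x0000F0);     // 16   -> kind is NotFound
```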
SVScore generates multiple VCFs per input file

Hi, Is it normal for SVScore to generate multiple VCFs per input? I thought that it generates one output VCF per input. Could you tell me why this is happening? I'm testing it on my laptop, which has 16 GB of memory and 4 cores. Thanks, Archana

Hi Archana, Could you share the command you used to call SVScore? If you're using split.pl, you should of course be seeing multiple input and output files, but otherwise you should only be getting one output file per input. Another possibility is that you're looking in the svscoretmp directory. While it runs, SVScore generates some temporary files in a subdirectory called svscoretmp, some of which are VCF files. But unless you're supplying the -d flag, SVScore should clean these up when it terminates. I'm also curious as to the filenames of the extra VCF output files you're seeing. If you could share those, it might help me diagnose the issue. Thanks! Liron

Hi Liron, Thanks for following up on this. Yes, I was looking into the svscoretmp tmp dir; below are the files I see:

lr_only1568938578.50622.sort.vcf
lr_only1568938578.50622.preprocess.bedpe
lr_only1568940523.90302.sort.vcf.gz
lr_only1568940523.90302.preprocess.bedpe
lr_only1568940642.22113.sort.vcf.gz
lr_only1568940642.22113.preprocess.bedpe
lr_only1568941052.03972.sort.vcf.gz
lr_only1568941052.03972.preprocess.bedpe
lr_only1568941202.04311.sort.vcf.gz
lr_only1568941202.04311.preprocess.bedpe
lr_only1568941202.04311.out.bedpe

The command I used to run SVScore was:

./svscore.pl -dv -o top10weighted -e refGene.exons.bed -f refGene.introns.bed -c whole_genome_SNVs.tsv.gz -i inputs/lr_only.vcf

Where are my finalized results, and how should I interpret these outputs? Thanks, Archana

Hi Archana, These are all temporary files and should be ignored. In fact, you can safely delete that entire directory. These files are only being kept because you are supplying the -d flag in your command, which runs SVScore in debug mode.
Your final output should be written to standard output (i.e. the shell). If you want it in a file, you can redirect it, like this: ./svscore.pl -dv -o top10weighted -e refGene.exons.bed -f refGene.introns.bed -c whole_genome_SNVs.tsv.gz -i inputs/lr_only.vcf > inputs/lr_only.svscore.vcf. Are you getting any output or errors at the command line when you run your command? If not, there is a different issue, and I'd recommend you send me a slice of your VCF. Liron

Great, thank you so much Liron. I didn't realize I have to redirect it to a file. I don't think I see any issues then. Do you have any recommendations for the -o flag? I'm going with the default. I would also love to hear about the easiest way I can prioritize outputs (any score cutoff you recommend or that has worked for you). Thanks a lot for the help. Archana
Glad that solved your problem! As for score cutoffs and -o suggestions, these may depend on your specific application. In our paper (https://academic.oup.com/bioinformatics/article/33/7/1083/2748212), most of our analyses were done using the top10weighted operation (the default), which worked well for our purposes. I definitely wouldn't recommend using a hard score cutoff, though that may depend on your specific application. Again, I'd recommend you check out the paper, where we used a variant's score percentile within our callset to predict its pathogenicity. Hope this helps! Liron

Thank you so much for the information. Archana
A regex tutorial will give you the knowledge you need to write effective regexes. This article will discuss the basic structure of a regex, common mistakes, syntax, and metacharacters. You will also learn how to use a regex to filter your data. After reading this article, you should feel comfortable using regex in your own projects. So, let’s get started! Here are some of the basics:

The basic structure of a regex

A regular expression consists of several elements, each with a special purpose. Their main function is to match strings in a particular order, and you can add or remove tokens using the regular expression syntax. For example, the regular expression cat|dog, which matches either cat or dog, will also find a match inside the string cat food. A capturing group bundles part of the pattern so that its match can be reused; it can contain any number of elements, as long as they appear in the same order.

The basic structure of a regex pattern is composed of atoms. An atom is one point within the regex pattern, and the simplest atom is a literal character. To group parts of a regex pattern into a larger atom, you use metacharacters. Metacharacters are characters that help group elements and form an atom; they include quantifiers, greedy quantifiers, logical OR, NOT characters, backreferences, and more.

When using features such as lookahead with non-ASCII text, you must pay attention to the character set. Initially, most regex libraries only supported ASCII character sets. With the introduction of Unicode, many modern regex engines now include some Unicode support, but this support is still uneven, so you should use a Unicode-compatible library. A regex matches when all of its atoms match, in order. Even though regex is a relatively simple pattern language, it has a huge number of possibilities.
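As an illustration of atoms and capturing groups (JavaScript syntax here, though the ideas are engine-independent):

```javascript
// Two capturing groups pick the year and month out of a larger string.
var m = "report-2023-07.txt".match(/(\d{4})-(\d{2})/);
console.log(m[0]); // "2023-07" - the whole match
console.log(m[1]); // "2023"    - first capturing group
console.log(m[2]); // "07"      - second capturing group
```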
Because of its comprehensibility, regex has become a popular way of identifying patterns and finding matching strings. A common example is an internet search: each occurrence of the pattern in the searched text produces a match - the first occurrence yields the first match, the second occurrence the second, and so on.

Regular expressions are a powerful tool for automating various tasks. If you know how to use them, you can automate editing and searching tasks in programs such as EditPad Pro, and these tools can help you write applications in various languages. Unfortunately, learning how to use them properly can be time-consuming. First, to avoid common pitfalls, look for a tutorial that explains the concepts behind regular expressions. Second, ensure you understand the importance of spaces when working with regular expressions. Spaces are significant in a pattern and need to be in the right places; stray spaces can silently change what the pattern matches and make it harder to understand. A good tutorial will demonstrate where spaces should and should not go. When you're ready, try applying these rules in your application and see if it works.

Finally, don't forget the importance of using the right regex engine. You'll want to ensure the regex engine supports Unicode characters. Atomic groups are another engine feature worth knowing: once an atomic group has found its first successful match (say, on bc), the engine will not backtrack into it, so you need to make sure the rest of the pattern can still match the rest of the string. Used well, atomic groups eliminate backtracking and can make your program perform much better in the long run.

A regexp is, at bottom, a string-matching algorithm: the pattern describes a set of strings, and the engine looks for a substring of the search string that fits the pattern. If the pattern matches, the regexp returns a match.
In a substitution, the first part of the regexp represents the search pattern, and the second part represents the replacement for whatever the first pattern matched. A regex consists of a set of smaller sub-expressions; the string "Friday" is a (trivial) example of a regex. By default, regex matching is case-sensitive, but it can be made case-insensitive with a modifier. A vertical bar denotes the alternation (OR) operator: for example, the regex four|floor accepts the string "four" or "floor".

A regex processor reads the string and translates the regular expression into an internal representation. The result is an algorithm that recognizes substrings matching the regular expression. It is based on the Thompson construction algorithm, which constructs a nondeterministic finite automaton; the automaton then recognizes substrings that match the regular expression. For example, the automaton for \d recognizes any single-digit number.

Regular expressions are constructed from a set of metacharacters and ordinary characters. Each element of a regex has a meaning, and regexes are made up of atoms; the simplest type of atom is a literal character. In more complicated regexes, metacharacters group the parts, and the combined pattern matches only if all of its atoms match.

If you are a beginner to regular expressions, it is important to know what metacharacters are. Metacharacters are special characters with more than one meaning, depending on the context and the regex engine. For example, the digit class (\d) matches a single digit, while a period (full stop) matches any single character. The "plus" sign, for example, is a one-or-more quantifier, while the "minus" sign denotes a range inside a bracketed set. The most common metacharacter is ".", which matches any single character; inside a bracketed set it loses this special meaning. A set can hold any characters, including digits, and many sets are predefined in Perl. You can find a list of these character classes at perlrecharclass.
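The alternation operator and the case-sensitivity modifier mentioned above look like this in JavaScript:

```javascript
// Alternation: four|floor accepts either word.
console.log(/^(four|floor)$/.test("four"));  // true
console.log(/^(four|floor)$/.test("floor")); // true
console.log(/^(four|floor)$/.test("flour")); // false

// Matching is case-sensitive by default; the i flag changes that.
console.log(/friday/.test("Friday"));  // false
console.log(/friday/i.test("Friday")); // true
```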
You can find more detailed information on the different classes in the "Bracketed Character Classes" section of the reference page. When writing regular expressions, make sure that you escape metacharacters. This will prevent your regex from causing any weird behavior. For instance, if you use a character such as ? or = in your regex without escaping it, you may get confusing errors or unintended matches. To help solve this kind of problem, it is helpful to understand the mechanism that governs regular expressions; that way, you can troubleshoot any problems that may arise.

To match a metacharacter literally, put a backslash before it: the backslash strips the character of its special meaning, so it is matched as a literal character. You can also use metacharacters to match a whole group of characters at once: \d matches a single digit, and combined with a quantifier it will match a number that contains more than one digit.

Default Unicode encoding

If you are familiar with HTML or CSS, then you've probably already worked with default Unicode encoding. The first thing you need to understand about Unicode is that it represents code points, such as letters and numbers. For example, U+0061 is an a without an accent, while U+00E0 is à, an a with a grave accent. Full Unicode support is missing or incomplete in many regex implementations, including parts of Perl, PCRE, Boost, and std::regex. While it's possible to use a character set that uses Unicode as a base, you should still be careful when using it in regular expressions. Unicode is a set of character codes defined by the Unicode Consortium; its goal is to make all human languages representable in software as uniformly as possible, and the standard has been implemented by many software vendors and is used in various settings.

The encoding value is also important. The value of a repeated capturing group is the one that was captured most recently.
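The escaping rule and the digit class can be demonstrated in a few lines (JavaScript syntax):

```javascript
// Unescaped, "." matches any character; escaped, it matches a literal dot.
console.log(/a.c/.test("abc"));  // true  - "." matches the "b"
console.log(/a\.c/.test("abc")); // false - a literal dot is required
console.log(/a\.c/.test("a.c")); // true

// \d matches one digit; \d+ matches a number with one or more digits.
console.log("order 1234".match(/\d/)[0]);  // "1"
console.log("order 1234".match(/\d+/)[0]); // "1234"
```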
If a later evaluation of the group is unsuccessful, the previously captured value is retained (in engines such as Java's). For example, the string "aba" matches the expression (a(b)?)+, leaving group two at "b". The difference between a capturing group and a non-capturing group is that groups beginning with (?: are purely non-capturing, whereas a named capturing group counts towards the total number of groups.
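The (a(b)?)+ example is worth running yourself, with one caveat: the retained-value behaviour described above is what Java does, whereas JavaScript resets captures inside a quantifier on every iteration, so the inner group ends up empty:

```javascript
// Java leaves group 2 at "b" after matching "aba" against (a(b)?)+;
// JavaScript clears captures at the start of each quantifier iteration.
var m = /(a(b)?)+/.exec("aba");
console.log(m[0]); // "aba"     - the whole match
console.log(m[1]); // "a"       - group 1, from the last iteration
console.log(m[2]); // undefined - group 2 did not match in the last iteration
```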
there is a bad apple in every truck load. i think this sums it up pretty well. however, if you are not the "lucky one" to get that bad apple, you shall be set for a long time. however, the question remains, are you going to be willing to use a 6 year old system in 2011? answer that question keeping in mind that the gap in technology between 2005 and 2011 will be greater than that between 1999 and 2005. speaking of which, i have an early 2000 desktop.. Dell. and after some upgrades - more ram, a new and faster hdd, an nvidia 5700 128mb - it is still being used every day and can even play some somewhat recent stuff. however, i doubt that my late 2004 Uniwill will be able to do that by 2011 (that is why i am ditching it in 2007). although this is an example with pc computers (and desky vs lappy), similar is true for the mac world. also, whoever said that macs actually get faster with software upgrades, that is only true if you keep it in perspective. if that were always true, a 133mhz powerpc would be running circles in os x vs the same powerpc running system 7. even more so, 2nd gen g3 imacs (the ones with the cd slot, not tray) would run OS X faster than OS 9. that is simply not true (yes, i speak from experience). perhaps if you add more ram to the system (say 256mb vs the 128) then it will, but here you're forced to upgrade hardware. keep in mind that what you said is only true on the latest gen systems. lets say you run os 9.7 on the 1.25ghz ibook, and then run 10.3 on the same config - clearly the 10.3 will be faster. as for your opposite trend on the x86 systems, also not very true. you clearly named windows. let me remind you that the NT platform (xp) is faster (assuming the system is recent enough with plenty of ram) than Win98 or even Me (9x). they promise the longhorn kernel will further improve this speed, but it will require quite high hardware to start off with in the first place. also, in linux, kernel 2.4 was slower than 2.6 on the same hardware (i speak from experience on this one). the bottom line being, there is only so much the software can do to improve the speed and efficiency of the hardware until it is just simply time to buy a new system.
How refactorable are AWS CDK applications?

I'm exploring how refactorable CDK applications are. Suppose I defined a custom construct (a stack) to create an EKS cluster. Let's call it EksStack. Ideally, I'd create the role to be associated with the cluster and the EKS cluster itself, as described by the following snippet (I'm using Scala instead of Java, so the snippets are going to be in Scala syntax):

class EksStack(scope: Construct, id: String, props: StackProps) extends Stack(scope, id, props) {
  private val role = new Role(this, "eks-role", RoleProps.builder()
    .description(...)
    .managedPolicies(...)
    .assumedBy(...)
    .build()
  )
  private val cluster = new Cluster(this, "eks-cluster", ClusterProps.builder()
    .version(...)
    .role(role)
    .defaultCapacityType(DefaultCapacityType.EC2)
    .build()
  )
}

When I synthesize the application, I can see that the generated template contains the definition of the VPC, together with the Elastic IPs, NATs, Internet Gateways, and so on. Now suppose that I want to refactor EksStack and have a different stack, say VpcStack, explicitly create the VPC:

class VpcStack(scope: Construct, id: String, props: StackProps) extends Stack(scope, id, props) {
  val vpc = new Vpc(this, VpcId, VpcProps.builder()
    .cidr(...)
    .enableDnsSupport(true)
    .enableDnsHostnames(true)
    .maxAzs(...)
    .build()
  )
}

Ideally, the cluster in EksStack would just use the reference to the VPC created by VpcStack, something like (note the new call to vpc() in the builder of cluster):

class EksStack(scope: Construct, id: String, props: StackProps, vpc: IVpc) extends Stack(scope, id, props) {
  private val role = new Role(this, "eks-role", RoleProps.builder()
    .description(...)
    .managedPolicies(...)
    .assumedBy(...)
    .build()
  )
  private val cluster = new Cluster(this, "eks-cluster", ClusterProps.builder()
    .version(...)
    .role(role)
    .vpc(vpc)
    .defaultCapacityType(DefaultCapacityType.EC2)
    .build()
  )
}

This obviously doesn't work, as CloudFormation would delete the VPC created by EksStack in favor of the one created by VpcStack. I read here and there and tried to add a retain policy in EksStack and to override the logical ID of the VPC in VpcStack, using the ID I originally saw in the CloudFormation template for EksStack:

val cfnVpc = cluster.getVpc.getNode.getDefaultChild.asInstanceOf[CfnVPC]
cfnVpc.applyRemovalPolicy(RemovalPolicy.RETAIN)

and

val cfnVpc = vpc.getNode.getDefaultChild.asInstanceOf[CfnVPC]
cfnVpc.overrideLogicalId("LogicalID")

and then retried the diff. Again, it seems that the VPC is deleted and re-created. Now, I saw that it is possible to migrate CloudFormation resources (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/refactor-stacks.html) using the "Import resources into stack" action. My question is: can I move the creation of a resource from one stack to another in CDK without re-creating it?

EDIT: To elaborate a bit on my problem, when I define the VPC in VpcStack, I'd like CDK to think that the resource was created by VpcStack instead of EksStack. Something like moving the definition of it from one stack to another without having CloudFormation delete the original one to re-create it. In my use case, I'd have a stack define a resource initially (either explicitly or implicitly, such as my VPC), but then, after a while, I might want to refactor my application, moving the creation of that resource into a dedicated stack. I'm trying to understand if this move always leads to the resource being re-created or if there's any way to avoid it.

I'm not sure if I understand the problem, but if you are trying to reference an existing resource you can use a context query (e.g. Vpc.fromLookup).
https://docs.aws.amazon.com/cdk/latest/guide/context.html

Additionally, if you would like to use the Vpc created from VpcStack inside of EksStack, you can output the vpc id from the VpcStack and use the context query in the eks stack that way. This is C# code but the principle is the same.

var myVpc = new Vpc(...);
new CfnOutput(this, "MyVpcIdOutput", new CfnOutputProps() {
    ExportName = "VpcIdOutput",
    Value = myVpc.VpcId
});

and then when you create the EksStack you can import the vpc id that you previously exported.

new EksStack(this, "MyCoolStack", new EksStackProps() {
    MyVpcId = Fn.ImportValue("VpcIdOutput")
});

where EksStackProps is

public class EksStackProps {
    public string MyVpcId { get; set; }
}

Thank you for your reply! I jotted down a little additional explanation in the question. If I understood well, the only thing I can do is reference the existing VPC from another stack. Hence, the VPC would still "belong" to EksStack (i.e. I'll see it in the Output tab in CloudFormation, for example). The more I think about it, the more it makes sense, anyway. May I ask you what's the behavior of the look-up if the VPC doesn't happen to exist? I investigated this for the Roles, and it seems the look-up never returns null if the resource doesn't exist: https://github.com/aws/aws-cdk/issues/9941

Oh okay, I think I get it. You can't move the 'creation' itself to another stack. But in any case, you don't need to delete an existing vpc. You can change your EksStack to just use Vpc.fromLookup. Your EksStack can only do one of two things: create a new VPC or use an existing one. It becomes your decision as the developer if you want the EksStack to be responsible for this or if you want to do it another way. To answer your other question, it will throw an exception during synth if it can't find the vpc in the target region/account.
I think when it comes to refactorability with CDK and CloudFormation in general, especially multi-stack configurations, there are a few principles to keep in mind:

1. The entire app should be able to be completely deleted and recreated. All data management is handled in the app; there are no manual processes that need to occur.

2. Don't always rely on automatic inter-stack dependency management using stack exports.

I like to classify CloudFormation dependencies into two categories: hard and soft. A hard dependency means that you cannot delete the resource, because the things using it will prevent it from happening. Soft dependencies are the opposite: the resource could be deleted and recreated without issue even though something else is using it. Hard dependency examples: VPC, Subnets. Soft dependency examples: Topic/Queue/Role.

You'll have a better time passing soft stack dependencies as stack parameters of SSM Parameter type, because you'll be able to update the stack providing the dependencies independently of those using it. Whereas you get into a deadlock when using the default stack export method: you can't delete the resources because something else is importing them, so you end up having to do annoying things to make it work, like deploying once with the resource duplicated and then deploying again deleting the old stuff. It requires a little extra work to use SSM Parameters without causing stack exports, but it is worth it long term for soft dependencies.

For hard dependencies, I disagree with using a lookup, because you really do want to prevent deletion if something is using the resource, or you'll end up with a DELETE_FAILED stack, and that is a terrible place to end up. So for things like VPC/Subnets, I think it's really important to actually use the stack export/import technique. And if you do need to recreate your VPC because of a change, if you followed principle 1, you just need to do a CDK destroy then deploy and all will be good, because you built your CDK app to be fully recreatable.
When it comes to recreatability with data, CustomResources are your friend.

Those principles really shed a light on how AWS CF works, thanks! Based on that, I figure the way to go in my use case is to simply pass references to the CDK objects created by one stack to other stacks. As long as the stacks are in the same region, this synthesizes AWS CloudFormation exports in the producing stack and an Fn::ImportValue in the consuming stack.
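The hard/soft classification above can be sketched abstractly; this toy checker (all names invented for illustration) refuses to delete a resource that anything hard-depends on, mirroring the export/import deadlock:

```javascript
// Toy model of CloudFormation dependencies: deletion is blocked only by
// hard dependents (e.g. a stack importing a VPC), not by soft ones.
var deps = [
  { from: "EksStack", to: "Vpc",   kind: "hard" },
  { from: "AppStack", to: "Topic", kind: "soft" }
];

function canDelete(resource) {
  return !deps.some(function (d) {
    return d.to === resource && d.kind === "hard";
  });
}

console.log(canDelete("Vpc"));   // false - a hard dependent blocks deletion
console.log(canDelete("Topic")); // true  - soft dependents do not block it
```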
South Whidbey Commons Cafe & Books (the Commons), a 501(c)(3) nonprofit located in downtown Langley on Whidbey Island, recently received a bridge loan from the Whidbey Community Foundation (WCF) as part of WCF’s new impact investing initiative. With the bridge loan, the Commons is building a bold new vision for its café, gathering space(s), workforce development program, and the organization as a whole. WCF’s mission is to improve the quality of life on Whidbey Island by providing support for the nonprofit sector, assisting donors to build and preserve enduring assets for charitable purposes, and meeting community needs through financial support. Impact investing uses flexible investments, such as low- or no-interest loans to advance social and environmental solutions to systemic and emerging community needs, such as affordable housing, child care, and climate change. Significant benefits include supporting sustainable community development and recycling philanthropic dollars back into the communities they come from. In response to post-COVID financial challenges, WCF recently adopted an Impact Investment policy that provides for the use of tax-deductible donations to make loans to local businesses and organizations. The mission of the Commons is to strengthen community by providing an intentional space for people of all ages and backgrounds to gather, learn and grow. The programs, premises, volunteers, and job-training opportunities are designed to empower community, build skills and competencies, and create connections. A place to meet friends, make new ones, and exchange ideas to build the future of our island community. “Working with local community partners like Whidbey Community Foundation has been life sustaining for us—enabling us to both fulfill our social impact mission and financial goals,” said the Commons Board President, Wendy Cordova. 
In addition to partnering with WCF, the Commons has been working closely with local climate justice organizations Kicking Gas and rePurpose (both fiscally sponsored by regional nonprofit For The People), to develop a sustainable, equitable business plan that becomes an even more dynamic asset to the South Whidbey community and its young people in the years to come, designed to implement updated clean energy, public health, and zero waste measures while upgrading and expanding its unique training program. This exciting new partnership has plans to announce an innovative capital campaign in the spring that will help pay back the WCF bridge loan and fund this new vision. The objective is to ensure not only that the Commons can continue as a favored, long-standing meeting place in the heart of Langley, but also further evolve its role in the development of South Whidbey’s community, economy, and culture. According to Steve Shapiro, Treasurer of WCF, “This bridge loan to the Commons has the potential to result in significant positive impact for the local community. It provides the Commons with short-term capital to reinvent their business model in service to the community. When the loan is paid off, WCF can then redeploy the funds for future loans to other community projects.” More about the Whidbey Community Foundation (WCF) WCF connects people who care to causes that matter. WCF was founded in 2016 by long-time local community leaders who understand Whidbey’s needs and strengths and who are committed to making the Foundation a gateway to more meaningful relationships between donors and local nonprofit organizations. Since 2016, WCF has opened 35+ funds and made over 440 grants totaling over $2.54 million for various causes. 
More about the South Whidbey Commons Cafe and Books (the Commons)

The Commons’ job-training program for local youth, which is state certified for school credit, creates an experience where trainees can build functional and teamwork skills in a safe work environment, preparing them for success in the real world of employment. Many of the trainees are hired for paying positions at the Commons and local businesses. Returning college students earn money and help fill the schedule during busy summer months and school breaks.
A helper to organise your express routes' middleware chains

Wirexroutes is a node module that, basically, helps to organise express route definitions and their associated middlewares. I have used it only with express 3.0.

I developed this module when I took on the challenge to design, create and code a new web (REST) system from scratch for a new start-up. My concerns, as a Software Engineer, are always to try to create a modular and scalable system, bearing in mind that as much of the code as possible should be reusable and easy to maintain, and all those kinds of things that help to produce as little "spaghetti code" as possible.

Express is an awesome web application framework that provides "a thin layer of features fundamental to any web application, without obscuring features that you know and love in node.js". You build the web application routes using simple middlewares, and each route is not limited to only one, so you can "chain" different middlewares (simple functions) to fulfil all the "actions" that the request requires; take a look at the documentation for app.VERB (http://expressjs.com/api.html#app.VERB) if you don't know what I am referring to.

From my point of view, the simplicity and flexibility of express' route chaining allows you to create a bunch of middlewares which perform generic operations and use them in several routes, so you can build middlewares with reusability, testability, security management and so on in mind, and all of this is possible just using express.

But then, if express is awesome, what does this module do? Well, I wondered how I could track the chains of each route when the application starts to have a huge bunch of them, and how I could distinguish between middlewares that perform some operations and call 'next' without sending any response (remember: once a response is sent, there is no possibility to send another), middlewares that send the response, and middlewares that perform some operations after the response has been sent (i.e.
logging operations, etc.). I answered myself by building this module as a helper to organise them.

$ npm install wirexroutes

Wirexroutes is a class (constructor) that accepts the following parameters, in this order:

A wirexroutes definition object is an object which has the following properties:

The defaults options object, at the present time, only supports one property, 'method'. If it is defined and some routes don't specify the 'method' to apply, then the default will be applied; but if a route doesn't define any 'method' and no default 'method' has been provided, the wirexroutes constructor will throw an error.

A Wirexroutes instance is a simple object that has some properties; basically it holds the provided parameters under the same names, and it has one more, 'routePathWords', which is an object whose properties' names are the words used in the routes (express route parameters won't be taken into account) and whose values are arrays of integers, each one being the path position where the word appears. I reckon you're wondering why a wirexroutes instance has the 'routePathWords' property; well, I added it because at some point I needed to know the words used in my routes: users could create some content and choose the URL slug to use for it, so I needed to know which words my routes use to prevent the users' slugs from clashing with the application's routes.

This section shows a simple, basic example of how I think this module may help to organise express routes and their associated middlewares. The module is a helper; it doesn't define how to organise the several files of your web application, that is up to you, but I needed to define a layout to write this section, so I used the directory structure that I am comfortable with. Also bear in mind that this directory structure only has the directories needed for this wirexroutes example.
Note that this example is very simple, so it may be difficult to gauge whether it is helpful for all the stuff that I mentioned above.

App route path
├─┬ controllers
│ ├── public
│ ├── user
│ └── project
└─┬ routes
  ├── public.js
  ├── user.js
  ├── project.js
  └── index.js

App route path
└─┬ controllers
  └─┬ user
    ├── actions
    ├─┬ middlewares
    │ ├── pre
    │ └── post
    └── index.js

The module disaggregates the middlewares into three types: actions, pre-middlewares and post-middlewares. The chain of each route is: pre-middlewares --> action --> post-middlewares. The action is the only required one; there can be 0 to N pre/post-middlewares. Pre-middlewares are just express middlewares (refer to http://expressjs.com/api.html#app.VERB for more info); the action and post-middlewares are like express middlewares but with a different arity: the action is the last middleware registered on the express route, and the post-middlewares are not appended to the list of middlewares of the express route definition at all.

The concept is that the action will send the response to the client, although a pre-middleware may send the response if the pre-condition(s) it checks are not met, in which case it should also abort the route chain (in express terms: not calling next()). On the other hand, the post-middlewares must never send the response; they are only there to perform operations that don't affect the response to the client, for example logging, tracking, ...

I create one file for each pre/post-middleware and action and, of course, I put them into the corresponding controller's directory. I define a controller for each entity of the web application; in this basic example there are three (public, user and project). Maybe it seems that there will be lots of files, but I think that putting each one in a different file is more manageable for a development team (repository synchronization, etc.). Because I define one file per pre/post-middleware and action, I export just the middleware function, so the next code samples follow that pattern.
This is a basic pre-middleware which checks if the user is logged in, so it would be used in all the routes which require an authenticated user. i.e. file's name: checkUserAuth.js

// User has been authenticated before and his session is valid
module.exports = function checkUserAuth(req, res, next) {
  if (req.session.user) {
    next();
  } else {
    res.send(401, 'The user is not authenticated');
  }
};

This is a basic action which performs a user logout. i.e. file's name: logout.js

module.exports = function logout(req, res, post) {
  req.session.destroy(function (err) {
    if (err) {
      console.log('Error when destroying the user\'s session. ' + err);
      res.send(500, 'Application error');
      post(err, req, res);
    } else {
      res.send(200);
      post(null, req, res);
    }
  });
};

A simple example of a post-middleware would be one that manages an error reported by a preceding action or post-middleware. i.e. file's name: errorReporter.js

module.exports = function errorReporter(err, req, res, post) {
  if (err) {
    // Here you can report the error into a log, send an email or whatever
    // you would like to do with it
  }
  // And afterwards call post to continue the route's chain; the post-middleware
  // is agnostic about whether there are more post-middlewares or not
  post(null, req, res);
};

I use the index.js file defined in each controller to get all the pre/post-middlewares and actions of the controller together, so that afterwards I only need to import (require) one file rather than several. So an index.js file that references the three above samples is:

module.exports.actions = {
  logout: require('./actions/logout')
};
module.exports.middlewares = {
  pre: {
    checkUserAuth: require('./middlewares/pre/checkUserAuth')
  },
  post: {
    errorReporter: require('./middlewares/post/errorReporter')
  }
};

I use one file per entity, just as I do with one directory per controller, to define its associated routes. Each route file exports an array of wirexroutes definitions, and I use routes/index.js to get all the routes together and to create the wirexroutes instance, which registers them in express. i.e.
the routes/user.js file associated with a route which uses the above pre/post-middleware and action samples is:

var userCtrl = require('../controllers/user');

module.exports = [{
  path: '/user/logout',
  method: 'get',
  action: userCtrl.actions.logout,
  pre: userCtrl.middlewares.pre.checkUserAuth,
  post: userCtrl.middlewares.post.errorReporter
}];

and routes/index.js for this example is ('settings' is a module that I use to hold application settings and global variables; where I instantiated the express application I added it to 'settings', and here I add the Wirexroutes instance into it):

var settings = require('../settings.js');

var routes = [];
routes.push.apply(routes, require('./public'));
routes.push.apply(routes, require('./user'));
routes.push.apply(routes, require('./project'));

// Register the routes in the express application
var Wirexroutes = require('wirexroutes');
settings.wireXRoutes = new Wirexroutes(settings.expressApp, routes);

License (The MIT License) Copyright (c) 2013 Ivan Fraixedes Cugat email@example.com
Financial website Lendedu.com has released a list of the 400 U.S. cities that are best positioned for economic advancement in the next decade. Analyzing a variety of socioeconomic factors like recent income growth, population changes, and educational attainment levels for hundreds of cities in the United States, Lendedu ranked each based on how well positioned it is for economic advancement in the new decade.

Georgia earned high marks in the survey, with 13 cities making the list. Atlanta and Savannah made the top 20, while Athens made the top 50. Georgia’s cities were ranked as follows:

#196 Warner Robins

The data that was used to develop this report came from two sources: the U.S. Census Bureau and the Bureau of Labor Statistics. Every data point other than the unemployment rate statistics came from the former, while that one came from the latter. In total, 380 of the largest metropolitan areas in the United States had data for each data point and were analyzed for the report’s rankings. For two data points, the percent change in the employment-to-population ratio and the percent change in the unemployment rate, multiple years had to be individually pulled to complete the final calculation used in this report. Almost all of the data that came from the U.S. Census Bureau was pulled from the American Community Survey.

The following data points were used to evaluate each city:

- Percentage of 18-24 Year Old Population With At Least an Associate’s Degree: The percentage of a city’s 18-24 year old population that holds at least an Associate’s Degree (Source: U.S. Census Bureau; seen in table as “% of Pop. 18-24 Yrs. W/ Min. of Assoc. Degree”).
- Net Business Openings: The net number of businesses that have opened in each city from 2015 to 2016. Closed businesses were subtracted from opened businesses during that time (Source: U.S. Census Bureau; seen in table as “Net Business Openings (’15-’16)”).
- Net Population Change: The net population change in each city from 2010 to 2018. People leaving each city and mortality statistics were subtracted from people moving into each city and birth statistics during that time (Source: U.S. Census Bureau; seen in table as “Net Population Change (’10-’18)”).
- Percent Change in Income: The percent change in average income in each city from 2017 to 2018 (Source: U.S. Census Bureau; seen in table as “% Change in Income (’17-’18)”).
- Number of Residential Building Permits: The number of residential building permits issued in each city in 2018 (Source: U.S. Census Bureau; seen in table as “# of Res. Building Permits in 2018”).
- Percent Change in Employment to Population Ratio: The percent change in the employment to population ratio in each city from 2015 to 2017 (Source: U.S. Census Bureau; seen in table as “% Change in Employment to Pop. Ratio (’15-’17)”).
- Percent Change in Unemployment Rate: The percent change in the unemployment rate in each city from 2015 to 2019 (Source: Bureau of Labor Statistics; seen in table as “% Change in Unemployment Rate (’15-’19)”).

To complete the rankings, each individual city was ranked amongst its peers for each individual data point. After that was completed, each city’s respective rankings for each of the seven data points were averaged together to formulate an overall ranking for each city. You can view the complete report here.
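As a rough illustration of the rank-then-average methodology described above (this is not LendEDU's actual code; the function, city names and figures are invented for this sketch):

```python
def overall_ranking(cities, higher_is_better):
    """Rank cities on each metric, then average the per-metric ranks.

    cities: dict of city name -> dict of metric name -> value.
    higher_is_better: dict of metric name -> bool.
    Returns city names sorted best-first by average rank.
    """
    names = list(cities)
    avg_rank = {}
    for name in names:
        ranks = []
        for metric, better_high in higher_is_better.items():
            # Sort all values so the best value comes first, then find this
            # city's 1-based position among its peers (1 = best).
            ordered = sorted((cities[n][metric] for n in names),
                             reverse=better_high)
            ranks.append(ordered.index(cities[name][metric]) + 1)
        avg_rank[name] = sum(ranks) / len(ranks)
    return sorted(names, key=lambda n: avg_rank[n])

# Invented example: income growth (higher is better) and change in the
# unemployment rate (lower is better).
sample = {
    "Cityville": {"income_growth": 5.0, "unemployment_change": -2.0},
    "Townburg": {"income_growth": 3.0, "unemployment_change": -1.0},
}
print(overall_ranking(sample, {"income_growth": True,
                               "unemployment_change": False}))
```

Ties and the exact weighting LendEDU applied are not addressed here; the sketch only shows the shape of the calculation the report describes.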
Configuring an AFP Share 3 minute read

Apple Filing Protocol (AFP) is a network protocol that allows file sharing over a network. It’s similar to SMB and NFS; however, it was made to work flawlessly on Apple systems. In this document, you will learn how to create and connect to a general purpose AFP share.

The AFP protocol has been deprecated by Apple. Beginning in 2013, Apple began using the SMB sharing protocol as the default option for file sharing and ceased development of the AFP sharing protocol. It is recommended to use SMB sharing over AFP unless files will be shared with legacy Apple products. For further information please read: https://developer.apple.com/library/archive/documentation/FileManagement/Conceptual/APFS_Guide/FAQ/FAQ.html

To get started, make sure a dataset has been created. This dataset serves as the share's data storage. If a dataset already exists, proceed to turning on the AFP service.

Go to Sharing > Apple Shares (AFP) and click ADD to create an AFP share. You will need to confirm that you want to Continue with AFP set up. Next, use the file browser and select a dataset to share. Enter a descriptive name for the share. If this share is to be used for Time Machine backups, set Time Machine. This advertises the share as a disk for other Mac systems to use as storage for Time Machine backups. It is not recommended to have multiple AFP shares configured for Time Machine backups. If desired, you can set Use as Home Share. When this setting is enabled, users that connect to the share have home directories created for them. Only one share can be used as a home share. At the time of creation, the AFP share is enabled by default. If you wish to create the share but not immediately enable it, unset the Enable checkbox. Clicking SUBMIT creates the share. Opening the ADVANCED OPTIONS allows you to modify the share's permissions, add a description, and specify auxiliary parameters.
Existing AFP shares can be edited by going to Sharing > Apple Shares (AFP) and clicking . If you chose to enable the service when you created the AFP share, it will be running. If not, to turn the AFP service on, go to Services and click the slider for AFP. If you wish to turn the service on automatically when the TrueNAS system turns on, check the Start Automatically box. The AFP share does not work if the service is not turned on. The AFP service settings can be configured by clicking . Select the database path used for the AFP share. Don’t forget to click SAVE when changing the settings. Unless a specific setting is needed, it is recommended to use the default settings for the AFP service.

Connecting to the AFP Share

Although you can connect to an AFP share from various operating systems, it is recommended to use a Mac operating system. First, open the Finder app. Click Go > Connect to Server… in the top menu bar of the application. Next, enter afp://IPofTrueNASsystem and click Connect. For example, entering afp://192.168.2.2 connects to the AFP share on a TrueNAS system with the IP address 192.168.2.2. By default, any user that connects to the AFP share only has the read permission.
Unique Pupil Numbers (UPNs)

A Unique Pupil Number (UPN) is a number that identifies each pupil in England uniquely. A UPN is allocated to each pupil according to a nationally specified formula on first entry to school (or in some cases earlier), and is intended to remain with the pupil throughout their school career regardless of any change in school or local authority (LA). A similar compatible system has been introduced for learners in maintained schools in Wales. Independent schools are not required to issue UPNs for their learners, although many have done so on a voluntary basis.

It is important to ensure that each pupil on roll at your school has a valid UPN. UPNs are generated automatically by your Management Information System (MIS) when a pupil first enters the maintained schools sector using the following template:

If a pupil has joined you from another school he or she should have been issued with a UPN, which will be transferred electronically in a Common Transfer File (CTF). Contact the pupil’s previous school and request a CTF. If you do not have the contact details then use the government website. Please avoid issuing a temporary UPN if at all possible - check with the previous school to obtain the permanent UPN instead. If a pupil attends two schools, the school at which they spend the majority of the time should issue a UPN.

What to do if you issue a UPN, then find that the pupil already has a UPN

If this happens then you will need to remove the most recent UPN (the one issued by yourself) and replace it with the original UPN. The UPN assigned first should always be the one used. If you need advice on how to do this please contact your ICT MIS support provider for further guidance. However, if a UPN has been used for registering a pupil for Key Stage assessments, then that UPN should be kept for data continuity purposes. Any previous UPN will automatically be recorded as a “Former UPN”.
Allocating a UPN

UPNs should be allocated on a pupil’s first entry to a maintained school in England, including entry to a nursery school or a nursery class in an infant or primary school. There are 3 situations which schools should consider when issuing UPNs:

- Pupils who are entering school for the first time, including entry to a nursery class, should be allocated a permanent UPN;
- Pupils transferring to the school from another state funded school in England should already have a UPN. If the previous school:
  - has passed on the pupil's UPN, then the school should adopt that UPN. However, if the UPN provided is invalid, then the school should (a) check that it has been keyed in correctly, and (b) check with the previous school (or the LA) that it has been provided correctly. If it still proves to be invalid the school should allocate a permanent UPN to the pupil instead.
  - has failed to pass on the pupil's UPN, then the school should try to retrieve the UPN from the Get Information About Pupils service (see section below). If, after a temporary UPN has been allocated, an earlier permanent UPN for the pupil is retrieved, then the school should replace the temporary one with the earlier UPN. If there is not an earlier permanent UPN available then schools should issue the pupil with a permanent UPN.
- Pupils transferring to the school from a non-maintained school or from any school outside England are unlikely to have been allocated a UPN, but you can check for these on the Get Information About Pupils service. Some, however, may have been issued a UPN if their school had the means to do so, or some may have been allocated a UPN by their LA in some circumstances. Where the pupil has been allocated a UPN and the UPN is correct and valid, that UPN should be retained.
Checking for a UPN on the Get Information About Pupils service

Get Information About Pupils (GIAP) allows users to search for and download pupils’ identifiers (including UPN), end of Key Stage results data and their contextual indicators, including Pupil Premium information. GIAP allows all users to search by names and date of birth. To access the GIAP website you need to log on to the DfE Secure Access website (DfE Sign-in, education.gov.uk) and then select Get Information About Pupils. Each school has a super user/approver who controls access for school users and can add these services to your log-on. Unfortunately the LA is unable to assist with log-on problems, but help resources are available by clicking on the ‘Need Help?’ button on the initial log-in pages. Once you have located the pupil record on GIAP you can download the information in CTF format, which will import into your MIS and save you having to type the details in.

Further information about UPNs can be found on the DfE website: DfE UPN guidance
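As an illustrative sketch only (mine, not DfE code), the general shape of a UPN — a leading check letter followed by 12 digits, with temporary UPNs ending in a letter in place of the final digit — can be sanity-checked before contacting the previous school or GIAP. Note that the official DfE validation also recomputes the leading check letter using the algorithm in the DfE UPN guidance, which this sketch does not attempt:

```python
import re

# Illustrative UPN shape check only -- the official DfE validation also
# verifies the leading check letter, which is not attempted here.
# Assumed shapes: check letter + 12 digits (permanent), or
# check letter + 11 digits + trailing letter (temporary).
PERMANENT_UPN = re.compile(r"^[A-Z][0-9]{12}$")
TEMPORARY_UPN = re.compile(r"^[A-Z][0-9]{11}[A-Z]$")

def upn_kind(upn):
    """Classify a UPN string by shape: 'permanent', 'temporary' or 'invalid'."""
    if PERMANENT_UPN.match(upn):
        return "permanent"
    if TEMPORARY_UPN.match(upn):
        return "temporary"
    return "invalid"

print(upn_kind("A123456789012"))   # a permanent-shaped UPN (made-up value)
print(upn_kind("A12345678901B"))   # a temporary-shaped UPN (made-up value)
```

A check like this could be run by an MIS import routine before adopting a UPN from a CTF; anything classified "invalid" would trigger the keying/previous-school checks described above.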
[wip] platform.gcc.arch: support for AMD CPUs

support for AMD CPUs in localSystem.hostPlatform.platform.gcc.arch

I have (or can rent) a test park of

[ ] AMD Opteron(tm) X2150 APU (corresponds to -march=btver2)
[ ] AMD Opteron(tm) Processor 3365 (corresponds to -march=bdver2)
[ ] AMD EPYC 7401P 24-Core Processor (this one is problematic, -march=bdver4 as well as -march=znver1 enable too many features; the GCC flags should be either -march=bdver4 -mno-fma4 -mno-tbm -mno-xop -mno-lwp or -march=znver1 -mno-clzero -mno-mwaitx -mno-xsaves, which is impossible to set within the constraints of #59225)
[x] AMD Ryzen 5 2400G with Radeon Vega Graphics (corresponds to -march=znver1)

cc: @matthewbauer @7c6f434c

Isn't this a huge pain to maintain? Every time a new architecture is added, every package that uses SSE/AVX will need a new special case added, even if the architecture doesn't add any new features that are relevant to the package. How will you ensure that packages don't get missed when architectures are added in the future? Right now I'm working on a package that uses SSE, and I need a special case for each architecture, even though all of the cases except for the default are the same. It seems like this would be much more maintainable if there were functions that could be used to decide whether a particular platform supports a certain feature. This would only require changes in a single centralized place when a new architecture is added.

Yes, tables like this are better to have somewhere in lib with a possibility to query for a particular CPU feature. BTW, the package I am adding a CPU feature table to is g2o, in #61655.

This pull request has been mentioned on NixOS Discourse. There might be relevant details there: https://discourse.nixos.org/t/targeting-particular-x86-64-generation-haswell-ivybridge-skylake/2280/27

This looks good to me, sorry I didn't see it earlier. CC @matthewbauer And yes I would like to see the tables move into lib eventually too.
For lib, could we perhaps reuse https://github.com/archspec/archspec-json/blob/master/cpu/microarchitectures.json? The archspec package is broken out of the Spack package manager for HPC. https://discuss.python.org/t/archspec-a-library-for-labeling-optimized-binaries/3149

Any chance this could land in master soon?

yes, it might need adjustments for gcc 8, 9, 10 and clang 9, 10, 11

Yes, tables like this are better to have somewhere in lib with a possibility to query for a particular CPU feature.

For simplicity, I plan only to add AMD architectures here next to Intel and leave moving the tables to lib for another PR

Any chance this could land in master soon? I have a znver1 machine at hand to help with testing.

@vorot93 Please cherry-pick the PR to your copy of master to test. I rebased it, added AMD CPUs (including znver2 (Zen 2), which was not yet available) and moved the CPU tables under lib/ as many people suggested here. It should be ready to test

@volth if you need additional testing, I have a 3990X, and a 3900X at home.

That would be nice. I think we need to end up with a documentation page like https://wiki.gentoo.org/wiki/Ryzen after collecting testing results. It is especially important for AMD, whose processors do not correspond to -march= values exactly (like the EPYC 7401P case in the first message).

Should we integrate this with lib/systems/parse.nix more? Isn't sandybridge-unknown-linux-gnu a valid triple, for example?

Hm, I do not know. Does the 1st component of the triple always equal the -march= value?

No, but when it does (e.g. armv7-m) we track both.
See https://github.com/NixOS/nixpkgs/blob/70cfd9d25e1d4f5a40f5d0a518f0749635792667/lib/systems/parse.nix#L73

Also, gccarch might be extended from the -march= value to a list of gcc flags (to cover weird CPUs like the AMD EPYC 7401P 24-Core Processor described above), and that would not fit into a single word in the triple

We should still distinguish the march flag, but sure we can have some "extra flags" needed to fill the difference. See https://github.com/NixOS/nixpkgs/blob/70cfd9d25e1d4f5a40f5d0a518f0749635792667/pkgs/development/compilers/gcc/common/platform-flags.nix We should use targetPlatform.parsed.arch, if it exists, to be another way to provide the march, and then can also provide the extra flags however you like.

@volth I don't want to hold this up indefinitely / let the perfect be the enemy of the good, but based on my last comment did you take a look at integrating with parse.nix some more? I just want to rein in complexity wherever I can, and also rein in the potential for arch-related info to drift out of sync.

Is there a command I can run to verify that my 3990X and 3900X match the extensions used in builds?

I am not sure I understood the spirit of the triples and how far the changes could go (where would "sandybridge" of "sandybridge-unknown-linux-gnu" come from? Will there be a "sandybridge-linux" system along with "x86_64-linux"? How would it affect meta.platforms? etc.). It would be faster and easier if you make the changes here or in a new pr
Fair enough.

This pull request has been mentioned on NixOS Discourse. There might be relevant details there: https://discourse.nixos.org/t/tensorflow-slower-as-nixos-native-than-inside-a-docker-container/9967/4
Best Practices Managing the Product Backlog | David Tzemach

Updated: Mar 3, 2022

Although I hate to use the term Best Practices, I still think that it’s the best way to describe the items below, which are based on the experience I have gained over the years.

The Product Owner has final authority, but everyone should be involved

In Agile, we embrace teamwork and full collaboration among team members. Therefore, I always explain to my teams that although the PO has the responsibility and final authority over the product backlog, he must work with, collaborate with, and respect other team members’ opinions, as the team as a whole has a decisive part in maintaining, managing and updating the backlog.

Keep your top stories with rough estimations

The top prioritized stories of the product backlog must include a rough completion estimation. This is usually determined by the development team during the backlog refinement sessions that are part of any sprint. I suggest using relative estimations such as story points. This allows the PO to gain a high-level understanding of how many stories the team can deliver in the next sprint (based on their velocity). It also enables the team to reduce the time taken to estimate stories during planning sessions.

Know that continuous change is part of the game

The product backlog is a living document that will change numerous times during the project. Once the backlog is built with the first list of stories, you must ensure it is maintained daily. That way, it will continue adding value to the customer (this is not as simple as it sounds; I have seen too many projects fail because of POs who neglected their backlogs).

The product backlog must always be visible and accessible

Management of the backlog can be done using a dedicated application such as JIRA or TFS, or with an old-fashioned paper-based backlog. However, once created, it should be visible and accessible to all stakeholders.
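The rough-estimation idea above can be illustrated with a small hypothetical sketch (not tied to any tool; the numbers are invented): given the team's velocity and the story-point estimates of the top-priority stories, the PO can see how many of them are likely to fit in the next sprint.

```python
def stories_that_fit(ordered_estimates, velocity):
    """Count how many top-of-backlog stories fit within the team's velocity.

    ordered_estimates: story-point estimates, highest priority first.
    velocity: story points the team typically completes per sprint.
    """
    total = 0
    count = 0
    for points in ordered_estimates:
        if total + points > velocity:
            break  # stop at the first story that would exceed the velocity
        total += points
        count += 1
    return count

# Invented numbers: a team with a velocity of 20 points and five
# prioritized, roughly estimated stories.
backlog_top = [8, 5, 5, 3, 8]
print(stories_that_fit(backlog_top, 20))
```

This is only a forecast aid; in practice the team commits to a sprint scope during planning, and the rough estimates on the top stories simply make that conversation faster.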
Not too little detail and not too much

The backlog is a container of user stories the team will use during sprints. Once the stories are added and prioritized, the team understands which stories are more likely to be implemented first. Those stories that are prioritized at the TOP of the list should always contain more details than the stories that are lower on the list. To ensure the stories are detailed at an appropriate level, one must ensure the stories contain enough information. The information must enable the team to understand the definition of done, the acceptance criteria and any other technical information that will help them meet the “goal” of the story. And what about those stories which are not prioritized in the top places? Don’t worry about them; there is no need to invest time and effort to add details. There is a good chance that those stories will change before they become relevant (or are removed).

Remove dependencies to create independent stories

I always say during my lectures that creating a backlog that can truly serve the team is an art. This is why I always ensure my teams understand the whole picture prior to adding new stories. One way is to add a new section to each story that maps its dependencies on other stories. If there are more than two dependencies on other stories, the story will not be added in its current form. The team will have to break it down in order to make it as independent as possible.

The Product Owner can say no as long as he respects the other side

The product backlog can contain a mixture of both requirements and technical stories. To keep the backlog in good shape, the PO should know when to decline or approve new stories arriving from both directions (customer/team). Without this ability, the backlog will most likely be updated with stories that affect the release goals and will make it more difficult for the team to deliver an incremental product at the end of each sprint.
In my teams, I always ensure that the PO does not use this authority in a negative direction; this is the main reason why he is allowed to decline stories requested by either the development team or the customer. The PO should discuss each story with the relevant party, so that they have the opportunity to share their thoughts and explain why and how the story contributes to the overall value of the product.

Keep the 1:1 ratio – one backlog per product

In the last few years, I have been involved in both small projects and large-scale projects. In both types of environments, it was clear that the best way to manage the product was to use a single backlog per product. This simplified the project and increased the ability to manage the teams effectively. The following are different variations of backlogs that can be implemented:

Enterprise Backlog – the TOP backlog in the project hierarchy; it contains the large-scale goals of the product, and a dedicated Product Owner manages this backlog.

Area Backlogs – backlogs created based on the goals defined in the enterprise backlog, containing features to be divided among the different departments.

Product Backlogs – these backlogs contain epics and stories based on the features defined in the area backlogs.
This section is experimental; here is a brief description of each of the tabs:

Not currently used.

This tab shows code that is processed by inserting it before the abc tune (as edited in the first tab). By looking at this tab, you can see what data abc2sn is inserting before your tune. Look at the "preface area" when you select normal notes [N] and contrast that after you select 7 shape notes.

This tab shows abc lines that are appended to the tune being edited. Look at the epilog area with "leadsheet" (the guitar icon) selected, and then watch as the "leadsheet" icon is deselected.

By exposing the "under the covers" data that abc2sn is generating, you may (1) get more insight into how abc2sn is achieving the output, and (2) experiment with "cut and paste" insertion of the code into your abc tunes. If you have feedback on this function, feel free to post comments to the yahoo group "abcusers" or leave feedback at www.projectnotions.com.

An easy to use abc music notation editor. Includes the ability to use shape notes and to play tunes. A few sample tunes are available (menubar -> Load Sample -> [Amazing Grace] or [NEW BRITAIN (45)] or [Speed the Plough] or [abc template])

Shape Note Glyphs Information

The abc2svg.js program allows the specification of note head glyphs. For more details reference the full specification of abc2svg. See in particular the section on the "map" directive. The glyph definitions below are sent to abc2svg "under the covers" when "Type of Notes: 'Seven' or 'Four'" is selected. When 'Normal' is selected, no glyph definitions are sent to abc2svg. If you desire, you can cut and paste these definitions into the input area and experiment with how they work.
The abcsnglyphs object provides:
- The version number of abcsnglyphs
- The date the version was released
- The date of the glyphs
- An array of the version numbers of the glyphs, matching the glyphs in the glyphs property
- An array of the release dates matching the glyphs in the glyphs property
- The array of glyph definitions

The getSnGlyphs() method can be used to return the most current set.

getVoicemapParam(shapes), where shapes is "4" or "7": the return value is the %%map name given to the 4 shape note set or the 7 shape note set. For example:

var shapemapname = abcsnglyphs.getVoicemapParam("7") // shapemapname is "7shape"

Another method takes two parameters: shapes, which is "4" or "7" or "both", indicating what set of SVG and style directives to return; and nohtmlchars, which if "true" causes the returned string to have "&lt;" instead of "<" and so on.
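As a rough illustration (not the actual abc2sn source), the shape of the abcsnglyphs object described above might be modeled like this. All property names and the "4shape" return value are hypothetical; only getVoicemapParam and its "7shape" return value for seven-shape notes come from the example in the text:

```javascript
// Hypothetical model of the abcsnglyphs object described above.
// Property names are invented for illustration; only
// getVoicemapParam("7") -> "7shape" is taken from the text.
var abcsnglyphs = {
  version: "1.0",   // version number of abcsnglyphs (made up)
  glyphs: [],       // the array of glyph definitions
  getSnGlyphs: function () {
    // Returns the most current set of glyph definitions.
    return this.glyphs;
  },
  getVoicemapParam: function (shapes) {
    // shapes is "4" or "7"; returns the %%map name of that shape note set.
    // "4shape" is an assumed name, mirroring the documented "7shape".
    return shapes === "7" ? "7shape" : "4shape";
  }
};

console.log(abcsnglyphs.getVoicemapParam("7")); // "7shape"
```

The real object presumably carries the full SVG glyph definitions in its glyphs array; this sketch only mirrors the documented call shape.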
That's probably because nobody was online and a0.5 doesn't have NPCs.

That bug popped up at least three times now. Kubik (?) and I tested it thoroughly back when he had it, and it was like this:

Client \ Server | Kubik's server | jrb0001's server
Kubik           | failed always  | worked always
jrb0001         | worked always  | worked always

So I can't see how that can be related to a network issue.

You have to be in that directory or the script probably won't work (it can't find the files, so I can't check). Use "cd <directory>" to change the current directory to the directory where that script is, and then type "StartClientConsole.bat".

I could host the game for you. I have no idea how playable it is on my strong server, but I could also run it on the tachyonuniverse one (which works as long as there aren't 140 inactive + 1 active players at spawn). All my servers are running 24/7, but the strong server will have a scheduled downtime this weekend because of a hardware upgrade.

That thread is way too big to find anything. Are the guides on the wiki (https://tachyongame.miraheze.org/wiki/Guides/Add_new_Ship, https://tachyongame.miraheze.org/wiki/Guides/GenShips) still valid?

1. The server has been listed in the OP in the megathread on the FTL forums since a0.1.
2. How can you tell somebody that many things he wrote are wrong without being rude? I hate marketing bla bla, so I had to write such a (rude) reply. I see that you modified the OP and all the wrong information is gone.
3. As a server admin, you should always inform your (future) players about important things. If the server is down for more than a few minutes, you should announce that as soon as possible.
4. If you need any help hosting the server, just ask me.
5. I think the biggest problem is that the community is split into two parts: the megathread on the FTL forums and the forum here on itch.io. I didn't check this forum in the last 6 months, and some / many players don't know about the megathread at all.

Unable to connect.
I present to you, the Biggest (and onlyest) Tachyon Server!

Nope, the database of my server currently has 474 players. I played on at least 4 different servers: my old one, my current one, the CBT one, and one from another player to check issues / bugs.

"Server always uses the latest version" – My current server runs on a0.5.1 because that's the last one which allows players to join later. The a0.6.x / a0.7.x series is strongly story-based and thus unusable for an MMO type of server.

In case somebody else finds this thread and can't connect to your server, the address to mine is "tachyonuniverse.692b8c32.de" (Port 30303). There is also an automatic launcher: https://static-692b8c32.de/tachyonuniverse/

The United Fleets of The Unity and The RSP are looking for additional crew! While preparing for the largest battle in our history, The United Fleets need new crew to man their brand-new ships. Have you ever dreamt of flying your own battleship? Of shooting around in space and saving the universe? Of helping a huge fleet to succeed? Then it's your time! Apply now, our final battle will begin on Sunday, 2016-08-14 at 10:00 UTC and we need as much crew as we can get!
block firefox local network access (CORS?)

I could be off on this one, so please forgive me if so... According to this article: https://arstechnica.com/information-technology/2022/01/new-chrome-security-measure-aims-to-curtail-an-entire-class-of-web-attack/ web browsers have the ability to allow remote web servers access to local network resources, i.e. localhost on the browsing computer, and potentially other network devices behind the browser. It's enabled by default and there's no UI option to disable it. This was news to me and is a bit terrifying. Why in the world would anyone ever allow a remote site to bounce commands through their browser to other devices internally, presumably protected by a network firewall to prevent exactly that?! Can someone explain this to me? When I google "disable CORS", it appears as though that would make things more open and less secure. I want to disable the capability outright. I found one mention of a possible answer at https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS/Errors/CORSDisabled that references "content.cors.disable", but I can't find a mention of that setting anywhere else, and the link doesn't expand on exactly what it does. The name sounds like what I want, but I don't want to make that assumption and end up making things worse. If anyone can shed light on this, it would be appreciated. If this is the wrong forum, please let me know which SE site I should post it to. Thank you.

The article you read was written in a sensational manner and is only new in its degree of sensationalism. In effect, browsers are pretty well secured against external attack, and local resources are very well protected. Cross-Origin Resource Sharing (CORS) already exists on all the major browsers, defined as: a mechanism that allows restricted resources on a web page to be requested from another domain outside the domain from which the first resource was served.
This means that an innocent-looking advertisement on Amazon cannot use JavaScript to download code from a malicious third-party site. You will find more information in the Mozilla article Cross-Origin Resource Sharing (CORS) about the restrictions on accessing third-party websites. The article you saw talks about Chrome tightening CORS further than it is today. If you disable CORS, you will leave your browser wide open to such attacks, which is the opposite of your wish. If you wish to further improve the protection against malicious JavaScript, you may use an extension such as NoScript that disallows the execution of all JavaScript from websites that have not been white-listed manually by yourself.

Thank you. The concerning part was about browsers granting access to devices within the private network of the browser user, i.e. that browsers can somehow expose internal networks to external hosts. I get one website referencing another, but the violation of a network perimeter is what really concerned me. Can you elaborate on that?

Absolute rubbish: CORS is only one of your defenses against rogue JavaScript, as there are others. JavaScript executes in a sandbox and cannot get access to any device without your manual authorization. Security bugs may exist, but are usually quickly plugged. That article was just cheap sensationalism by someone who understood nothing about the subject.
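To make the mechanism concrete: at its core, the CORS read check the answer describes amounts to the browser comparing the requesting page's origin against the Access-Control-Allow-Origin header the server sends back. This is a deliberately simplified illustration (real browsers also handle credentials, preflight requests, and allowed header lists), not browser source code:

```javascript
// Simplified sketch of the CORS response check a browser performs
// before letting a page read a cross-origin response.
function corsAllowsRead(pageOrigin, allowOriginHeader) {
  if (allowOriginHeader === "*") return true;  // server opted in for everyone
  return allowOriginHeader === pageOrigin;     // otherwise: exact origin match
}

// A page on https://example.com reading a response from an API that
// answers with "Access-Control-Allow-Origin: https://example.com":
console.log(corsAllowsRead("https://example.com", "https://example.com")); // true

// A page on another origin is denied the read unless the server opts in:
console.log(corsAllowsRead("https://evil.test", "https://example.com"));   // false
console.log(corsAllowsRead("https://evil.test", "*"));                     // true
```

The point of the answer stands in this model: "disabling CORS" removes the check, so every origin gets the `true` branch.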
Eudora is a venerable email program that many people still find easy to use, and it was recommended by GreenNet before we switched to suggesting the free software Thunderbird (incidentally, the free/open source Eudora OSE 1.0 is based on Thunderbird). The specific advice and settings given below apply to GreenNet-hosted email accounts. If your account isn't hosted with us, please check with your ISP for the correct settings to use with their servers.

You may occasionally find problems either sending or, more rarely, receiving email, particularly when you are away from home. Typically Eudora just spins its yin-yang symbol for a bit and eventually shows a task error "Could not connect to smtp.gn.apc.org". This is usually because providers of network connections in hotels, academic institutions and conference centres, for example, want to prevent spam being sent through their network. Therefore they have blocked the usual "port" numbers used by email programs. This problem of not being able to connect through certain semi-public networks isn't unique to Eudora, but one traditional shortcoming of Eudora is that it does make it slightly harder to change the usual port numbers. By the way, if you're not sure what version of Eudora you're using, just look under the "Help" menu and "About Eudora".

The first thing to check is that you can access websites, for example GreenNet webmail, which you should be able to use while Eudora is out of action. If you can't see websites or get a warning from the provider of the connection, then the connection itself is something you will need to look into before thinking about your email program. If you are having problems that can't be solved by changing any of the settings mentioned below, you might want to talk to whoever is responsible for IT support wherever you are, as they may be blocking more ports than necessary.
Sending errors not related to ports

If you get an error like "SSL Negotiation Failed" or "Server SSL Certificate Rejected", then probably Eudora is not accepting the secure certificate from the server because that certificate is too recent. You can either try to install the correct certificate, install the root certificate, or turn off SSL security. See sending email via SMTPS for more information.

If you get an immediate error from Eudora, within a second or two, it may be something like a password problem. Typically Eudora will show the error "450 4.7.1.... Client host rejected: Service unavailable" or "Client host rejected: Access denied", which means the port number is fine but you are not authenticating with the correct password. Occasionally Eudora seems to do this spontaneously, possibly because of an anti-virus program. Just closing Eudora and re-opening, or rebooting the computer, may resolve this. If it doesn't, check that "allow authentication" is on in the "Sending Mail" settings, as below.
- It can also be that some personae authenticate but others don't: click on the right-most tab on the left-hand pane to bring up the various addresses, right-click on the persona you are sending as, choose "Properties", find the option about "allow authentication" or "authentication allowed" and tick that, then OK.
If that still doesn't work, go to Special > Forget Passwords and, when checking mail, re-enter the password (the same one as you might use for webmail).

Eudora for Windows (versions 5.1 to 7.1 inclusive)

Firstly, enabling the "Ports" category on Eudora for Windows may be useful if the basic settings are correct but there are still problems.
Here's how to do it: - Close down Eudora for the time being - Start Windows Explorer (either open "My Computer" or press <Windows key>+E) - Navigate to My Computer > C: > Program Files > Qualcomm > Eudora (it may just be Eudora) - Find the extrastuff folder in that, and navigate inside it (you could also search for it from the Start Menu if you can't find it) - There should be a file called esoteric.epi. Right-click on this file, and from the context menu choose "Copy". - Go back (up) one level and go to Edit > Paste, to create a copy of esoteric.epi in the main Eudora folder - Restart Eudora - Go to "Tools" > "Options" and scroll down the categories to check that the "Ports" category is there. Check the basic settings: - Click on the "Sending Mail" icon in the options. - What do you have in the smtp server box? Any of "mail.gn.apc.org" or "smtp.gn.apc.org" or "smtp.greennet.org.uk" or just blank should work (although "smtp.gn.apc.org" is preferred). - "Allow authentication" should always be on. If you've been having problems sending and changed anything here, it's worth trying "Send Queued Messages" now. If you're still having problems sending, try using the alternative port. Try each of these combinations in turn in the Sending Mail category, testing after each one: - Tick "Use submission port (587)" and under "Secure sockets when sending", select "Required, STARTTLS". (If you get a certificate error, at least it means you're able to connect. You can either just accept the certificate, or see here.) In theory, this is the setting that should work in the largest number of locations. - Untick "Use submission port (587)" and under "Secure sockets when sending", select "Never" (note that this is not secure so make sure you're not sending anything really private). - Ensuring "Use submission port (587)" is still unticked, try "Required, alternate port". (This is secure, that is, encrypted). If you are still unable to send, this is where the esoteric settings come in. 
Scroll down to the bottom category of options, which should be "Ports". Select that, and under SMTP port, change "25" to "2525". Try again. If that still doesn't work, contact the institution or internet provider to ask about their firewall.

In the unlikely event you can send but not receive, the process is similar. In Options, select the "Checking Mail" category. Here try the following settings:
- Set "Secure Sockets when Receiving" to "Required, Alternate Port"
- Set "Secure Sockets when Receiving" to "Required, STARTTLS" (secure, and the preferred setting)
- Set "Secure Sockets when Receiving" to "Never" (not encrypted, so theoretically someone providing the connection could scan your incoming email)
If you are still unable to receive, you might want to try creating a new persona; if you were using POP, try IMAP, or vice versa. If that doesn't work, pretty much all legitimate ports are blocked and the institution doesn't seem to want you to retrieve your email (this is an unusual case, so please do contact us if you need access to your mail some other way).

Eudora for Mac

This is how to enable the "Ports & Protocols" category:
- Close down Eudora for the time being
- In Finder, go to Applications > Eudora Application Folder.
- Click once on the Eudora icon, then click File > Get Info
- There should be a section "> Plug-ins" towards the bottom. Open that by clicking on the arrow.
- Make sure any item marked "Esoteric Settings" is ticked.
- Close the Eudora Info window and restart Eudora
- Go to Eudora > "Preferences" and scroll down the categories to check that the "Ports & Protocols" icon is there.

Now check the basic settings:
- Go to Eudora > Preferences... and click on the "Sending Mail" icon
- What do you have in the smtp server box? Any of "mail.gn.apc.org" or "smtp.gn.apc.org" or "smtp.greennet.org.uk" or just blank should work, although "smtp.gn.apc.org" is preferred.
(If it is followed by a colon and a number, try changing that number from "587" to "2525" or vice versa and try sending again, or try ticking "submission port (587)".)
- "Allow authorisation" should always be on.
- Scroll down the categories until you get to "SSL". Select the relevant personality, and under "SSL for SMTP", select "Optional (TLS)".
If none of the above works:
- make sure "use submission port (587)" under Sending Mail is not ticked (and there is no colon and port number in SMTP server)
- scroll down even further and you should see "Ports & Protocols". There, next to SMTP port, put "2525" and click "OK"
- try to send again.
In short, if you can't send, try 587, and if where you are is blocking that too, try 2525.

Diagnosis: confirming blocked ports

On occasion, problems with Eudora or some recently-installed antivirus software or firewall may cause similar problems. To confirm that the problem is port blocking on the connection, you can start a terminal to run telnet. On Windows, click Start > "Run..." and type "cmd" to get a Command Prompt box. For a Mac, go to Finder > Applications > Utilities > Terminal. To check whether port 25 is blocked, for instance, type:

telnet smtp.gn.apc.org 25

and press the Enter key. It should connect and you should see "220 mail.gn.apc.org ESMTP Postfix [ NO UBE C=GB ] You are neither permitted nor authorised to send unsolicited bulk email...". If you don't, or it just hangs there, then probably that port is blocked. Try the same thing, but with 587 instead of 25. Other possibilities are 2525 or 465 (which won't give the message if it connects). If you get a completely different message, it may be an antivirus proxy, either on your computer or on the network you are connected to. If you are using Windows Vista or 7, you may get an error that there is no such command as "telnet". You can enable the telnet client with Start > Control Panel > Programs And Features > Turn Windows features on or off > tick "Telnet Client" > OK.
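The trial-and-error order recommended above for the Sending Mail settings can be written down as data. This is purely an illustrative summary of the guide's sequence (it is not part of Eudora or of GreenNet's tooling, and the function name is invented):

```javascript
// The order of SMTP settings the guide suggests trying when sending fails.
// Each entry is a (port, SSL setting) combination for Eudora's Sending Mail
// options. Port 465 is the usual "alternate port" for SMTPS; 2525 needs the
// esoteric "Ports" category described above.
function smtpFallbackSequence() {
  return [
    { port: 587,  ssl: "Required, STARTTLS" },       // submission port, preferred
    { port: 25,   ssl: "Never" },                    // plain SMTP (not secure)
    { port: 465,  ssl: "Required, alternate port" }, // encrypted alternate port
    { port: 2525, ssl: "Never" }                     // esoteric "Ports" override
  ];
}
```

Walking this list in order, testing "Send Queued Messages" after each change, reproduces the procedure above.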
I am trying to get a better understanding of what the "context" object is in Vuex. The context object is referred to numerous times in the Vuex documentation. For example, in https://vuex.vuejs.org/en/actions.html, we have:

"Action handlers receive a context object which exposes the same set of methods/properties on the store instance, so you can call context.commit to commit a mutation."

I understand how to use it, and also that we can use destructuring if we only want to use "commit" from the context object, but I was hoping for a little more depth, just so I can better understand what is going on. As a start, I found a couple of ~8.5-year-old posts on the "context object" as a pattern: "what is context object design pattern?" and "Can you explain the Context design pattern?". However, specifically to Vuex, I'd love a better understanding of:

- What is the context object / what is its purpose?
- What are all the properties/methods that it is making available to use in Vuex?

From the documentation you pointed out you can read:

"We will see why this context object is not the store instance itself when we introduce Modules later."

The main idea of the context object is to abstract the scope of the current module. If you simply access store.state, it will always be the root state. The context object of actions and its properties/methods are described in the source code and also referenced in the API documentation. Here is the list: state, rootState, commit, dispatch, getters, and rootGetters.

"As a start, I found a couple ~8.5 year old posts on the 'context object' as a pattern..." I think you're reading into it too much.
I don't think the Vuex docs are referring to some specific kind of "context object" that is known and defined elsewhere; they just mean that the object that is passed to action handlers (and in other situations as described in the docs) is a custom object which they refer to as a "context" object by their own definition. The reason why they provide this object is that it contains properties that are specific to the module for that particular action handler.

According to the source code of Vuex, context is just a literal object with some properties from local, and other properties from store.

I'm running into an issue with Vue 3 (alpha 4). Inside the setup() function I am trying to read the parent component. As per the documentation on https://vue-composition-api-rfc.netlify.com/api.html#setup it should expose the parent via the context argument, either as a property of context.attrs or directly as parent (see the SetupContext bit under 'typing'). I don't find the documentation to be very clear on whether parent should be accessed directly from SetupContext, or via SetupContext.attrs, so I've tried both ways, but to no avail. Here's my issue: I can access the SetupContext and SetupContext.attrs (which is a Proxy) just fine when logging them. SetupContext.attrs exposes the usual proxy properties ([[Handler]], [[Target]] and [[IsRevoked]]), and when inspecting [[Target]] it clearly shows the parent property. I should just be able to access the property like I normally would with an object. Any ideas on what I'm doing wrong? I've created a codesandbox to reproduce the problem.

You can use getCurrentInstance (see the Vue documentation).
It's as easy as reading the parent property of the instance returned by getCurrentInstance(). Also, it is probably worth noting that the Vue composition API plugin exposes parent in the same way, but it is referenced as instance.$parent there.

I know this doesn't answer the question directly, but using provide/inject (https://v3.vuejs.org/guide/component-provide-inject.html) has helped me resolve this same issue, where I wanted to get a data attribute from the parent node and pass it to the rendered component, but could not access the parent anymore after upgrading from Vue 2 to Vue 3. Rather than trying to expose the parent, I passed a prop from its dataset down to the rendered component. Upon creating the app, I did the following. Then, inside the component, I could access the 'dataset' by doing the following. This is very stripped down, but shows my case nicely, I guess. In any case, if you're trying to do something similar and want to get data via a parent data attribute, you could look into provide/inject. Hope it helps anyone out there!

A lightweight, dynamic, themeable, multi-level, custom context menu component for Vue 3 applications. Import the context menu component and a theme CSS of your choice, then add a basic context menu to your app. Props are available for the VContextmenu, VContextmenuItem, VContextmenuSubmenu, and VContextmenuGroup components. Author: heynext. Live Demo: https://codepen.io/iqq800/pen/eYvYVOJ. Download Link: https://github.com/heynext/v-contextmenu/archive/refs/heads/main.zip. Official Website: https://github.com/heynext/v-contextmenu.

The React Context API provides a way to share properties that are required by many components (e.g., user settings, UI theme) without having to pass a prop through every level of the tree (aka prop drilling).
Although Vue.js does not provide the same abstraction out of the box, in this article we'll see that in Vue 3 we have all the tools we need to replicate the same functionality quickly. In this example, we look at how we can use this pattern to make certain information globally available everywhere in our entire application. The ProvideUserSettings component you see beneath provides a reactive state with some default values and an update() function for setting properties on the state object.

Next we take a look at how we can use the ProvideUserSettings component in our application. We probably need the settings in a lot of different components throughout our application. Because of that, it makes sense to put the provider at the root level inside of our App component. So we now have access to the user settings from anywhere in our component tree.

Above, we see how to consume the state of the injected context. In the following example, we explore how to update the state from any component in our application. This time we inject the update() function with the UserSettingsUpdateSymbol. We wrap the injected function in a new updateTheme() function which directly sets the theme property of our user settings object. When we click one of the two buttons, the user settings state is updated, and because it is a reactive object, all components which are using the injected user settings state are updated too.

Wrapping it up

In theory, we could skip wrapping our state with readonly() and mutate it directly. But this can create a maintenance nightmare, because it becomes tough to determine where we make changes to the (global) state.
Although Vue.js does not have the concept of Context built-in like React, as we’ve seen in this article, it is straightforward to implement something similar to that with Vue 3 provide/inject ourselves.
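Stripped of the Vue specifics, the provide/inject pattern the article builds on amounts to a keyed registry that an ancestor writes to and descendants read from. The sketch below models that idea in plain JavaScript with no Vue APIs at all; the symbol names follow the article's UserSettingsUpdateSymbol convention, while the registry itself is invented for illustration (in real Vue, provide() and inject() walk the component tree instead):

```javascript
// Plain-JS illustration of the provide/inject context pattern (no Vue involved).
// Symbols serve as injection keys, as in the article.
const UserSettingsSymbol = Symbol("UserSettings");
const UserSettingsUpdateSymbol = Symbol("UserSettingsUpdate");

const registry = new Map(); // stands in for the component tree's provided values

function provide(key, value) { registry.set(key, value); }
function inject(key) { return registry.get(key); }

// "ProvideUserSettings": provide reactive-ish state plus an update() function.
const state = { theme: "light" };
provide(UserSettingsSymbol, state);
provide(UserSettingsUpdateSymbol, (prop, value) => { state[prop] = value; });

// A deeply nested "component" updates the theme without prop drilling:
const update = inject(UserSettingsUpdateSymbol);
update("theme", "dark");

console.log(inject(UserSettingsSymbol).theme); // "dark"
```

Exposing a dedicated update() function instead of the raw state mirrors the article's readonly() advice: consumers can read everywhere, but mutations funnel through one place.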
So let's get the calculator out. It's halfway around the circle. So let's take our previous response, and that's just the previous answer, plus π, because we're going to go in the opposite direction. So this point right over here is going to be r sin θ, and we already know that that's equal to two. So by definition, what are the horizontal and vertical coordinates of this point right over here, where this line intersects the unit circle? Well, let's call our original length "x". Conversion between the two notational forms involves simple trigonometry. But to check the answers that we got above, we can cube them to see if we get the original number back. The θ that we are looking for is going in the opposite direction. Again, if we wanted only rotation, we'd multiply by "i". Figure below: magnitude vector in terms of real (4) and imaginary (j3) components. Check it out on a graphing calculator, where you can see it. And let's see, the real part is negative three, so we could go one, two, three to the left of the origin. Together they would form a line. Such plots are named after Jean-Robert Argand, although they were first described by the Norwegian–Danish land surveyor and mathematician Caspar Wessel. Operations with complex numbers are by no means limited just to addition, subtraction, multiplication, division, and inversion, however. We're going to go up two. Ah yes, the angles. Standard orientation for vector angles in AC circuit calculations defines 0° as being to the right (horizontal), making 90° straight up, 180° to the left, and 270° straight down.
We can establish a one-to-one correspondence between the points on the surface of the sphere minus the north pole and the points in the complex plane as follows. We could also multiply the numerator and denominator here by r. What's the change in size compared to our starting blue triangle? So, for example, we could say that the tangent of our angle, tan θ, is equal to sin θ over cos θ. We're going to scale everything by r. There are two points at infinity (positive and negative) on the real number line, but there is only one point at infinity (the north pole) in the extended complex plane. The magnitude (sometimes called the modulus) of a complex number is like the hypotenuse of a triangle, with lines drawn to the x (real) and y (imaginary) coordinates as the sides of the triangle (see above). Find the intersection points for the following sets of polar curves algebraically and also draw a sketch. Express each complex number in polar form.

4 + 4i
SOLUTION: Find the modulus r and argument θ: r = √(4² + 4²) = 4√2 and θ = arctan(4/4) = π/4. The polar form of 4 + 4i is 4√2(cos(π/4) + i sin(π/4)).

±2 + i
SOLUTION: First, write 1 in polar form. The polar form of 1 is 1(cos 0 + i sin 0). Now write an expression for the sixth roots.

Complex Numbers in Python | Set 1 (Introduction): the modulus and argument of the polar complex number are (, ); the rectangular form of the complex number is (+1j). Complex Numbers in Python | Set 2 (Important Functions and Constants).

Multiplying & Dividing Complex Numbers in Polar Form: we can use the angle that a vector makes with the real axis, along with the length of the vector, to write a complex number in polar form.

EXAMPLE 1: Finding the Polar Form of a Complex Number. EXAMPLE 4: Raising a Complex Number to an Integer Power — find the power and write the result in standard form.

SECTION: POLAR FORM AND DEMOIVRE'S THEOREM. DeMoivre's Theorem: if z = r(cos θ + i sin θ) and n is any positive integer, then zⁿ = rⁿ(cos nθ + i sin nθ).
Objectives:
- Find the absolute value of a complex number.
- Write complex numbers in polar form.
- Convert a complex number from polar to rectangular form.
- Find products of complex numbers in polar form.
- Find quotients of complex numbers in polar form.
- Find powers of complex numbers in polar form.
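The rectangular-to-polar conversion and DeMoivre's theorem above are easy to sketch in code. The helper below is illustrative only (it is not taken from any of the quoted materials); it converts 4 + 4i to polar form and uses DeMoivre's theorem to raise a complex number to an integer power:

```javascript
// Convert a complex number a + bi to polar form (r, theta).
function toPolar(a, b) {
  return { r: Math.hypot(a, b), theta: Math.atan2(b, a) };
}

// DeMoivre's theorem: (r(cos t + i sin t))^n = r^n (cos nt + i sin nt).
function dePower(a, b, n) {
  const { r, theta } = toPolar(a, b);
  const rn = Math.pow(r, n);
  return { re: rn * Math.cos(n * theta), im: rn * Math.sin(n * theta) };
}

const p = toPolar(4, 4);
console.log(p.r.toFixed(4));                       // 5.6569, i.e. 4*sqrt(2)
console.log(Math.abs(p.theta - Math.PI / 4) < 1e-12); // true: argument is pi/4

// (4 + 4i)^2 = 16 + 32i + 16i^2 = 32i, up to floating-point rounding:
const sq = dePower(4, 4, 2);
console.log(Math.round(sq.re), Math.round(sq.im)); // 0 32
```

Math.atan2 is used rather than arctan(b/a) so that the argument lands in the correct quadrant for negative real parts too.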
The lithosphere is a thermal boundary layer atop mantle convection and a chemical boundary layer formed by mantle differentiation and melt extraction. The two boundary layers may have different thicknesses everywhere. Worldwide, the thicknesses of thermal and chemical boundary layers vary significantly, reflecting the thermal and compositional heterogeneity of the lithospheric mantle. Physical parameters determined by remote geophysical sensing (e.g. seismic velocities, density, electrical conductivity) are sensitive to both thermal and compositional heterogeneity. Thermal anomalies are usually thought to have a stronger effect than compositional anomalies, especially at near-solidus temperatures when partial melting and anelastic effects become important. Therefore, geophysical studies of mantle compositional heterogeneity require independent constraints on the lithosphere thermal regime. The latter can be assessed by various methods, and I will present examples for continental lithosphere globally and regionally. Of particular interest is the thermal heterogeneity of the lithosphere in Greenland, with implications for the fate of the ice sheet and a possible signature of the Iceland hotspot track. Compositional heterogeneity of the lithospheric mantle at small scale is known from Nature's sampling, such as by mantle-derived xenoliths brought to the surface of stable Precambrian cratons by kimberlite-type magmatism. This situation is paradoxical, since "stable" regions are not expected to be subject to any tectono-magmatic events at all. Kimberlite magmatism should lead to a significant thermo-chemical modification of the cratonic lithosphere, which otherwise is expected to have a unique thickness (>200 km) and unique composition (dry and depleted in basaltic components). Nevertheless, geochemical studies of mantle xenoliths provide the basis for many geophysical interpretations at large scale.
Magmatism-related thermo-chemical processes are reflected in the thermal, density, and seismic velocity structure of the cratonic lithosphere. Based on joint interpretation of geophysical data, I demonstrate the presence of significant lateral and vertical heterogeneity in the cratonic lithospheric mantle worldwide. This heterogeneity reflects the extent of lithosphere reworking both by regional-scale kimberlite-type magmatism (e.g. the Kaapvaal, Siberia, Baltic and Canadian Shields) and by large-scale tectono-magmatic processes, e.g. those associated with LIPs and subduction systems such as in the Siberian and North China cratons. The results indicate that lithosphere chemical modification is caused primarily by mantle metasomatism, whose upper extent may represent a mid-lithosphere discontinuity. An important conclusion is that Nature's sampling by kimberlite-hosted xenoliths is biased and therefore non-representative of pristine cratonic mantle. I also present examples of lithosphere thermo-chemical heterogeneity in tectonically young regions, with highlights from Antarctica, Iceland, the North Atlantic, and the Arctic shelf. Joint interpretation of various geophysical data indicates that West Antarctica is not continental, as conventionally accepted, but represents a system of back-arc basins. In Europe and Siberia, an extremely high-density lithospheric mantle beneath deep sedimentary basins suggests the presence of eclogites in the mantle, which provide a mechanism for basin subsidence. In the North Atlantic Ocean, thermo-chemical heterogeneity of the upper mantle is interpreted as reflecting the presence of continental fragments, and the results of gravity modeling allow us to conclude that any mantle thermal anomaly around the Iceland hotspot, if it exists, is too weak to be reliably resolved by seismic methods.
Add Maven wrapper

This PR adds a Maven wrapper to simplify project setup.

Hi Vedran, I see you set the wrapper to use Maven 3.3.9. The problem with this is that current Quartz is still JDK6 compatible, and only Maven 3.2.5 or below is able to run with JDK6. We should default to that Maven version until we can move up the JDK.

Hey @zemian, that completely slipped my mind - I've updated the PR.

Vedran, unfortunately I think the wrapper itself requires a minimum of JDK7? Also I think mvnw.cmd line 124 has an invalid '#' char. It needs to be replaced by '@REM'

mvnw.cmd -version
'#' is not recognized as an internal or external command, operable program or batch file.
Exception in thread "main" java.lang.UnsupportedClassVersionError: org/apache/maven/wrapper/MavenWrapperMain : Unsupported major.minor version 51.0
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
at java.lang.ClassLoader.defineClass(ClassLoader.java:615)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
Could not find the main class: org.apache.maven.wrapper.MavenWrapperMain. Program will exit.

We prob should move away from JDK6 soon anyhow, since it's pretty old. But I imagine that will be a major release bump. If the wrapper does not support JDK6, then let's wait until we create a new branch for JDK7 for later inclusion, as I do like the Maven wrapper tool and its benefits.

Hey @zemian, you're right, the Takari wrapper indeed does require at least JDK7.
However, after giving this some thought, Quartz's compatibility with Java 6 doesn't necessarily need to affect the JDK version used to build the project, meaning we could move the project build itself to a newer version (preferably JDK8) but retain the runtime compatibility with Java 6. In order to do so we should:

- fix #60 (fails for me too)
- use Animal Sniffer to ensure compatibility with Java 6
- add the wrapper

Your thoughts on this? Also, the invalid '#' char is a Takari bug that was fixed in takari/maven-wrapper#27, I'll check why the script was generated using the old version.

I think mixing different versions of tooling and main code is confusing. I'd like to push the Quartz code to be JDK7 compatible instead. I will drop a mail to the dev-list to see if all agree to make 2.3.x start using JDK7 instead.

:+1: from me for bumping the minimum required version to Java 7. Regarding the project build, Spring projects have been built using JDK 8 while maintaining compatibility down to Java 6 for some time now, so it's a very common and tested scenario. You could at least consider it for Quartz too. Since this PR is blocked until the project gets built by JDK 7+ anyway, I'll update the Maven Wrapper config to target 3.3.9 again and fix the invalid '#' char issue. Is that OK with you?

Yep, thanks @vpavic !
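For reference, building on a newer JDK while enforcing Java 6 runtime compatibility is usually done with the Animal Sniffer plugin mentioned above. A sketch of what the pom.xml fragment might look like — the plugin and signature versions here are illustrative assumptions, not taken from the Quartz build:

```xml
<!-- Sketch: fail the build if any API outside the Java 1.6 profile is used -->
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>animal-sniffer-maven-plugin</artifactId>
  <version>1.16</version>
  <configuration>
    <signature>
      <!-- "java16" is the signature artifact describing the Java 1.6 API -->
      <groupId>org.codehaus.mojo.signature</groupId>
      <artifactId>java16</artifactId>
      <version>1.0</version>
    </signature>
  </configuration>
  <executions>
    <execution>
      <id>check-java6-compat</id>
      <phase>test</phase>
      <goals>
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

Combined with compiler source/target set to 1.6, this would let the project build under JDK 8 while catching accidental use of newer APIs.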
Timezone - Error in Offset

let timezone_ = timezones::get_by_name("America/New_York").unwrap();
let offset1 = timezone_.get_offset_primary().to_utc(); // -4.56 WRONG
let d = OffsetDateTime::now_utc();
let dt4 = PrimitiveDateTime::new(d.date(), d.time()).assume_timezone_utc(timezone_);
let offset2 = dt4.offset(); // -4.00 CORRECT
assert(offset1, offset2)

Both offsets are different. Am I doing something wrong, or is this a bug?

get_offset_primary does not return the current offset from UTC; instead it returns a default offset (check the doc for the function). Using assume_timezone_utc returns the current offset from UTC matching the given timestamp (in this case the timestamp is obtained from now_utc, which is used correctly for this function). Note that if you used a non-UTC primitive date time with assume_timezone_utc you could get a broken date time, typically one which does not account for DST.

Thanks for the reply.

let timezone = timezones::get_by_name("America/New_York").unwrap();
let d = OffsetDateTime::now_utc();
let dt = PrimitiveDateTime::new(d.date(), d.time()).assume_timezone_utc(timezone);
let offset = dt.offset(); // -4.00 CORRECT

What is the cleanest way of achieving this? My case is super simple: get a current OffsetDateTime for a provided timezone. Similar to OffsetDateTime::now_utc(), maybe something like OffsetDateTimeExt::now(timezone).
// Similar to OffsetDateTime::now_utc()
// Maybe like OffsetDateTimeExt::now(timezone)

If you don't need more calculations with the date time once converted to the target timezone, you can do the following:

OffsetDateTime::now_utc().to_timezone(timezones::db::america::NEW_YORK)

Thanks for your help and your wonderful library :)

// Accepted for now
let timezone = timezones::get_by_name("America/New_York").unwrap();
let d = OffsetDateTime::now_utc().to_timezone(timezone);

// Expected, maybe as a feature:
// "America/New_York" can be a variable.
let d = OffsetDateTime::now_utc().to_timezone("America/New_York");
// How to support multiple parameter types for the same fn:
// https://blog.rust-lang.org/2015/05/11/traits.html

I'm happy that you could find what you needed. What you're suggesting about the API is not really possible, as this would require support for overloading on the type of the argument; moreover you don't handle the case where get_by_name returns None. The function may return None if it cannot find the timezone you're requesting. By the way, why don't you use the timezone types directly?

- Timezone is saved in DB. It's a string.

// I agree that None can be produced based on input
// How about this?
OffsetDateTime::now_utc().try_timezone("America/New_York"); -> Result<OffsetDateTime, err>

I like the idea of a try_to_timezone. I was also thinking of something a bit more involved by creating a new ToTimezone<T> trait.

Can you try the new to_timezone from the master branch? I've done quite a bit of refactoring of the to_timezone function.

Sure!

time-tz = { git = "https://github.com/Yuri6037/time-tz", version = "3.0.0-rc.1.0.0", features = ["db_impl"] }
// had to include ToTimezone, and db_impl is required for Tz
use time_tz::{ToTimezone, timezones, TimeZone, Offset, Tz};

Tried with this and the code is working as expected. I see no changes except in the use statement.

The new use statement is expected, as the function to_timezone has now been extracted to its own trait.
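The distinction discussed above — a zone's default offset versus the offset valid at a specific instant — exists in every timezone library. For comparison only, here is the same idea in Python's standard zoneinfo module; this is not related to time-tz, just an illustration of why an offset is only meaningful together with a timestamp:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

tz = ZoneInfo("America/New_York")

# The UTC offset depends on the instant because of DST:
winter = datetime(2024, 1, 15, 12, 0, tzinfo=tz)
summer = datetime(2024, 7, 15, 12, 0, tzinfo=tz)
print(winter.utcoffset())  # UTC-5 (EST)
print(summer.utcoffset())  # UTC-4 (EDT)

# "Current offset" therefore means: convert *now* into the zone first,
# which is what to_timezone / assume_timezone_utc do in time-tz.
now_ny = datetime.now(timezone.utc).astimezone(tz)
print(now_ny.utcoffset())  # UTC-5 or UTC-4, depending on today's date
```

Asking a zone for "its" offset without a timestamp is exactly the get_offset_primary trap from the original report.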
Would it be possible to have a two-way sync from the SuiteCRM Calendar to Google Calendar, or from Google Calendar to SuiteCRM?

Yup, if you go into your profile > Advanced there is a section called Calendar Options; this should have all the information you need to link up the calendars. If you need more direction on how to link the iCal URL to the Google calendar, just ask.

I already copied the iCal integration URL to the Google calendar, but the meetings that I created on the SuiteCRM Calendar won't display in the Google Calendar.

Can you please provide the steps you took to add the iCal URL to your Google calendar?

Not sure if I have this correct, but here are my steps. Not sure if this allows for two-way sync? The meetings I put in SuiteCRM are not immediately showing up in Gmail; not sure what the refresh times are.

1. Log into Gmail
2. Select 'Calendar' from the top tabs
3. Select "Browse interesting calendars »" near the bottom right under the "Other Calendars" section
4. Top right under "More Tools" select "Add by URL"
5. Paste the URL from your SuiteCRM >> Profile >> Advanced tab >> iCal URL
6. Select Add Calendar

Okay, all done — where is it? Select "« Back to calendar" in the top left. Now you can see the calendar reference under "Other Calendars" and you can give it a color. After it's set up I can see, back under "Other Calendars", more options for notifications.

1. I log in to SuiteCRM.
2. Go to profile, then the Advanced tab.
3. Then I copy the iCal integration URL.
4. Paste it in "Add by URL" under Other Calendars in Google Calendar.

Would it be possible to sync Google Calendar events to the SuiteCRM calendar using iCal?

Running version 7.8.1 on a Debian wheezy server in our office. I opened up a www port and pasted the iCal link into Google. The connection was successful, but none of my tasks are syncing. The meetings I had entered in SuiteCRM synced over, but not the tasks. Is it possible to sync all entries in the calendar, or only meetings?
Also, even though the calendar is shared to Google, I cannot modify the calendar from Google. How can I edit permissions on the CRM side to allow two-way sync with Google? Too bad we can't edit our posts here, right? Anyway... found another problem with my shared calendar. Google isn't receiving the time zone from CRM. On Google the calendar time zone is (GMT+00:00) GMT, but in the CRM it's -4:00 America/New_York. I'm digging around in the SuiteCRM folder on our server for config files. I'd sure appreciate any help in solving these issues.

Is this planned to be fixed anytime? The two-way sync with Google calendar for all entries of SuiteCRM? That would be fantastic!

I'm giving it a shot. It's going to be quite a project for me.

Is this available as an installable package which we can add to an existing CRM? As there are no clear indications from SuiteCRM when they would merge this to master and allow everyone to use it, can the files be pointed out so we can cherry-pick them to add to an existing system? @benjamin.long Hats off to your effort and the contribution to the community.

I'm not sure where my problem is, so I'm gonna ask here, and if it needs to be asked elsewhere, maybe a mod can move it(?). First of all, I am running SuiteCRM 7.11.1 on an Ubuntu Linux system. I have read the docs but I have one huge stumbling block: there is no place in System Settings for the Google Authentication (there is no entry below the Proxy Settings as is shown in the docs). I can see the stuff regarding Google Suite, but something I'm doing there isn't working right either, as when I select the JSON file, it doesn't seem to upload, even when I click save, because it still says that it's unconfigured. Do I need to have something else configured to get the Google Authentication to show? Any insights appreciated. OK, I tried installing benjamin's GitHub version of this and I'm still not seeing the Authentication stuff on it either.
At this point I'm at a complete loss, because I have searched the code and the code for this seems to be there, but for some reason isn't enabled or showing.

It's been moved. Unfortunately whoever moved it didn't update the docs to reflect it. I'll see if I can find time to fix that. It's further down in the Admin page, where Google things have been given their own section. What you're looking for is under 'Google Calendar Settings'.

Well, at least I wasn't crazy trying it. I still can't figure out why it won't accept my credentials. I searched the code and it shows the URL as "/index.php?entryPoint=saveGoogleApiKey&getnew"; is that what I should use, or do I cut off the "&getnew"? (I'm assuming cut it off, but it's always wise to ask.) I'm not entering a partial URL here, I'm just wondering if I have done that part correctly as well. If all of that is correct then I have no idea why it won't configure, because I've followed the instructions.

The URL from the docs is correct; I'm not sure where you're seeing &getnew.

The "&getnew" is shown in SuiteCRM's code for the entryPoint. I searched for it to make sure I had entered the URL correctly. Well, I'm at a complete loss then. I have no clue why it won't configure. I've followed the instructions and it never says it's configured, and unless the calendar stuff for the user isn't in the Advanced profile tab any more, it's definitely not showing. Hmmph, any ideas beyond what I've tried? Do I need to configure the consent screen at all? I did so just in case but still get nothing (though I didn't tell it to verify my domain, etc.).

OK, well, I just discovered that the Google Authentication thing does NOT show up on the Administrator account but does show up on my user account (which is also an admin, so I didn't think to look), but it says "The current API token is DISABLED", so I'm guessing I have to see why it isn't actually getting the JSON file.
Members of the real-time community are encouraged to consider nominating colleagues for IEEE Senior Member, IEEE Fellow, and ACM Fellow.

The application procedures for IEEE Senior Member can be found here. For applications to be considered, they must be received, with all required references and resume, at least ten days prior to the meeting date (please follow this link for meeting dates).

The nomination procedures for IEEE Fellow can be found here. Note that all forms (nomination, reference, and endorsement) must be received no later than March 1st. Soliciting at least five, but no more than eight, references capable of assessing the nominee's contributions is required. A reference must be an IEEE Fellow in good standing.

The nomination procedures for ACM Fellow can be found here. Note that the nomination deadline is usually early September (e.g., September 7, 2016). The nominator must secure endorsements from 5 ACM members, preferably individuals who are themselves ACM Fellows or have otherwise achieved distinction in the field.

The following is a list of current IEEE Fellows with ties to the real-time systems community: CL Liu (1986), Kang Shin (1992), John Stankovic (1993), Lui Sha (1998), Krithivasan Ramamritham (1998), Jane Liu (1995), Insup Lee (2001), Wei Zhao (2001), Tei-Wei Kuo (2011), Alan Burns (2012), James Anderson (2012), Theodore Baker (2012), Giorgio Buttazzo (2012), R. Rajkumar (2012), Sanjoy Baruah (2013), Sang-hyuk Son (2013), Yi Wang (2015), Chenyang Lu (2016), Frank Mueller (2016), Marco Caccamo (2018), Giuseppe Lipari (2018), Xue Liu (2020). For further information, please see: https://services27.ieee.org/fellowsdirectory/menuALPHABETICAL.html.
The following is a list of current ACM fellows with ties to the real-time systems community: CL Liu (1994), John Stankovic (1996), Krithivasan Ramamritham (2001), Kang Shin (2001), Lui Sha (2005), James Anderson (2013), Tei-Wei Kuo (2015), Insup Lee (2017), Frank Mueller (2018), Tarek Abdelzaher (2019), Chenyang Lu (2020), Wang Yi (2020). For further information, please see: https://awards.acm.org/fellows and https://en.wikipedia.org/wiki/List_of_fellows_of_the_Association_for_Computing_Machinery. Please contact Tei-Wei Kuo (email@example.com) if you have any questions.
AreaMapper is a tool which can help you prepare your business data for use in the CleverMaps platform. AreaMapper computes whether an intersection exists. An example scenario could be mapping the points of your customers to administrative units. Then you can calculate metrics for your customers in the context of city districts, cities, counties and so on (e.g. count of customers in each city). You can map points to your own area data (e.g. delivery zones) or you can use an already prepared data dimension from us (e.g. administrative units). Both points and areas need to use the same coordinate system (e.g. WGS84). AreaMapper is CSV based (input and output); the geometry of points has to be stored in latitude/longitude columns, and the geometry of polygons has to be stored as a WKT string (https://en.wikipedia.org/wiki/Well-known_text_representation_of_geometry).

AreaMapper is distributed as a Docker image, so you need to have the Docker platform installed. You can get Docker here. AreaMapper is publicly available on Docker Hub: https://hub.docker.com/r/clevermaps/areamapper. Use the following command to get the image:

There are a few steps you need to do before running AreaMapper:

- Create an empty directory on your local drive (e.g. /home/user/clevermaps/areamapper).
- Create a configuration file for AreaMapper and save it as a JSON file into the folder. Please check the examples below in Configuration to get an idea of how the configuration file should look.

Use the following command to run AreaMapper:

! Change /home/user/clevermaps/areamapper to your directory path !

input_points.filename: Full path to the input CSV file with points to map. E.g. '/home/user/data/points.csv'.
input_points.delimiter: Delimiter used in the CSV file.
input_points.geom_field_x: Name of the column which contains X coordinates (longitude). E.g. 'lng' or 'x'.
input_points.geom_field_y: Name of the column which contains Y coordinates (latitude). E.g. 'lat' or 'y'.
input_points.join_fieldname: Name of the column which contains a key to pair with areas (optional). If specified, then only points and areas with the same values of the key column are compared.
input_areas.filetype: Available values are 'csv' or 'candim'. If you are mapping to your own file with area data, then set the value to 'csv'. If you are mapping to an already prepared data dimension from CleverMaps, then set the value to 'candim'.
input_areas.filename: In case of input_areas.filetype = 'csv', set the full path to the file with area data. Otherwise, in case of input_areas.filetype = 'candim', set the name of the CleverMaps data dimension. Available dimensions are:
input_areas.delimiter: Delimiter used in the CSV file. Set only in case of input_areas.filetype = 'csv'.
input_areas.geom_fieldname: Name of the column which contains the geometries of areas. Set only in case of input_areas.filetype = 'csv'.
input_areas.primary_key_fieldname: Name of the column which contains the primary key of areas. These values will be mapped to points. Set only in case of input_areas.filetype = 'csv'.
input_areas.join_fieldname: Name of the column which contains a key to pair with points (optional). If specified, then only points and areas with the same values of the key column are compared. Set only in case of input_areas.filetype = 'csv'.
output.filename: Full path to the output CSV file. E.g. '/home/user/data/points_areas.csv'.
output.delimiter: Delimiter to use in the output file.
output.not_matched_value: Default value of the area primary key for not-matched rows. E.g. '0'.
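Putting the keys above together, a minimal configuration file might look like the following. Note this is an illustrative sketch assembled from the documented keys — the exact nesting and the sample values (paths, column names) are assumptions, not an official example:

```json
{
  "input_points": {
    "filename": "/home/user/data/points.csv",
    "delimiter": ",",
    "geom_field_x": "lng",
    "geom_field_y": "lat"
  },
  "input_areas": {
    "filetype": "csv",
    "filename": "/home/user/data/areas.csv",
    "delimiter": ",",
    "geom_fieldname": "wkt",
    "primary_key_fieldname": "area_id"
  },
  "output": {
    "filename": "/home/user/data/points_areas.csv",
    "delimiter": ",",
    "not_matched_value": "0"
  }
}
```

The optional join_fieldname keys can be added to either section when points and areas should only be compared within matching key values.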
Assess nf-core/sarek

We need to review what https://nf-co.re/sarek can do to determine:
- if it could be used as is
- if it could be used with modifications
- if we'd rather extract and replicate some functionality here

In particular, we are interested in:
- variant-calling for a single sample on long reads (preferably with DeepVariant)
- variant-calling for a single sample on short reads
- joint calling for multiple samples on short reads

Task list
[ ] Trial nf-core/sarek on some test data (cf #81)
[x] Talk to the Sanger HGI team about nf-core/sarek vs their fork wtsi-hgi/sarek
[x] Check how to get per-sample gVCF (or VCF) from sarek
[x] Check how we get the multi-sample gVCF from sarek
[x] Check if/how bcftools (or vcftools) can convert a gVCF to VCF
[x] Check if/how bcftools (or vcftools) can extract 1 sample from a multi-sample gVCF
[ ] Check how we control the sharding / "intervals" in sarek

Tried the following commands, which either start from the beginning or start from variant calling:

nextflow run nf-core/sarek -profile singularity --outdir /global/scratch/users/hangxue/otter/sarek/cram_test --input /global/scratch/users/hangxue/otter/otter_cram.csv --genome null --igenomes_ignore --fasta /global/scratch/users/hangxue/otter/genomes/GCA_902655055.2_mLutLut1.2_genomic.fna --step variant_calling --skip_tools baserecalibrator --joint_germline --tools haplotypecaller

[hangxue@n0133 hangxue]$ nextflow run nf-core/sarek -profile singularity --outdir /global/scratch/users/hangxue/otter/sarek/fastq_test --input /global/scratch/users/hangxue/otter/otter_fastq.csv --genome null --igenomes_ignore --fasta /global/scratch/users/hangxue/otter/genomes/Lutralutra_chr1.fna --skip_tools baserecalibrator --joint_germline --tools haplotypecaller

Both have the same error. Troubleshooting in progress.
"ERROR ~ Cannot get property 'baseName' on null object -- Check script '/global/home/users/hangxue/.nextflow/assets/nf-core/sarek/./workflows/sarek/../../subworkflows/local/bam_variant_calling_germline_all/main.nf' at line: 133 or see '.nextflow.log' file for more details"

I think the above error is related to haplotypecaller, because the following command is able to start fine. Note that GATK's HaplotypeCaller, Sentieon's DNAscope or Sentieon's Haplotyper should be specified as one of the tools when doing joint germline variant calling. Troubleshooting in progress.

nextflow run nf-core/sarek -profile singularity --outdir /global/scratch/users/hangxue/otter/sarek/fastq_test --input /global/scratch/users/hangxue/otter/otter_fastq.csv --genome null --igenomes_ignore --fasta /global/scratch/users/hangxue/otter/genomes/Lutralutra_chr1.fna --skip_tools baserecalibrator

Related discussions in sarek that talk about joint calling and why n+1 requires rerunning the HaplotypeCaller for all samples: https://github.com/nf-core/sarek/issues/755 and https://github.com/nf-core/sarek/issues/868 and https://github.com/nf-core/sarek/pull/1172

tl;dr: it seems that when no interval is used, sarek should be able to produce a per-sample gVCF file and then do joint calling. However, when there are intervals, one DB is used per interval and it is quite some work to merge/organize the DBs for n+1.

Right, I see. In other words, if the genome is small enough / the runtime is reasonable, then variant-calling could be done on the entire genome at once, i.e. without intervals, and then we'd naturally get gVCFs per sample.
I don't have any runtime data, but I guess the option to do intervals was introduced because calling variants on a 3 Gbp genome may take a while? My gut feeling is that we'll face at some point a genome that's large enough to cause a bottleneck. Re. the dbsnp issue, does the suggested workaround work for you?

I have not modified the code yet. Not sure if we want to fork the repo and make changes. Using a different caller, sentieon_haplotyper, is able to start the pipeline. sentieon_haplotyper should also be able to produce a gVCF file. See https://github.com/nf-core/sarek/pull/1007

[hangxue@n0028 hangxue]$ nextflow run nf-core/sarek -profile singularity --outdir /global/scratch/users/hangxue/otter/sarek/fastq_test --input /global/scratch/users/hangxue/otter/otter_fastq.csv --genome null --igenomes_ignore --fasta /global/scratch/users/hangxue/otter/genomes/Lutralutra_chr1.fna --skip_tools baserecalibrator --joint_germline --tools sentieon_haplotyper --sentieon_haplotyper_emit_mode gvcf

Update for the n+1 problem: wtsi-hgi/sarek gives an entry point between GATK HaplotypeCaller and GATK GenomicsDBImport. It seems it will require path(gvcf), path(intervals), path(gvcf_index), path(intervals_index) as the input for starting at GATK GenomicsDBImport.

Converting gVCF files into VCF using gvcftools:
gzip -dc sample.genome.vcf.gz | extract_variants | bgzip -c > sample.vcf.gz

Extracting a subset of samples from a multi-sample VCF:
bcftools view -s samplelist file.gz

Update for the sarek test run on otter CRAM data. See output at https://docs.google.com/presentation/d/1lo6wNoBlJaQJ8PfIUb7mtqX1CEn3Sj-gj4vgpVN04OY/edit#slide=id.g28127127451_0_29

I'm a bit confused by seeing a g.vcf.gz per individual. Did you run the HGI fork?

Update: Issue nf-core/sarek#1550 points out that using haplotypecaller without passing a --dbsnp will cause such an error.
However, this error was supposedly fixed four years ago in nf-core/sarek#182. FYI this is now fixed in release 3.4.4 of sarek.
Abstraction and generalization are frequently used together. Abstractions are generalized through parameterization to provide greater utility. In parameterization, one or more parts of an entity are replaced with a new name, which is then supplied as a parameter to the entity.

XP also uses increasingly generic terms for its processes. Some argue that these changes invalidate prior criticisms; others claim that this is actually watering the process down.

A summary of changes in R releases is maintained in various "news" files at CRAN. Some highlights are listed below for several major releases.

It is clear that whether you are pursuing economics in your undergraduate degree or continuing with the subject at postgraduate level, you may be given assignments on various topics. Before working on your assignment, it is necessary for you to understand the basic concepts of incentives, opportunity cost, marginal considerations, etc. Concepts must be applied carefully and with good reason; you should be able to logically explain why you make a property public or private, or a class abstract. In addition, when architecting frameworks, the OOP

The latest version of this e-book is always available, free of charge, for download and for online use at the web address:

This module wraps up the course with a mini project that ties together the different techniques, skills, and libraries you have acquired over the course! Using data on the popularity of various baby names in the United States over the past several years, you will be able to compare different names' popularity over time.

in which some or all operators like +, - or == are treated as polymorphic functions and therefore have different behaviors depending on the types of their arguments.

by their exceptional service. I was given the toughest Android project. But it was obligatory and mandatory to complete this project, as it was my final-year project. I had only 5

In January 2009, the New York Times ran an article charting the growth of R, the reasons for its popularity among data scientists, and the threat it poses to commercial statistical packages such as SAS.[77] Commercial support for R

Then don't bother studying it; return to it when you're ready to put in the effort to actually learn.

You are calling update_v with many parameters. One of these parameters is vs. However, that is the first time in that function that vs appears. The variable vs does not have a value associated with it yet. Try initializing it first, and your error should vanish.

A complete set of use cases largely defines the requirements for the system: everything the user can see and wants to do. The diagram below contains a set of use cases that describes a simple login module of a gaming website.

More than 6 years of coding experience in several domains and programming languages make us your service provider.
Resolve trait methods during stubbing

Right now, we do not support stubbing methods declared in traits. Consider this example:

trait A {
    fn foo(&self) -> u32;
    fn bar(&self) -> u32;
}

trait B {
    fn bar(&self) -> u32;
}

struct X {}

impl X {
    fn new() -> Self {
        Self {}
    }
}

impl A for X {
    fn foo(&self) -> u32 { 100 }
    fn bar(&self) -> u32 { 200 }
}

impl B for X {
    fn bar(&self) -> u32 { 300 }
}

#[kani::proof]
fn harness() {
    let x = X::new();
    assert_eq!(x.foo(), 1);
    assert_eq!(A::bar(&x), 2);
    assert_eq!(<X as B>::bar(&x), 3);
}

It is currently not possible to stub X's implementation of A::foo, A::bar, or B::bar. To do so, we'd need to do two things. First, we'd need to come up with a way to refer to trait methods in our kani::stub attributes. In rustc, these are stringified as <X as B>::bar, but paths in attribute arguments are not allowed to have spaces or symbols like < and > (they are simple paths). We could accept string arguments or come up with some other convention, e.g.:

#[kani::stub("<X as B>::bar", ...)]
// or
#[kani::stub(X__as__B::bar, ...)]

Second, we'd need to improve our path resolution algorithm to also search through trait implementations.

Adding the T-User label as a few users are hitting this.

What if we create something like:

#[kani::stub(trait_impl(Trait::function for Type), stub_function)]

I.e., the example would look like:

#[kani::stub(trait_impl(A::bar for X), empty_debug)]

I think it makes sense to use Rust's syntax, e.g. <Type<'lifetimes, Generics> as Trait>::method, for consistency; the tricky part is how we deal with lifetimes and monomorphization, which is almost independent of syntax. It may be necessary at least to distinguish 'a and 'static. On the flip side, if you have an instance impl<T> Trait for Type<T>, how does this get stubbed and how does it get resolved? There are a few options here and it is not clear to me which is preferable. Syntactic, e.g. you refer to this instance as <Type<T> as Trait>::method.
Note that T here must be the same name as in the impl. This is sadly brittle, and we also have to somehow figure out that T in the impl has that name. Only concrete types allowed: you cannot use e.g. <Type<T> as Trait>::method, but you have to use something like <Type<usize> as Trait>::method or <Type<String> as Trait>::method. This is limiting but could be a good first step. Allow something like for, e.g. for <T> <Type<T> as Trait>::method. This is probably the most general, as for instance for <G> <Type<G> as Trait>::method would still match the impl. Though I'm not sure how the resolution and matching would be implemented.

For sure it would, but per the issue description, the attribute expects a simple path. Thus, the type-as-trait syntax does not work.

I don't think so. The name of the attribute needs to be a simple path, yes, but the contents of the attribute is arbitrary. The attribute contents is a TokenTree which can contain, for instance, the <, > characters as well as an as ident. Ideally we could just use syn::parse to get an ExprPath, which would be exactly what we need. However, I don't know of a way to convert the rustc TokenTree to the proc_macro::TokenTree. After inspecting the rustc documentation a bit, it may be possible to use existing infrastructure here. Parser::new could be used to create an ad-hoc parser instance that consumes TokenTrees, then an Expr could be parsed, which we would expect to be of the Path variant. That would unblock this work.

Thanks @JustusAdam for the update. I wonder if we can just use that for parsing the attribute. We would also need to extend our name resolution to handle that case. In theory, users should be able to specify a simple path as well if there is no conflicting method name.

@JustusAdam, your tip was great! I was able to retrieve a syn::TypePath for each path. Unfortunately, implementing this will require a bit more work, since we will no longer be able to represent a stub with a single DefId.
@celinval that depends. If you restrict it to non-parameterized traits, then the Instance it resolves to should have no generic arguments and be basically just a DefId.

Not really... the type itself may also be parameterized.
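For readers unfamiliar with the notation being discussed, here is a minimal, self-contained Rust sketch (reusing the trait and struct names from the example above, with the Kani attributes omitted) showing how fully qualified syntax disambiguates the two `bar` implementations that a stub would need to target:

```rust
trait A {
    fn bar(&self) -> u32;
}

trait B {
    fn bar(&self) -> u32;
}

struct X;

impl A for X {
    fn bar(&self) -> u32 { 200 }
}

impl B for X {
    fn bar(&self) -> u32 { 300 }
}

fn main() {
    let x = X;
    // `x.bar()` would be ambiguous here with both traits in scope,
    // so the caller must name the trait explicitly:
    assert_eq!(A::bar(&x), 200);        // trait-qualified path
    assert_eq!(<X as B>::bar(&x), 300); // fully qualified syntax
}
```

This `<X as B>::bar` form is exactly the stringified shape rustc uses for such methods, which is why it cannot fit into a simple attribute path without some encoding.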
GITHUB_ARCHIVE
Added ability to use external player to open video streams

Hello. Personally I've found that the Twitch video streams do not work reliably for me. That's why I added the possibility to use an external video player for opening the streams instead of opening them in the web browser. This works very well for VLC, and the code I have added does the job for me. If you want to merge this code, it probably must be polished a bit. Greetings

That's what I meant by polishing it a bit. I thought that I could add another textbox in the preferences, "Arguments" or something like that, where you can list the needed arguments for the external player, where for example {url} is substituted with the URL. But to be honest, I don't know any other video player that is used as much as VLC... Also, how many of them would be able to parse a Twitch URL? An alternative would be making this feature only available for VLC?

I am on it right now. I think I'll actually add that textbox... What I also dislike is that if you disable the checkbox, the other controls disappear. That's bad from a user's perspective. Disabling the controls is much more convenient.

Opening VLC 2.2.1 with vlc.exe https://www.twitch.tv/videos/99905481 doesn't do anything... I'm going to check out version 3...

Ok, VLC 3 seems to work.

The alternative would be to make it work with more media players: download the m3u8 for the quality you want and fix it (append the URLs in front of the file names). After that you can open the file in VLC. I've done this in Python and it should work for VLC 2 also...
but I don't know if it's worth the hassle:

```python
import sys
import requests
import re
import vlc_wrapper

ACCESS_TOKEN = "https://api.twitch.tv/api/vods/{}/access_token"
M3U8 = "https://usher.ttvnw.net/vod/{}?nauthsig={}&nauth={}&allow_source=true&player=twitchweb&allow_spectre=true&allow_audio_only=true"


class quality_group():
    def __init__(self, bandwidth, url):
        self.bandwidth = bandwidth
        self.url = url


def get_best_quality_url(id):
    access_token_request = ACCESS_TOKEN.format(id)
    access_token_request_headers = {'Client-ID': "xxx"}
    access_token = requests.get(access_token_request,
                                headers=access_token_request_headers).json()
    token = access_token["token"]
    sig = access_token["sig"]
    m3u8_request_string = M3U8.format(id, sig, token)
    m3u8 = requests.get(m3u8_request_string).text
    m3u8_lines = m3u8.split("\n")
    bandwidth_re = re.compile(r"BANDWIDTH=([\d]+)")
    qualities = []
    for i, line in enumerate(m3u8_lines):
        if line.startswith("#EXT-X-STREAM-INF:"):
            bandwidth_re_match = bandwidth_re.search(line)
            bandwidth = int(bandwidth_re_match.group(1))
            url = m3u8_lines[i + 1]
            qualities.append(quality_group(bandwidth, url))
    best_quality = max(qualities, key=lambda q: q.bandwidth)
    return best_quality.url


def get_fixed_m3u8(url):
    url_without_file = url[:url.rfind("/") + 1]
    lines_full = requests.get(url).text
    lines = lines_full.split("\n")
    fixed_lines = []
    for line in lines:
        if ".ts" in line:
            fixed_lines.append(url_without_file + line)
        else:
            fixed_lines.append(line)
    return "\n".join(fixed_lines)


def main():
    video_id = sys.argv[1]
    best_quality_url = get_best_quality_url(video_id)
    m3u8 = get_fixed_m3u8(best_quality_url)
    file_name = "./streams/" + video_id + ".m3u8"
    m3u8_file = open(file_name, "w")
    m3u8_file.write(m3u8)
    vlc_wrapper.play_file(file_name)


if __name__ == "__main__":
    main()
```

Honestly I think that's overkill. Not many users use TL as a "video browser" and actually click on the video to watch it.
Downloading is the main concern, and usually the users know exactly what they want when they open up TL.

I am correcting some things right now and ditching the arguments textbox... it works for VLC 3, and I mention in the help tooltip that this is the preferred way to go. If anyone is unhappy with the feature, we'll get an issue sooner or later anyways 😄 I just don't want to spend more time on the feature knowing it could be sufficient the way it is (which would be great).

I am done with the changes... you can take a look at it. I'll merge it later and release 1.5.1, because I need to fix 2 bugs in 1.5.

This is exactly the way to go. Having it work with VLC 3 only does the job for most users who want to use this feature. Great changes to my code from you. Thank you. 👍

One little thing that I noticed: since you made a new section "Misc" in the preferences, maybe I could rename AppUseExternalPlayer in the preferences to MiscUseExternalPlayer, since I thought the naming was based on sections?

Good point... let me do that real quick, I'll merge in a second.

Well... NOW we're done I think 🎉

Nice catch! Good one.
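The "Arguments" textbox idea discussed above can be sketched in a few lines: split a user-supplied argument template and substitute {url} before launching the external player. This is an illustrative sketch in Python, not the actual TwitchLeecher code; the function and parameter names are made up:

```python
import shlex
import subprocess


def build_player_command(player_path, arg_template, url):
    """Split the argument template shell-style and substitute {url}
    in each argument with the actual stream URL."""
    args = [arg.replace("{url}", url) for arg in shlex.split(arg_template)]
    return [player_path] + args


def launch_player(player_path, arg_template, url):
    # Popen, so the calling UI is not blocked while the player runs.
    return subprocess.Popen(build_player_command(player_path, arg_template, url))
```

For example, build_player_command("vlc.exe", "--fullscreen {url}", "https://www.twitch.tv/videos/99905481") yields ["vlc.exe", "--fullscreen", "https://www.twitch.tv/videos/99905481"]. Passing the URL as a separate argv element (rather than string concatenation) avoids quoting problems.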
GITHUB_ARCHIVE
Main / Productivity / Hibernate-c3p0.jar jboss

Name: Hibernate-c3p0.jar jboss
File size: 509mb
HomePage: fleuristemag.com. Date: (Sep 01). Files: pom (3 KB), jar (40 KB), View All. Repositories: Central, JBoss Releases, Sonatype Releases. Used By.
HomePage: fleuristemag.com. Date: (Feb 09). Files: pom (3 KB), jar (40 KB), View All. Repositories: Central, JBoss Releases, Sonatype Releases, Spring.
MF fleuristemag.comal.C3P0ConnectionProvider.class fleuristemag.com.

The lib/required/ directory contains the JARs Hibernate requires. Provides integration between Hibernate and the C3P0 connection pool library. Hibernate works great for me until I attempt to include C3P0, at which point I run into [INFO] +- fleuristemag.com:jersey-bundle:jar:compile. c3p0 is not required with Hibernate; you can use other connection pools, but although I... CR1:compile [INFO] | \- fleuristemag.come:c3p0:jar:compile [INFO]. One way to avoid this kind of issue would be to be sure that c3p0's jar files are made available at the application server level, not subsidiary to it.

14 Apr: Final ✓ With dependencies ✓ Source of hibernate-c3p0 ✓ One click! Dependencies: jboss-logging, hibernate-core, c3p0. There are maybe... Project: fleuristemag.comate/hibernate-c3p0, version: Final - A fleuristemag.com JBoss / fleuristemag.com / hibernate-commons-annotations. Final. Final - Integration for c3p0 connection pooling into Hibernate O/RM. Final - fleuristemag.com JBoss / fleuristemag.comtence / hibernate-jpa-api. 0.

16 Jul: This is a tutorial on how to use the C3P0 connection pool framework with Hibernate. In order to integrate c3p0 with Hibernate you have to put fleuristemag.com to 16, fleuristemag.com. 24 Dec: To integrate c3p0 with Hibernate, you need fleuristemag.com; get it from fleuristemag.com.

More than an ORM, discover the Hibernate galaxy. Hibernate Tools: command line tools and IDE plugins for your Hibernate usages. More. You can use the following script to add fleuristemag.com to your project: Maven; Gradle; Sbt; Ivy; Grape; Buildr. jboss-logging · GA, compile. fleuristemag.com · hibernate-c3p fleuristemag.com fleuristemag.com
OPCFW_CODE
C# Is the Best, Most Awesome Language. Java, PHP, C, C++, Ruby Are INFERIOR STUFF

After a long time of programming, I can say for myself: C# is the best programming language and worth learning. The reasons are countless:

- The C# language itself has a lot of exciting things: static methods, partial classes, delegates, LINQ, lambda expressions... A simpler language like Java has no partial classes or delegates, and it took until Java 8 for it to imitate lambda expressions.
- C# is a strongly typed language: the parameters and results of a function are all typed objects. Every error due to mistyping a field name or function name, or using the wrong class type, is reported while writing the code, with no need to wait until runtime as in PHP and Python.
- C# comes with the .NET framework, which supports many things: creating Windows applications with WinForms or WPF, creating websites with WebForms or MVC.NET... Lower-level languages like C and C++ do not offer nearly that much.
- C# has the Visual Studio IDE and many powerful plug-ins. VS ships new releases at regular intervals, like FIFA. ReSharper supports refactoring and speeds up coding... What do the people who write PHP code with? Of course, some creepy things like Notepad++ or Sublime Text, where even the "Jump to Definition" function doesn't exist.

After reading this paragraph, there will probably be a few dozen people throwing tomatoes, rotten eggs, and enough bricks at me to build a mansion. Slow down; at least take the time to scroll down and read the article before throwing your bricks. Anyway, the comment box is located at the bottom of the page.

We see programming languages as a religion. In the past, I used to jump in and throw bricks when I heard people criticize C# and .NET. Between programmers, there is always endless debate about language and technology: which language is the strongest, which technology is the best. Language, which was just a tool, has now been upgraded to RELIGION.
Programmers divide into Java, PHP, and C# camps, each attacking the others. The level of fanaticism sometimes rivals that of football fans, K-pop fanatics, or ISIS fans. The disgusting controversy is rife online; you can try googling "Why C# sucks", "Why Java sucks", "Why PHP sucks", ... to see it.

When working with a language, a developer gets used to it and discovers many exciting things in it. Many people will think that their language is the best and can solve all problems (just as ISIS believes Islam is the best and all the words of the Supreme Being are right). When their preferred language is disparaged, they feel as if their own religion were offended. Feathers ruffled, they call friends and teammates of the same faith to jump in and stone the offender to death.

In essence, a language is only a tool. Language is just what we use; it does not shape us. To broaden your horizons, try learning different languages. You will be surprised to find that there are common concepts and patterns among them. (I've used MVC.NET, Struts2, and Django, three frameworks of three different languages, all based on the idea of MVC.)

To be fair, every language has its beauty:

- C and C++ make web development take quite a lot of time, but for embedded programming, game programming, or performance-critical work, it's hard to find their equal.
- PHP was poorly designed (it was created to write small websites), but it has many frameworks and a large, aggressive developer community. It is the number 1 choice if you want to create a fast, feature-rich, error-free website. (Typically: this blog is written on WordPress, which is itself written in PHP.)
- With C#/.NET, you have to install a lot of heavy and expensive things to use it. But it is used by a lot of companies because of its features, security, etc.

Stop arguing. After all, the important thing is not the language, but the ability to think logically, problem-solving skills, and system vision.
Customers will judge us by the product, by what they see, no matter what code you write. Did you stop using Facebook because it was written in PHP, the much-mocked language? NO. Did you abandon StackOverflow knowing it was based on MVC.NET, a stack supposedly both slow and expensive? OF COURSE NOT.

So judge a programmer by what he or she makes, not by the language they use. Instead of criticizing and arguing when someone disparages your preferred language, take the time to learn and share knowledge (by blogging, like me). Keep an objective view of programming languages; you will advance quickly and find more jobs (it is perfectly fine for a Java developer to jump over to Python). I used to hate PHP a lot, but after learning it, I found it to be quite interesting.
OPCFW_CODE
import Cocoa

typealias FileChangesRepo =
  BasicRepository & CommitReferencing & FileDiffing &
  FileContents & FileStaging & FileStatusDetection

/// Protocol for a commit or commit-like object, with metadata, files, and diffs.
protocol RepositorySelection: AnyObject
{
  var repository: any FileChangesRepo { get set }

  /// SHA for commit to be selected in the history list
  var oidToSelect: (any OID)? { get }
  /// Is this used to stage and commit files? Differentiates between staging
  /// and stash changes, which both have unstaged lists.
  var canCommit: Bool { get }
  /// The primary or staged file list.
  var fileList: any FileListModel { get }
}

/// A selection that also has an unstaged file list
protocol StagedUnstagedSelection: RepositorySelection
{
  /// The unstaged file list
  var unstagedFileList: any FileListModel { get }
  var amending: Bool { get }
}

extension StagedUnstagedSelection
{
  func counts() -> (staged: Int, unstaged: Int)
  {
    let indexChanges = fileList.changes
    let workspaceChanges = unstagedFileList.changes
    let unmodifiedCounter: (FileChange) -> Bool = { $0.status != .unmodified }
    let stagedCount = indexChanges.count(where: unmodifiedCounter)
    let unstagedCount = workspaceChanges.count(where: unmodifiedCounter)

    return (stagedCount, unstagedCount)
  }
}

extension RepositorySelection
{
  func list(staged: Bool) -> any FileListModel
  {
    return staged ? fileList
                  : (self as? StagedUnstagedSelection)?.unstagedFileList ?? fileList
  }

  func equals(_ other: (any RepositorySelection)?) -> Bool
  {
    guard let other = other
    else { return false }

    return type(of: self) == type(of: other) && oidToSelect == other.oidToSelect
  }
}

enum StagingType
{
  // No staging actions
  case none
  // Index: can unstage
  case index
  // Workspace: can stage
  case workspace
}

func == (a: (any RepositorySelection)?, b: (any RepositorySelection)?) -> Bool
{
  return a?.equals(b) ?? (b == nil)
}

func != (a: (any RepositorySelection)?, b: (any RepositorySelection)?) -> Bool
{
  return !(a == b)
}
STACK_EDU
On speed scaling via integer programming

We consider a class of convex mixed-integer nonlinear programs motivated by speed scaling of heterogeneous parallel processors with sleep states and convex power consumption curves. We show that the problem is NP-hard and identify some polynomially solvable classes. Furthermore, a dynamic programming algorithm and a greedy approximation algorithm are proposed to obtain a fully polynomial-time approximation scheme for a special case. For the general case, we implement an outer approximation algorithm.

This is the first book on the U.S. presidential election system to analyze the basic principles underlying the design of the existing system and those at the heart of competing proposals for improving the system. The book discusses how the use of some election rules embedded in the U.S. Constitution and in the Presidential Succession Act may cause skewed or weird election outcomes and election stalemates. The book argues that the act may not cover some rare though possible situations which the Twentieth Amendment authorizes Congress to address. Also, the book questions the constitutionality of the National Popular Vote Plan to introduce a direct popular presidential election de facto, without amending the Constitution, and addresses the plan's "Achilles' Heel." In particular, the book shows that the plan may violate the Equal Protection Clause of the Fourteenth Amendment to the Constitution. Numerical examples are provided to show that the counterintuitive claims of the NPV originators and proponents, that the plan will encourage presidential candidates to "chase" every vote in every state, do not have any grounds. Finally, the book proposes a plan for improving the election system by combining at the national level the "one state, one vote" principle – embedded in the Constitution – and the "one person, one vote" principle.
Under this plan no state loses its current Electoral College benefits while all the states gain more attention from presidential candidates.

This volume contains two types of papers: a selection of contributions from the "Second International Conference in Network Analysis" held in Nizhny Novgorod on May 7–9, 2012, and papers submitted to an "open call for papers" reflecting the activities of LATNA at the Higher School of Economics. This volume contains many new results in modeling and powerful algorithmic solutions applied to problems in:

- vehicle routing
- single machine scheduling
- modern financial markets
- cell formation in group technology
- brain activities of left- and right-handers
- speeding up algorithms for the maximum clique problem
- analysis and applications of different measures in clustering

The broad range of applications that can be described and analyzed by means of a network brings together researchers, practitioners, and other scientific communities from numerous fields such as Operations Research, Computer Science, Bioinformatics, Medicine, Transportation, Energy, Social Sciences, and more. The contributions not only come from different fields, but also cover a broad range of topics relevant to the theory and practice of network analysis. Researchers, students, and engineers from various disciplines will benefit from the state of the art in models, algorithms, technologies, and techniques, including new research directions and open questions.

In recent years, as a result of the increase in environmental problems, green logistics has become a focus of interest for researchers, governments, policy makers, and investors. In this study, a cumulative multi-trip vehicle routing problem with limited duration (CumMTVRP-LD) is modelled, taking into account the reduction of CO2 emissions. In classical vehicle routing problems (VRP), each vehicle can perform only one trip.
Because of the high investment costs of additional vehicles, organizations allow the vehicles to perform multiple trips, as in multi-trip vehicle routing problems (MTVRP), which reflects real requirements better than the classical VRP. This study contributes to the literature by using a mixed integer programming (MIP) formulation and a simulated annealing (SA) based solution methodology for CumMTVRP-LD, which considers the minimization of fuel consumption as the objective function. According to preliminary computational results using benchmark problems from the literature, the proposed methodology obtained promising results in terms of solution quality and computational time.

This paper investigates a three-stage supply chain scheduling problem in the application area of aluminium production. In particular, the first and third stages involve two factories, i.e., the extrusion factory of the supplier and the aging factory of the manufacturer, where a serial batching machine and a parallel batching machine respectively process jobs in different ways. In the second stage, a single vehicle transports jobs between the two factories. In our research, both setup time and capacity constraints are explicitly considered. For the problem of minimizing the makespan, we formalize it as a mixed integer programming model and prove it to be strongly NP-hard. Considering the computational complexity, we develop two heuristic algorithms applied in two different cases of this problem. Accordingly, two lower bounds are derived, based on which the worst-case performance is analyzed. Finally, different scales of random instances are generated to test the performance of the proposed algorithms. The computational results show the effectiveness of the proposed algorithms, especially for large-scale instances.
OPCFW_CODE
In October 1985 I purchased the first Amiga sold in the state of Virginia. It was a transformative experience to have that level of technology on the desk in front of me as a young geek. The Amiga 1000 was miles beyond any other consumer computer available on the market at the time in several respects. It boasted preemptive multitasking, a palette of 4096 colors (at a time when EGA‘s 64-color palette was considered impressive), four channel stereo digital audio, and a custom chipset with a graphics co-processor that allowed for incredible on-screen animation. In fact, it was ahead of its time to such a degree that much of the tech press didn’t know what to make of it, and so it was largely considered to be an expensive game machine, sadly, which did not help its adoption (especially in the states). I loved that system, but software was very slow in coming for the new platform and after a while I put an ad in the newspaper, sold it, and moved on to another system (which was a routine I carried out for quite a few years). But, I never forgot the magic of that first Amiga. Many years later (in 2009), despite having an accelerated Amiga 2000 on the desk, I acquired another Amiga 1000 system to try and relive that 1985 magic. I enjoyed the machine greatly, but even though I expanded it with 2MB of FAST RAM and dual SCSI hard drives, it was always difficult to load it up with programs and put it to use, as compared to my fully networked Amiga 2000 with its 68020 accelerator, ethernet card, SD-based SCSI hard drive emulator, and HxC2001 floppy drive emulator. The Amiga 1000 was more of an island and, as such, it saw little use. Flash forward to late 2020 when I read a post by AmigaL0ve in which he described a new expansion device made specifically for the Amiga 1000. 
It was called the Parceiro ("parceiro" meaning "partner" in Portuguese) and offered a very impressive and useful 3-in-1 upgrade in a svelte side-expansion about the size of a Hershey bar — and all for a reasonable price. I ordered one immediately. The Parceiro was created by Amiga hobbyist and (now retired) one-time CIO of the United States Space Force, David Dunklee. An ardent fan of the Amiga 1000 and the landmark moment in computing history that it represented upon release, David designed the Parceiro to help bring this innovative system up to speed with other members of the Amiga family, for which upgrades are much more readily available. The Parceiro consists of a single circuit board that happens to be festooned with printed references to some of the best pieces of old-school nostalgia that will bring a smile to the face of anyone who was a child of the '80s. Sitting in a removable plastic enclosure, it attaches to the Amiga 1000's side bus-expander connector and offers the following features:

- 8MB of auto-configuring "FAST" RAM (the A1000 shipped with just 256K) with zero wait states (thanks to the use of SRAM rather than DRAM)
- A front-facing microSD card reader supporting a 2GB card (bundled) formatted as a FAT32 volume, allowing it to be read/written on a PC or Mac for moving files, using live in an emulator, etc.
- A Real-Time Clock (RTC) with onboard battery backup and a driver allowing it to be recognized by AmigaDOS at boot

The Parceiro replaces most of the expansion hardware shown in this post's introductory image. Since acquiring one of the first 5 units produced, I have had the pleasure of loading a huge number of new programs onto my Amiga 1000 by mounting the SD card in a PC-based emulator running Workbench 1.3 and running install after install of programs I'd always wanted to have on this venerable Amiga.
It’s been a huge game changer for my and my A1000 and it seems David hasn’t been content to let the device stagnate; he released a v1.1 unit during the summer that notable increased the SD card drive access speed and capacity, and is working on a v2.0 release that takes things much farther with an SD card reader 4x faster than the initial Parceiro as well as a 2MB flash ROM and a WiFi + Bluetooth TCP/IP Stack for simple TELNET/FTP/etc, file sharing, and HTTP GETs, etc. It has been a real pleasure to be able to use my Amiga 1000 so much more frequently and in so many new ways, and I can’t wait to get my hands on the upcoming, feature-expanded Parceiro II to take things to the next level. Global electronics shortages have forced David to put orders on hold, but I will post an update when the Parceiro II is available to order (sadly, towards the end of 2022 is the expectation) for other Amiga 1000 fans who want to get the most out of their exceptional and groundbreaking system. For further information or to inquire about ordering, send an email to David Dunklee at firstname.lastname@example.org.
OPCFW_CODE
import Foundation
import Logging
import PathLib
import ProcessController

class CancellableRecordingImpl: CancellableRecording {
    private let outputPath: AbsolutePath
    private let recordingProcess: ProcessController

    public init(
        outputPath: AbsolutePath,
        recordingProcess: ProcessController
    ) {
        self.outputPath = outputPath
        self.recordingProcess = recordingProcess
    }

    func stopRecording() -> AbsolutePath {
        Logger.verboseDebug("Stopping recording into \(outputPath)")
        recordingProcess.interruptAndForceKillIfNeeded()
        recordingProcess.waitForProcessToDie()
        Logger.debug("Recording process interrupted")
        return outputPath
    }

    func cancelRecording() {
        Logger.verboseDebug("Cancelling recording into \(outputPath)")
        recordingProcess.terminateAndForceKillIfNeeded()
        recordingProcess.waitForProcessToDie()
        if FileManager.default.fileExists(atPath: outputPath.pathString) {
            try? FileManager.default.removeItem(atPath: outputPath.pathString)
        }
        Logger.debug("Recording process cancelled")
    }
}
STACK_EDU
Windows Script Host Error: Document Is Undefined

Why do I get an error saying "'document' is undefined" when I run my script? It reads:

Script: C:\Documents and Settings\.....
Source: Microsoft JScript runtime error
Error: 800A1391, 'document' is undefined

The file itself is not the problem. You're using Windows (not "Window") Script Host to execute your .js file instead of opening it in an editor. The WSH JScript engine has various extensions, but there you have it: you are NOT using a browser. 'document' refers to the web page, an object that only exists inside a browser, so a script that references it fails under WSH.

It's a file association problem: Windows is trying to execute the file. That means if you ask Windows to execute "typedcore.js" from a command prompt, for example, Windows will run wscript.exe with "typedcore.js" as a command-line argument. What you should be doing is editing the script instead: right-click the file and choose "Edit". If the file sits on a remote host or network, do a "save as" and save the file to your local machine before trying to open it. And if the script is meant for a web page, attach it to the page and open that page in a browser rather than running the .js file directly.

A related problem: a Windows Script Host error that appears at every startup is often caused by a leftover virus script. In the Registry Editor, remove the Run entry that points to the script (e.g. c:\..\...\..temp\servieca.vbs), ensuring that your genuine scripts are not affected, then use Explorer to also delete the .vbs file itself. A scan with an antivirus, or a tool like Security Task Manager, can help find the culprit. Good luck.
OPCFW_CODE
What knowledge do I need in order to write a simple AI program to play a game?

I'm a B.Sc. graduate. One of my courses was 'Introduction to Machine Learning', and I always wanted to do a personal project in this subject. I recently heard about AIs trained to play games such as Mario, Go, etc.

What knowledge do I need to acquire in order to train a simple AI program to play a game? And what game do you recommend for a beginner?

This is what I know in Machine Learning so far:

- Introduction to the course and to machine learning
- K-Nearest Neighbor algorithm and K-means algorithm
- Statistical inference
- Gaussian Mixture Model (GMM) and Expectation Maximization (EM)
- Probably Approximately Correct (PAC) model, including generalization bounds and model selection
- Basic hyperplane algorithms: Perceptron and Winnow
- Support Vector Machines (SVM)
- Kernels
- Boosting weak learners to strong learners: AdaBoost
- Margin-Perceptron
- Regression
- PCA
- Decision Trees
- Decision Tree pruning and random forests

There are multiple ways to approach solving game-playing problems. Some games can be solved by search algorithms, for example. This works well for card and board games up to some level of complexity. For instance, IBM's Deep Blue was essentially a fast heuristic-driven search for optimal moves. However, probably the most generic machine learning approach for training an agent to perform a task optimally is reinforcement learning. Technically it is not one algorithm, but an extended family of related algorithms that all solve a specific formalisation of the learning problem.

Informally, Reinforcement Learning (RL) is about finding optimal solutions to problems defined in terms of an agent that can observe the state of an environment, take actions in that environment, and experience rewards which are somehow related to the state and action.
RL solvers need to be designed to cope with situations where rewards are received later than when important actions were taken, and this is usually achieved by the algorithm learning an internal expectation of later rewards associated with states and/or state-action pairs.

Here are some resources for studying Reinforcement Learning:

- Reinforcement Learning: An Introduction (Second Edition)
- Algorithms for Reinforcement Learning (PDF)
- Udacity Reinforcement Learning course
- David Silver's UCL lectures on Reinforcement Learning

You will find the subject itself is quite large, as more and more sophisticated variations of the algorithms become necessary as the problem to solve gets harder. Starting games for studying reinforcement learning might include:

- Tic-tac-toe (aka Noughts and Crosses): this can be solved easily using search, but it makes for a simple toy problem to solve using basic RL techniques.
- Mazes: in the reinforcement learning literature, there are many examples of "grid world" games where an agent moves in single N, E, S, W steps on a small board that can be populated with hazards and goals.
- Blackjack (aka 21)

If you want to work with agents for playing video games, you will also want to learn about neural networks, probably in some detail: you will need deep convolutional neural networks to process screen graphics.

A relatively new resource for RL is OpenAI Universe. They have done a lot of work to package up environments ready to train agents against, meaning you can concentrate on studying the learning algorithms, as opposed to the effort of setting up the environment.

Regarding your list of current skills: none of them are directly relevant to reinforcement learning. However:

- If you can understand the maths and theory from your previous course, then you should also be able to understand reinforcement learning theory.
- If you have studied any online or batch supervised learning techniques, then these can be used as components inside an RL framework.
Typically, they can be used to approximate a value function of the game state, based on feedback from successes and failures so far.

It highly depends on the type of game and on the information about the state of the game that is available to your AI. Some of the most famous game-playing AIs from the last few years are based on deep reinforcement learning (e.g. Playing Atari with Deep Reinforcement Learning), which is normal reinforcement learning (e.g. Q-learning) with a deep neural network as the value function approximation. These approaches receive the raw pixels of the game plus the points of the player, and output the actions of a game pad, much like a human. In order to do something like that, you need to master reinforcement learning (see Sutton and Barto's seminal book) and deep learning (see the book by Ian Goodfellow et al.), and then learn how to fuse them into deep reinforcement learning (search for "reinforcement learning" in any curated list of deep learning papers like this one). However, if the information about the game that is available to your AI is more structured than that (e.g. position of the player, description of the environment), you can do well with more classical approaches where you decompose your game into tractable problems and solve each one algorithmically, e.g. by searching with A*.

What you are looking for is called Reinforcement Learning. At my university, there is a complete course ($15 \cdot 3h = 45h$) just to introduce students to this topic. Here are my (mostly German) lecture notes on probabilistic planning. I would say this is definitely an advanced topic in machine learning. Topics to learn about:

- Markov Decision Processes (MDPs)
- Policy and Value iteration
- Project: Rock-Paper-Scissors / Tic-Tac-Toe
- Partially Observable Markov Decision Processes
- Project: Blackjack
- Reinforcement learning
- Q-Learning
- SARSA

Other simple games:

- Pong
- Inverted Pendulum
- Backgammon

Other resources:

- OpenAI Gym
- Book by Sutton
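To make the tabular methods above concrete, here is a minimal Q-learning sketch on a tiny one-dimensional "grid world". Everything in it is made up for illustration: the environment, the reward, and the hyperparameters are not from any of the resources listed, just a toy instance of the technique.

```python
import random

# Minimal tabular Q-learning on a tiny 1-D "grid world" (a sketch only:
# the environment, reward and hyperparameters below are all made up).
# States 0..4, start at 0; reaching state 4 ends the episode with reward +1.
ACTIONS = (-1, +1)                       # step left / step right
N_STATES, GOAL = 5, 4
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for _episode in range(500):
    s = 0
    for _step in range(10_000):          # cap so an episode can't run forever
        if random.random() < epsilon:    # epsilon-greedy action selection
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2
        if s == GOAL:
            break

# The greedy policy should now step right in every non-terminal state.
assert all(Q[(s, +1)] > Q[(s, -1)] for s in range(GOAL))
```

The same update rule is what the deep variants approximate with a neural network instead of a lookup table.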
Hi, I am a rookie programmer who barely started using LabVIEW. I basically finished all my controls, including the Kinect, so I was wondering if it was possible to add a simple + crosshair to the video camera feed. I have the camera connected directly to the router, not the cRIO. So is there a way I can do this without too much complication?

The simple way is to place tape on the screen. When you open the Driver Station, it will default to the same place every time.

Is there a way I can program something…

The first step is to open the Getting Started window of LabVIEW. It is up when you first launch, and it is available from the View menu. Create a project based on the Dashboard template. When built, that code will create the same thing as the initial installation. There are a couple of ways to add crosshairs. One: go to the panel, right-click in open space, and choose lines from the decoration palette. Place and size the lines, and color them if you need to. You could use rectangles if you wanted thicker lines. The second approach is to add the lines into the image programmatically. Go to the diagram and find the loop near the top that reads the MJPG; the next step it does is clear the overlays. Instead, you want to right-click on the Clear, choose Replace Overlay, and choose the Line function. Copy it and hook up another. Hook up the parameter values to describe the line endpoints and color. To test it, stop the other dashboard EXE and run your VI using the run arrow. To build it into an EXE, go to the bottom of the project window, open the Build Specifications, right-click, and Build. The resulting dialog will tell you where the EXE was saved. Copy it to the Program Files/FRC Driver Station folder and you are good to go. The DS will launch yours next time it starts up.

I did a similar technique last year. What I did was use the "draw rectangle" overlay block twice, one for each crosshair.
I did it this way so I could adjust the thickness… It also was linked to the tilt servo position, so it would automatically rotate the image when the camera passed 90 degrees… it also changed the crosshairs to a set of parallel lines that could be used to line up with the minibot pole.

You know the Kinect is only used during the first 15 seconds of the match, correct? I apologize if that seems a silly question.

It does seem very silly. I think you posted in the wrong thread.

After I create a dashboard project, how do I deploy it?

If I'm correct, you would open a program called the "FRC Driver Station" once your computer is connected to the robot. Once that's open, you would also launch a separate program called "FRC Dashboard", although I think the Driver Station would open up both. Then launch the cRIO Robot Project and deploy it (from RobotMain). You need to launch these in order to run the robot, so I hope I helped you in some way.

From the Project Explorer window: Build Specifications -> FRC PC Dashboard, right-click and choose Build. It's going to put the new Dashboard in the project folder under builds. Something like: My Documents\LabVIEW Data\builds\FRC Dashboard Project\FRC PC Dashboard. You can test it by running it from there, but for competition it'll need to get copied to the folder C:\Program Files\FRC Dashboard. There are several common variations:

- You can change the Destination Directory under Properties for the build specification. That will automatically put your Dashboard in the FRC Dashboard folder.
- You can change where the Driver Station looks for the Dashboard program. That's defined in an ASCII file located at C:\Users\Public\Documents\FRC\FRC DS Data Storage.ini (it shouldn't be edited while the Driver Station app is running).

Assuming you do development on a separate computer, you just need to copy the three files that the build will spit out into the dashboard folder on your driver station. If memory serves, it's just in the x86 programs folder.
Is there a way to disable the snap-to-grid when drawing lines on the panel?

Hit the <g> key to toggle it.

I was wondering, how did you get the lines to change from crosshairs to parallel? I would like to use the drawrect function, but I am unsure where it is. I found a drawrect int value under constants for squawk, but I am not sure how it helps.

I'm feeling really stupid, but where do I find the option to draw the rectangles? I have the Dashboard Main.vi Front Panel open, and when I right-click in an open area I get a Controls pop-up, but I don't see any drawing tools there or anywhere else. I haven't used LabVIEW at all - but we'd really like crosshairs! Thanks for any help you may be able to give me.

If you are on the panel, you will find the rectangle and lines in the decoration palette. If you are running on a pretty fast computer, this will be fine, but a more efficient method is available if programmed on the diagram. Give the decoration a try and post again if this isn't fast enough.

The problem is that I don't see anything called a decoration palette. The closest thing I found was under Vision, Machine Controls, IMAQ Rectangle. But when I added that, it just gave me a box where I could enter start and end coordinates. I didn't see how I could just draw a box, or show the box that I specified by coordinates. Remember that this is literally the first time I opened LabVIEW. You mentioned a speed issue too. Is the "decoration" applied to each frame of the video feed? We'll be running this on the standard Classmate, so that could be an issue. If I can't get this to work, there is always the rubber bands around the screen option. Thanks for the time you give to all of us!

In the upper right of the palette is a search button. Try searching for decoration or rectangle.
Full-text synchronization issues

I'm trying to save content from Omnivore to Obsidian, but I'm having trouble. I added {{{ content }}} to the template, as "Sync all your reading to Obsidian" says to, to tell the Omnivore plugin to pull down the full text, but so far it still can't pull the full content, only the article links from Omnivore. Here's the Article Template I've tried:

# {{{title}}}
#Omnivore

[Read on Omnivore]({{{omnivoreUrl}}})
[Read Original]({{{originalUrl}}})

{{{ content }}}

{{#highlights.length}}
## Highlights
{{#highlights}}
> {{{text}}} [⤴️]({{{highlightUrl}}}) {{#labels}} #{{name}} {{/labels}} ^{{{highlightID}}}
{{#note}}
{{{note}}}
{{/note}}
{{/highlights}}
{{/highlights.length}}

and also the shorter variant:

# {{{title}}}
#Omnivore

[Read on Omnivore]({{{omnivoreUrl}}})
[Read Original]({{{originalUrl}}})

{{{ content }}}

So far, none of these settings have saved the content on Omnivore to Obsidian in its entirety, just the link to the corresponding article, so what am I missing?

@juanbretti: I sometimes encounter similar situations. I feel that there is a problem with the network; for the same article, sometimes it can load the full text, sometimes there is only a link. I am also confused about this!

I wonder if it is due to a line in the recent commit. I experience the same issue anyhow.

I've also run into this problem recently and don't know how to solve it.

I experience the same problem.
Requests without fetching content

Without the {{{content}}} variable, I have 3 requests to the GraphQL endpoint, fetching all saved articles:

Request 1: [snipped] "pageInfo": { "hasNextPage": true, "hasPreviousPage": false, "startCursor": "", "endCursor": "15", "totalCount": 44 }
Request 2: [snipped] "pageInfo": { "hasNextPage": true, "hasPreviousPage": false, "startCursor": "15", "endCursor": "30", "totalCount": 44 }
Request 3: [snipped] "pageInfo": { "hasNextPage": false, "hasPreviousPage": false, "startCursor": "30", "endCursor": "44", "totalCount": 44 }

Requests when fetching content

With the {{{content}}} variable, the app just stops after two requests, never fetching the remaining articles:

Request 1: [snipped] "pageInfo": { "hasNextPage": true, "hasPreviousPage": false, "startCursor": "", "endCursor": "15", "totalCount": 44 }
Request 2: [snipped] "pageInfo": { "hasNextPage": true, "hasPreviousPage": false, "startCursor": "15", "endCursor": "30", "totalCount": 44 }

Just adding two cents: same issue. Removing {{ content }} and resyncing resulted in 135 articles appearing in Obsidian across 9 GraphQL requests. Otherwise, with {{ content }}, it stops on the 3rd request with only 60 articles fetched and no other console errors.

Exact same here; just ran into this problem today for the first time.

Alright, for me it's working again with {{{ content }}}.
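For context, the cursor-based paging visible in these traces can be sketched in isolation. The fetch_page function below is a stand-in for the real GraphQL call (all names here are illustrative, not the plugin's actual code); a sync that stops while hasNextPage is still true, as in the second trace, leaves articles behind:

```python
# Sketch of cursor-based pagination like the pageInfo traces above.
# fetch_page stands in for the real GraphQL request; names are illustrative.

def fetch_page(items, cursor, page_size=15):
    start = int(cursor) if cursor else 0
    end = min(start + page_size, len(items))
    return {
        "edges": items[start:end],
        "pageInfo": {
            "hasNextPage": end < len(items),
            "startCursor": cursor,
            "endCursor": str(end),
            "totalCount": len(items),
        },
    }

def fetch_all(items):
    results, cursor, requests = [], "", 0
    while True:                               # keep going while hasNextPage
        page = fetch_page(items, cursor)
        requests += 1
        results.extend(page["edges"])
        if not page["pageInfo"]["hasNextPage"]:
            break
        cursor = page["pageInfo"]["endCursor"]
    return results, requests

# 44 saved articles -> 3 requests with endCursor 15, 30, 44, as in the logs.
articles, requests = fetch_all([f"article-{i}" for i in range(44)])
assert len(articles) == 44 and requests == 3
```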
oracle: update a column with not null value

I want to update a column of a table:

UPDATE product prod
SET prod.prod_supplier_id = (SELECT s.prod_supplier_id
                             FROM supplier s
                             WHERE s.prodno = prod.prodno)

The subquery SELECT s.prod_supplier_id FROM supplier s WHERE s.prodno = prod.prodno must not return a null result; if it is null, the update should not be made. How do I do that?

First of all, create a backup table:

CREATE TABLE productBAK AS SELECT * FROM product;

Now you can use an update query like this:

UPDATE product prod
SET prod.prod_supplier_id = (SELECT s.prod_supplier_id
                             FROM supplier s
                             WHERE s.prodno = prod.prodno
                               AND s.prod_supplier_id IS NOT NULL)
WHERE prod.prodno IN (SELECT s1.prodno
                      FROM supplier s1
                      WHERE s1.prod_supplier_id IS NOT NULL);

The WHERE clause specifies which record or records should be updated. If you omit the WHERE clause, all records will be updated!

I see prod_supplier_id being a column in both product and supplier. Can you explain in which table the prod_supplier_id must not be null? Looking at your query, you want something like:

update product set prod_supplier_id = (select prod_supplier_id from supplier where prod_supplier_id is not null);

Issues you're facing: all product records will be updated with the same prod_supplier_id, and if the subselect statement returns more than one record - let's say you have 5 suppliers, so 5 prod_supplier_ids - the update statement will fail. Can you tell us: do you want all your products to be set to 1 supplier? If not, do you have a list of which product should have which supplier? R

Could you paste the output of the statements DESC PRODUCT and DESC SUPPLIER so we can see the table structure?

The prod_supplier_id is the primary key of the table supplier.

Hi Neila, could you post the output of DESC PRODUCT and DESC SUPPLIER please? Also, do you store the product IDs in the supplier table?

I store the product number in the table supplier; each supplier has one product.
When I get many products, if a product has a supplier I want to set its ID (the supplier_id) in the table product. But I don't want to make extra updates, because I have a trigger which inserts into another table whenever the table product is updated. I just want to get the supplier_id after verifying the WHERE condition: if there is a result, I'll update the table; if it is null, I won't make an extra update that sets a null result.

OK, it's a bit unconventional how your data model is set up. I would expect you to store product information in your product table, including a supplier_id, and fill the supplier table with supplier information, without a product_id column in the supplier table. When you set your tables up like that, you will have a one-to-many relationship between supplier and product: one supplier can reference many products, while the other way around, one product will always be linked to one supplier. Maybe you should read a bit about normalisation :-)
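The guard being discussed (update a row only when the correlated lookup yields a non-NULL supplier) can be sketched in a self-contained way. This SQLite illustration uses made-up tables and rows, not the poster's real schema; the EXISTS clause is what keeps rows untouched when the lookup would be NULL, so no trigger fires for them:

```python
import sqlite3

# Illustrative schema: product 1 has a supplier with an id, product 2's
# supplier has a NULL id, product 3 has no supplier row at all.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE supplier (prod_supplier_id INTEGER, prodno INTEGER);
    CREATE TABLE product  (prodno INTEGER, prod_supplier_id INTEGER);
    INSERT INTO supplier VALUES (100, 1), (NULL, 2);
    INSERT INTO product  VALUES (1, NULL), (2, NULL), (3, NULL);
""")

# Only rows whose correlated lookup yields a non-NULL supplier are updated.
con.execute("""
    UPDATE product
    SET prod_supplier_id = (SELECT s.prod_supplier_id
                            FROM supplier s
                            WHERE s.prodno = product.prodno)
    WHERE EXISTS (SELECT 1 FROM supplier s
                  WHERE s.prodno = product.prodno
                    AND s.prod_supplier_id IS NOT NULL)
""")

rows = con.execute(
    "SELECT prodno, prod_supplier_id FROM product ORDER BY prodno"
).fetchall()
print(rows)   # [(1, 100), (2, None), (3, None)]
```

Products 2 and 3 keep their original NULL instead of being overwritten, which is exactly the behaviour the question asks for.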
Spending a lot of time (answering) in the SharePoint newsgroups, I've noticed the "Failed On Start" error is an extremely frequent question. This post will probably help many people. When we start a (custom) SharePoint workflow (and sometimes even an out-of-the-box workflow), the infamous "Failed on start (retrying)" error may hit you. The most common reasons for this error are:

1. In most cases the main reason is that the CodeBeside class and the CodeBeside assembly in workflow.xml don't match the workflow class name & assembly strong name of the DLL registered in the GAC. "Unexpected Load Workflow Assembly: System.IO.FileNotFoundException: Could not load file or assembly xxxx" is what you will find in the SharePoint log. Use Reflector to compare your assemblies' strong name/class name, and don't forget the namespace!

2. The workflow-eventdelivery-throttle parameter is too low. To prevent web front ends (w3wp) from getting overrun by running too many workflow instances, there is a throttle limit. If more events than the throttle are already being processed, the newer events are enqueued as work items - they will be picked up by OWSTimer. If your workflows don't start immediately as expected, you probably have to increase this throttle limit:

stsadm -o setproperty -pn workflow-eventdelivery-throttle -pv "20"

(the default value is 15). By code:

SPWebApplication myWebApps = SPWebApplication.Lookup(new Uri(websiteUrl));
myWebApps.WebService.WorkflowEventDeliveryThrottle = 20;

It looks like in some situations a "Failed On Start" may be triggered and can be solved by increasing this value, which indeed is very strange: when w3wp.exe is overrun, all actions (including event deliveries) are supposed to be enqueued (and thus executed later) by the OWSTIMER.exe service.
The error message in the SharePoint log looks like this:

Windows SharePoint Services Workflow Infrastructure 936r Verbose RunWorkflow: No pending events - possibly targeted for async delivery

3. The transaction timeout value is too low. This can also generate a "Failed on Start", especially when a lot of tasks (several hundred) are created by the workflow, which it tries to store in a single SQL Server transaction. The information you will find in the SharePoint log file is something like this:

Workflow Infrastructure 72fg High Error in persisting workflow: System.Transactions.TransactionAbortedException: The transaction has aborted. ---> System.TimeoutException: Transaction Timeout --- End of inner exception stack trace --- at System.Transactions.TransactionStateAborted.CreateAbortingClone(InternalTransaction tx)

One possible workaround is to increase the timeout value in the web.config file. By default the value is set to 1 minute. You can increase the value to 5 minutes, for instance.

4. A repair of the .NET 3.0/.NET 3.5 Framework may be necessary. If you have SP1 for the .NET Framework installed, just go into Control Panel/Add Remove Programs and click the Change button on Microsoft .NET Framework 3.0 SP1. If you don't have the Service Pack, you can install it. The Change process takes about 2 minutes.

5. A more exotic one: Workflow Foundation performance counters not loaded. Reload them again:

lodctr "c:\Windows\Microsoft.Net\Framework\v3.0\Windows Workflow Foundation\perfcounters.ini"
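For reference, the System.Transactions timeout from point 3 is raised in web.config with a snippet along these lines (a sketch; the 5-minute value is illustrative and should be adjusted to your environment):

```xml
<configuration>
  <system.transactions>
    <!-- default is 1 minute; raise it when large workflows abort with
         System.TimeoutException: Transaction Timeout -->
    <defaultSettings timeout="00:05:00" />
  </system.transactions>
</configuration>
```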
using System.Collections.Generic;

namespace Piglet.Lexer.Construction
{
    // A token produced while parsing a regular expression, together with the
    // operator precedences used when ordering operators.
    internal sealed class RegexToken
    {
        private static readonly Dictionary<RegexTokenType, int> _precedences = new Dictionary<RegexTokenType, int>
        {
            [RegexTokenType.OperatorPlus] = 3,
            [RegexTokenType.OperatorMul] = 3,
            [RegexTokenType.OperatorQuestion] = 3,
            [RegexTokenType.NumberedRepeat] = 3,
            [RegexTokenType.OperatorConcat] = 2,
            [RegexTokenType.OperatorOr] = 1,
            [RegexTokenType.OperatorOpenParanthesis] = 0,
        };

        public RegexTokenType Type { get; set; }
        public CharSet Characters { get; set; }
        public int MinRepetitions { get; set; }
        public int MaxRepetitions { get; set; }

        // Precedence of this token's operator, looked up from the table above.
        public int Precedence => _precedences[Type];
    }

    internal enum RegexTokenType
    {
        OperatorOr,
        OperatorPlus,
        OperatorMul,
        OperatorQuestion,
        Accept,
        NumberedRepeat,
        OperatorOpenParanthesis,
        OperatorCloseParanthesis,
        OperatorConcat
    }
}
First, I'd like to make clear that this isn't exactly homework. I've been reading books about astronomy out of pure interest, and I've bought myself an introductory astronomy textbook and have been reading through it and answering all the questions. I've been doing fine... that is, until I reached a "difficulty spike", or my brain decided to quit on me. For the past 2 days, I've been trying to find out how to do this: "From the information in the figure that accompanies the Making Connections box in this chapter, estimate the speed with which the particles in the CME in the final two frames are moving away from the Sun". The pictures they show are precisely these ones, same order and all: http://sohowww.nascom.nasa.gov/gallery/images/large/nov00cme.jpg This question is under a section called "Figuring For Yourself", meaning they really don't indicate anywhere in the book how to do this. I imagine this was intended so you could have an instructor to clue you along. I've searched online and have come across this (and many other sites that seem to copy & paste it): http://soho.nascom.nasa.gov/classroom/cme_activity.html There are just a few things that I don't seem to understand (I've been self-teaching astronomy and mathematics; my math wasn't exactly stellar in high school, but I'm working hard to fix it). The SOHO site states: "Select a feature that you can see in all five images, for instance the outermost extent of the bright structure or the inner edge of the dark loop shape. Measure its position in each image." My problem is... from where do I start my measurement? Am I supposed to measure only the feature itself, or am I supposed to start from the "far" end of the Sun's diameter and measure up to my chosen feature? I imagine it's the feature only, but I am really unsure. I kept reading and saw this:

v = (s2 - s1)/(t2 - t1)

where s2 is the position at time t2, and s1 is the position at time t1.
Alright, looks simple (this is where my lack of math experience comes into play), but how exactly do I fit the time into that equation? Am I supposed to put it in minutes or hours? From 08:06 until 11:42 is 216 minutes. So I would be doing 216 - 0 (because my start time, t1, must be zero, right?). There's another question, but I don't want this to get lengthy and I want to be able to solve this first. I'm not looking for direct answers, I just need a clue. Am I on the right track by going to the SOHO site? Or is it much simpler? To be sure, the book does say how fast a CME travels. It says it "travels outward at about 300 km/s" (though I've read that they can be anywhere from 200-1000 km/s), which of course would make this question easy. But I assume they want it judged only from those pictures. I sincerely apologize if I've violated any forum rules, though I don't think I have.
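On the units question: the formula doesn't care whether you use minutes or hours, only that the two times use the same unit; distances in km and times in seconds give km/s directly. Here is a worked example with made-up measurements (the positions below are illustrative, not read off the actual SOHO frames):

```python
# Hypothetical measurements of one CME feature in the final two frames.
R_SUN_KM = 696_000                      # solar radius in km, used as a scale

s1 = 4.0 * R_SUN_KM                     # assumed position at t1 = 10:54
s2 = 6.5 * R_SUN_KM                     # assumed position at t2 = 11:42

dt_minutes = (11 * 60 + 42) - (10 * 60 + 54)   # 48 minutes elapsed
dt_seconds = dt_minutes * 60                   # 2880 s

v = (s2 - s1) / dt_seconds              # km per second
print(round(v))                         # ~604 km/s, a plausible CME speed
```

Note that only the time difference matters, so you never need an absolute t1 = 0; subtracting the two clock times is enough.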
Loading Models that require execution of third party code (trust_remote_code=True)

I am trying to load MPT using the AsyncLLMEngine:

engine_args = AsyncEngineArgs("mosaicml/mpt-7b-chat", engine_use_ray=True)
engine = AsyncLLMEngine.from_engine_args(engine_args)

But I am getting this error:

ValueError: Loading mosaicml/mpt-7b-chat-local requires you to execute the configuration file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option trust_remote_code=True to remove this error.

Is there any workaround for this, or could it be possible to add the option to trust remote code to EngineArgs?

Hi @nearmax-p, could you install vLLM from source? Then this error should disappear. Sorry for the inconvenience. We will release the next version very soon.

I see, thank you very much, this worked! One more issue I came across is that MPT-30B doesn't seem to load on 2 A100 GPUs. I used the following command:

engine_args = AsyncEngineArgs("mosaicml/mpt-30b-chat", engine_use_ray=True, tensor_parallel_size=2)
engine = AsyncLLMEngine.from_engine_args(engine_args)

And got the following response:

llm_engine.py:60] Initializing an LLM engine with config: model='mosaicml/mpt-30b-chat', tokenizer='mosaicml/mpt-30b-chat', tokenizer_mode=auto, dtype=torch.bfloat16, use_dummy_weights=False, download_dir=None, use_np_weights=False, tensor_parallel_size=2, seed=0)

But the model is never loaded properly and can't be called (I waited for 20+ minutes and the model had already been downloaded from the Hugging Face hub to my device). Have you encountered this before?

@nearmax-p thanks for reporting it. Could you share how large your CPU memory is? It seems such a bug occurs when the CPU memory is not enough. We haven't succeeded in reproducing the bug, so your information would be very helpful.

@WoosukKwon Sure! I am using an a2-highgpu-2g instance from GCP, so I have 170GB of CPU RAM.
This actually seems like a lot to me.

@nearmax-p Then it's very weird. We've tested the model on exactly the same setup. Which type of disk are you using? And if possible, could you re-install vLLM and try again?

@WoosukKwon Interesting. I am using a 500GB balanced persistent disk, but I doubt that makes a difference. I will try to reinstall and let you know what happens. Thanks for the quick responses, really appreciate it!

@nearmax-p Thanks! That would be very helpful.

Following up on the discussion: I ran into the same problem trying to load xgen-7b-8k-inst (I am not sure it is supported, but being based on LLaMA I think it should be). I have installed vLLM from source, as suggested, but when I run:

llm = LLM(model="xgen-7b-8k-inst")

I get:

File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 669, in from_pretrained raise ValueError( ValueError: Loading /home/ec2-user/data/xgen-7b-8k-inst requires you to execute the tokenizer file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option `trust_remote_code=True` to remove this error.

Where should I set trust_remote_code=True? Any feedback would be very welcome :)

@WoosukKwon I tested my code after reinstalling vllm (0.1.2); unfortunately, nothing has changed. Maybe I should have mentioned that I am working from an NVIDIA PyTorch Docker image. However, all other models run just fine.

@WoosukKwon now checking it outside of the container, will get back to you.

@nearmax-p If you are using Docker, could you try increasing the shared memory size (e.g., to 64G)?

docker run --gpus all -it --rm --shm-size=64g nvcr.io/nvidia/pytorch:22.12-py3

@WoosukKwon alright, it doesn't seem to be related to RAM, but to distributed serving. Outside of the container, I am facing the same problem, even with mpt-7b, when I use tensor_parallel_size=2. With tensor_parallel_size=1, it works.
I've used the default packages that were installed after installing vLLM; I've only uninstalled pydantic, but I'd assume that doesn't cause any issues.

@WoosukKwon Narrowed it down a bit. It is actually only a problem when using the AsyncLLMEngine.

from vllm.engine.arg_utils import AsyncEngineArgs
from vllm.engine.async_llm_engine import AsyncLLMEngine
from vllm.sampling_params import SamplingParams
from vllm.utils import random_uuid
import asyncio

engine_args = AsyncEngineArgs(model="openlm-research/open_llama_7b", engine_use_ray=True)
engine = AsyncLLMEngine.from_engine_args(engine_args)

sampling_params = SamplingParams(max_tokens=200, top_p=0.8)
request_id = random_uuid()
results_generator = engine.generate("Hello, my name is Max and I am the founder of", sampling_params, request_id)

async def stream_results():
    async for request_output in results_generator:
        text_outputs = [output.text for output in request_output.outputs]
        yield text_outputs

async def get_result():
    async for s in stream_results():
        print(s)

asyncio.run(get_result())

This script causes the issue. When writing an analogous script with the normal (non-async) LLMEngine, the issue didn't come up.

Hi @nearmax-p, we faced a similar issue - as a quick fix, setting engine_use_ray to False worked for us.
#![feature(test)]
#![allow(deprecated)]

extern crate test;
use test::{black_box, Bencher};

use ndarray::Zip;
use numpy::{npyiter::NpyMultiIterBuilder, PyArray};
use pyo3::Python;

fn numpy_iter(bencher: &mut Bencher, size: usize) {
    Python::with_gil(|py| {
        let x = PyArray::<f64, _>::zeros(py, size, false);
        let y = PyArray::<f64, _>::zeros(py, size, false);
        let z = PyArray::<f64, _>::zeros(py, size, false);
        let x = x.readonly();
        let y = y.readonly();
        let mut z = z.readwrite();

        bencher.iter(|| {
            let iter = NpyMultiIterBuilder::new()
                .add_readonly(black_box(&x))
                .add_readonly(black_box(&y))
                .add_readwrite(black_box(&mut z))
                .build()
                .unwrap();

            for (x, y, z) in iter {
                *z = x + y;
            }
        });
    });
}

#[bench]
fn numpy_iter_small(bencher: &mut Bencher) {
    numpy_iter(bencher, 2_usize.pow(5));
}

#[bench]
fn numpy_iter_medium(bencher: &mut Bencher) {
    numpy_iter(bencher, 2_usize.pow(10));
}

#[bench]
fn numpy_iter_large(bencher: &mut Bencher) {
    numpy_iter(bencher, 2_usize.pow(15));
}

fn ndarray_iter(bencher: &mut Bencher, size: usize) {
    Python::with_gil(|py| {
        let x = PyArray::<f64, _>::zeros(py, size, false);
        let y = PyArray::<f64, _>::zeros(py, size, false);
        let z = PyArray::<f64, _>::zeros(py, size, false);
        let x = x.readonly();
        let y = y.readonly();
        let mut z = z.readwrite();

        bencher.iter(|| {
            Zip::from(black_box(x.as_array()))
                .and(black_box(y.as_array()))
                .and(black_box(z.as_array_mut()))
                .for_each(|x, y, z| {
                    *z = x + y;
                });
        });
    });
}

#[bench]
fn ndarray_iter_small(bencher: &mut Bencher) {
    ndarray_iter(bencher, 2_usize.pow(5));
}

#[bench]
fn ndarray_iter_medium(bencher: &mut Bencher) {
    ndarray_iter(bencher, 2_usize.pow(10));
}

#[bench]
fn ndarray_iter_large(bencher: &mut Bencher) {
    ndarray_iter(bencher, 2_usize.pow(15));
}
feat: add option to define flutter sdk path or use fvm

Description

For projects that use fvm, it would be pretty handy for very_good commands to also respect the version defined in .fvm/fvm_config.json instead of always using the global SDK.

Proposal

Commands that should support this option: all those that use the Flutter API.

Option A: --use-fvm

very_good test --use-fvm

Pro: simplicity. Con: only works with fvm.

Option B: --path=./flutter/sdk/path

very_good test --path=./.fvm/flutter_sdk

Pro: completely customizable. Con: error-prone and more complex to use.

As a suggestion, I think we could go with both, since it doesn't seem like too much work. Any thoughts?

It would be great to be able to use FVM for running tests via very_good. We work on many projects with different Flutter versions, and it is relatively inconvenient to always adjust the global Flutter version first. In CI this is also not always possible.

@felangel @erickzanardo Any thoughts on this? Looking forward to working on a PR for this within this week.

@felangel @erickzanardo @renancaraujo Any thoughts on this? Looking forward to working on a PR for this within this week.

One concern that I have about this is breaking changes in the Flutter CLI. Imagine that an option from the CLI is removed or changed in a new Flutter version; we would need to change the CLI to conform to that change. But with fvm, users could specify an older version that the CLI doesn't support anymore, and an error would happen. One way to mitigate that is to implement a check inside the CLI for whether the Flutter version currently in use is supported by the current CLI, but I wonder how complicated that would be.

If I got that right, I think this problem would occur whether we support fvm or not, wouldn't it? I understand that if we want to keep supporting lower Flutter CLI versions (in case of breaking changes), we'd already need to implement some sort of check even without supporting fvm.
But indeed we could have a problem if fvm publishes a new version with breaking changes, which would require us to publish a very_good_cli version with breaking changes, or add a check to support both older and newer fvm versions.

+1 It would be great for CI/CD to add this kind of flag. I have an ADS pipeline running where fvm is defined but flutter is not in the PATH; it would help a bit to avoid exporting a new PATH for every execution...

Hello, I ran into a similar issue today with Very Good CLI and FVM. As we know, FVM lets us manage multiple Flutter versions, but there seems to be a glitch with the global FVM setting. I work on multiple Flutter projects, and not all of them use the same Flutter version. One of the projects is not on the latest Flutter; it is locked to 3.7.12. My global FVM instance was set to 3.13.4, and when I ran very_good packages get -r it seemed to use the global flutter ... which started resulting in package/build_runner failures, since the global Dart SDK was set higher than the project SDK. Seems like the global Flutter was being used instead of my local project Flutter. FVM stores a local copy of the Flutter SDK in the project directory. It would be great to be able to specify where very_good should look for the Flutter path.

Do any of you run into the issue where running very_good test --coverage gives you an error because it's trying to execute the tests inside of your fvm flutter directory? If so, have you figured out a workaround?

@supposedlysam-bb I don't actively use FVM, but I think FVM 3 might help to solve this issue. Have you tried it? Does it solve this issue for you?

Thanks all for the discussion on this topic. We're going to hold off on any further updates on this until we see FVM 3.0 get released, as we think it fixes the issue.

fvm 3.0 now released! https://github.com/leoafarias/fvm/blob/main/CHANGELOG.md

I ended up downloading v3.0.10 today and had to do the following to get it to work.
Not sure if this is available in previous versions or not, but this worked for me:

1. brew install or pub activate the latest version of FVM.
2. Run fvm config --cache-path '~/development/.fvm' to move the flutter_sdk into a different directory outside of my project.
3. Run fvm use for it to create the .fvmrc file and perform any migrations.
4. Run very_good test --coverage, and all tests passed without errors.

Just upgrading to FVM version 3.0 did not work, but setting the cache location allowed the .fvm/flutter_sdk to become a symlink and become ignored when running the cli.
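Both proposed flags boil down to a path-resolution step before invoking flutter. A minimal shell sketch of that resolution order (the function name and fallback order are assumptions for illustration, not very_good_cli's actual implementation): an explicit --path wins, then a project-local FVM symlink, then whatever flutter is on PATH.

```shell
#!/bin/sh
# Hypothetical resolution order, mirroring the proposal above:
# 1. an explicit --path argument,
# 2. the project-local .fvm/flutter_sdk symlink that FVM creates,
# 3. whatever `flutter` is on PATH (the current global behaviour).
resolve_flutter() {
  explicit_path="$1"
  if [ -n "$explicit_path" ]; then
    echo "$explicit_path/bin/flutter"
  elif [ -d ".fvm/flutter_sdk" ]; then
    echo ".fvm/flutter_sdk/bin/flutter"
  else
    echo "flutter"
  fi
}
```

A command like very_good test would then execute "$(resolve_flutter "$SDK_PATH")" test, so CI pipelines with no global Flutter on PATH still pick up the project-pinned SDK.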
Any user with admin privileges can migrate APIs from a local WSO2 API Manager (WSO2 APIM) environment to WSO2 API Cloud without having to recreate the APIs. Let's get started!

- Go to http://wso2.com/products/api-manager/ and download WSO2 API Manager 2.1.0 by clicking the DOWNLOAD button in the upper right-hand corner. Tip: To migrate APIs, your local environment must have the same API Manager version that the API Cloud runs on.
- Extract the ZIP file to a suitable location in your machine. Let's call this location <APIM_HOME>.
- To export the API that you will later create in the API Manager, you need the API Import/Export tool provided by WSO2. Download this WAR file and copy it to the <APIM_HOME>/repository/deployment/server/webapps folder. This deploys the API Import/Export tool in your API Manager server.
- Start the API Manager by executing the startup command for your platform (one command on Windows, another on Linux/Solaris/Mac OS).
- Sign in to the API Publisher in your local environment using the URL http://localhost:9763/publisher and credentials admin/admin. Create and publish an API with the following details. If you do not know how to create and publish an API, see Create and Publish an API.

Design tab:
  Name: PhoneTest
  Context: /phonetest
  Version: 1.0.0
  Visibility: Public
  API Definition: Click Next without entering anything, and the system will prompt you to add a wildcard resource (/*). Click Yes.
Implement tab -> Managed API:
  Production Endpoint: http://ws.cdyne.com/phoneverify/phoneverify.asmx
Manage tab:
  Tier Availability: Tip: You cannot import a tier to an environment where it is not supported, even if it is supported in the source environment. For example, the Unlimited tier is supported in WSO2 API Manager, but not in WSO2 API Cloud; therefore, you cannot import it to the Cloud.

You have now set up an instance of API Manager in your local environment and created an API in it. Let's export the API to a ZIP file.

- Install cURL in your local machine.
Using the command line (if you use a Mac/Linux) or an online Base64 encoder such as https://www.base64encode.org/, create a Base64-encoded string of the credentials of the API Manager, separated by a colon as <username>:<password>. In this example, it is admin:admin. If you use Mac/Linux, use the following command. echo -n <username>:<password> | base64 ex. echo -n admin:admin | base64 Tip: Only users with admin privileges can migrate APIs between environments using the API Import/Export tool. Navigate to a suitable location using the command-line or terminal and execute the following cURL command to export your API as a ZIP file. You have exported an API to a ZIP file. Let's import that to your tenant in WSO2 API Cloud. Log in to WSO2 Cloud by going to http://cloud.wso2.com, clicking the Sign In link and then selecting WSO2 API Cloud. You need to construct and then copy your username which would be <email_address>@<tenant_domain>. Create a Base64-encoded string of your API Cloud's credentials, separated by a colon as <username>:<password>. Execute the following command to import the API to the API Cloud: Tip: Make sure the name and context of the API that you are importing (e.g., PhoneTest and /phonetest) do not duplicate that of an existing API in the API Cloud. Sign in to WSO2 API Cloud and note that the API that you imported now appears in the API Publisher. Note that the API in the Cloud is in the CREATED state although it was in the PUBLISHED state in your local API Manager instance. This is done to enable you to modify the API before publishing it. In this tutorial, you created an API in an API Manager and exported that to WSO2 API Cloud without having to recreate the API from scratch.
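The Base64 step above is just standard encoding of <username>:<password> for an HTTP Basic Authorization header. A quick Python equivalent of the echo -n admin:admin | base64 command, using the tutorial's admin:admin example credentials:

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    # Encode "<username>:<password>" exactly as `echo -n user:pass | base64` would.
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return f"Authorization: Basic {token}"

print(basic_auth_header("admin", "admin"))
# → Authorization: Basic YWRtaW46YWRtaW4=
```

The `-n` flag in the shell command matters for the same reason the Python version encodes only the joined string: a trailing newline would change the Base64 output and break authentication.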
After Import Task, an error message is displayed for the next issues

When I use the import task button, I can view the problem commands, but when I select the problem button (hoping to go to the next issue), an error message appears:

TypeError: Cannot read property 'username' of null
    at http://localhost:8080/cvat-ui.9a9b3494bd04d60fdf16.min.js:414:1054586
    at Array.map ()
    at b_ (http://localhost:8080/cvat-ui.9a9b3494bd04d60fdf16.min.js:414:1053887)
    at Ko (http://localhost:8080/cvat-ui.9a9b3494bd04d60fdf16.min.js:352:57930)
    at gs (http://localhost:8080/cvat-ui.9a9b3494bd04d60fdf16.min.js:352:104169)
    at cl (http://localhost:8080/cvat-ui.9a9b3494bd04d60fdf16.min.js:352:96717)
    at sl (http://localhost:8080/cvat-ui.9a9b3494bd04d60fdf16.min.js:352:96642)
    at Qs (http://localhost:8080/cvat-ui.9a9b3494bd04d60fdf16.min.js:352:93672)
    at http://localhost:8080/cvat-ui.9a9b3494bd04d60fdf16.min.js:352:45314
    at t.unstable_runWithPriority (http://localhost:8080/cvat-ui.9a9b3494bd04d60fdf16.min.js:360:3844)

CVAT version:
Server: 1.5
Core: 3.13.3
Canvas: 2.5.0
UI: 1.21.1

Thanks for the help.

> When I use the import task button, I can view the problem commands, but when I select the problem button (hoping to go to the next issue), an error message appears:

Unfortunately, I don't understand your message. Please be more descriptive and provide some screenshots or a gif file.

@SandyChou84 Got your email and saw the error. Did you export the task on the same CVAT instance? I suspect that the user who created the issue does not exist on the instance where you imported the task.

My steps for Export Task:
1. "Request a review" and select the username.
2. Create the issues in frame 0 and frame 257.
3. "Save current changes".
4. "Submit the review".
5. "Export Task".

My steps for Import Task:
1. "Import Task".
2. Click "Issue" (the error happens).
3. "Request a review" and select the username.
4. Click "Issue" (the error happens).
5. "Submit the review".
6. Click "Issues" (the error happens).

I don't know how to click "Issues" without an error in the imported task. Thanks for your help.

Let's close this for now as outdated. If somebody can reproduce it in the newest version, please reopen.
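The stack trace points at reading .username from a null object, consistent with the maintainer's theory: an issue whose owner does not exist on the importing instance. A defensive rendering pattern for that situation is sketched below; the issue object shape here is hypothetical and not CVAT's actual data model.

```javascript
// Hypothetical issue objects: `owner` may be null after a cross-instance
// import, because the user who created the issue does not exist locally.
const issues = [
  { frame: 0, owner: { username: "reviewer1" } },
  { frame: 257, owner: null },
];

// Guard the lookup instead of assuming `owner` is always set.
const labels = issues.map(
  (issue) => (issue.owner && issue.owner.username) || "unknown user"
);

console.log(labels); // → [ 'reviewer1', 'unknown user' ]
```

An unguarded issues.map((issue) => issue.owner.username) reproduces exactly the "Cannot read property 'username' of null" TypeError from the report.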
In the distant past, friends relied on each other for their survival. They hunted together and defended each other against【1】_________(danger) animals and enemies. In those days, if you didn't have a friend, you would either starve, be eaten 【2】_________ killed. Nowadays, friendship isn't 【3】_________(exact) a matter of life and death. However, friendship is still of great importance and not having a friend is something to be【4】_________(concern) about. Most people look upon friends as someone they can depend on when they are going 【5】_________ times of trouble. In such times, friends provide them with emotional support and sometimes financial help. It is in these troubled times【6】_________ they find out who their true friends are. As an old saying 【7】__________(go), in times of prosperity, friends will be plenty; in times of【8】__________(suffer), not one in twenty. And there is another saying【9】_________ says you can hardly make a friend in a year, but you can easily upset one in an hour. So do your best to get along with and be grateful to all those 【10】__________ are willing to side with you even when you are in the wrong, as they are true friends and they are not easily come by.

【4】concerned — set phrase: be concerned about ("to be worried about"), so fill in concerned.
【5】through — set phrase: go through ("to experience"), so fill in through.
【8】suffering — expresses "facing hardship"; the pattern is in times of doing sth., so fill in suffering.
【10】who/that — all those is the antecedent and refers to people; what follows is an attributive clause missing both its subject and its relative word, so fill in who/that.

In English, the structure It is/was ... who/that is commonly used to emphasize a particular element of a sentence (usually the subject, object, or an adverbial). In this pattern, it has no lexical meaning; it merely introduces the emphasized element. If the emphasized element refers to a person, either who or that may be used; for any other element, only that is used. It is my mother who/that cooks every day. It was yesterday that Tom passed the maths exam.

1. When the emphasized element is the subject, the verb after who/that must agree with the original subject in person and number. It is I who am right. It is he who is wrong. It is the students who are lovely.

2. Even when the emphasized element is an adverbial of time, place, or reason, do not use when, where, or because; use that. It was because of the heavy rain that he came late.

3. When the emphasized part includes both people and things, use that, not who. It was the things and people that they remembered that they were talking about.

4. Distinguishing attributive clauses from the emphatic structure: some attributive clauses look much like the emphatic structure and are easily confused. Remove It is/was ... that; if the remaining sentence still reads smoothly and is complete, it is the emphatic structure; if not, it is an attributive clause. It was three years ago that he went to America for further study. Removing It was ... that gives Three years ago he went to America for further study, which is smooth and complete, so this is the emphatic structure.
C++ variadic constructor - pass to parent constructor

I have the following code:

class A {
public:
    // Constructor
    A(int count, ...) {
        va_list vl;
        va_start(vl, count);
        for (int i = 0; i < count; i++)
            /* Do Something ... */;
        va_end(vl);
    }
};

class B : public A {
public:
    // Constructor should pass on args to parent
    B(int count, ...) : A(int count, ????) {}
};

How can I do that? Note: I would prefer to call the constructor in the initialization list and not in the constructor body. But if this is the only way, I am also interested to hear how that works! Thanks

What compiler are you using? You want the new C++0x initializer lists.

He wants a non-sucky initializer, if you ask me - ahem. More importantly, he wants constructor forwarding or variadic templates.

Several duplicates, one of them: http://stackoverflow.com/questions/205529/c-c-passing-variable-number-of-arguments-around

@Kiril: It's no duplicate. I can't use va_list in the initialization list. Why waste everybody's time (including yours) with such comments?

@Ben: I don't want C++0x. I want the old standard's initialization list.

@user578832: I said "initializer list" and I meant initializer list. They're a new feature of C++0x, different from the ctor-initializer, which I think is what you are referring to as an "initialization list". Nothing in this question gave any clue to the fact that you can't use C++0x, so I suggested the best solution assuming you could.

You cannot forward on to an ellipsis. The second constructor will have to take a va_list, I think.

This would be possible with C++0x's base constructor forwarding, or variadic templates. Those are great when the base class has a fixed number of parameters, unknown to you. Variadic arguments are something else, and are handled (and easily forwarded) by std::initializer_list.

Actually, if the parameters aren't all the same type, you would need a variadic template, wouldn't you?

@Ben: Yea.
You can inherit base constructors though, which really settles the whole thing:

struct b {
    b(/* anything */);
};

struct d : b {
    using b::b;
}; // le done

Except that, at least for g++, base constructors with C-style variadic arguments are simply dropped: https://gcc.gnu.org/ml/gcc-patches/2012-10/msg02294.html
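For pre-C++11 code, the answers above imply the portable workaround: the ellipsis itself cannot be re-forwarded, and va_start may only appear in the body of the variadic function, so the base initialization cannot happen in the initializer list at all. Instead, both constructors funnel into a protected va_list-based init() helper. The class layout below is a sketch; summing ints stands in for the question's "Do Something", and all names are illustrative:

```cpp
#include <cassert>
#include <cstdarg>

class A {
public:
    A(int count, ...) : total_(0) {
        va_list vl;
        va_start(vl, count);
        init(count, vl);
        va_end(vl);
    }
    int total() const { return total_; }

protected:
    A() : total_(0) {}  // for derived classes, which call init() themselves

    // The va_list-taking helper both constructors forward to.
    void init(int count, va_list vl) {
        for (int i = 0; i < count; ++i)
            total_ += va_arg(vl, int);  // stand-in for "Do Something"
    }

private:
    int total_;
};

class B : public A {
public:
    B(int count, ...) : A() {  // forwarding happens in the body, not the init list
        va_list vl;
        va_start(vl, count);
        init(count, vl);
        va_end(vl);
    }
};
```

With this, A(3, 1, 2, 3) and B(3, 1, 2, 3) behave identically; the trade-off is exactly the one the asker hoped to avoid — the real initialization runs in the constructor body rather than the mem-initializer list.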
Durazzi, N. (2020), Between rule-makers and rule-takers: Policy change as the interaction of design, compliance and feedback. Journal of European Public Policy. Durazzi, N. (2020), The political economy of employability: Institutional change in British and German higher education. Stato e Mercato. Durazzi, N. and Geyer, L. (2020), Apprenticeships: a public option. Social Europe. Diessner, S., Durazzi, N. and Hope, D. (2020), Reshaping Skills, Industrial Relations and Social Protection for the Knowledge Economy: Evidence from Germany. EUI Working Papers MWP 2020/07. Durazzi, N. (2020), Opening Universities’ Doors for Business? Marketization, the Search for Differentiation and Employability in England. Journal of Social Policy, [online first]. Benassi, C., Durazzi, N. and Fortwengel, J. (2020), Not All Firms Are Created Equal: SMEs and Vocational Training in the UK, Italy, and Germany. MPIfG Discussion Paper 20/4. Durazzi, N. and Geyer, L. (2020), Social Inclusion in the Knowledge Economy: Unions’ Strategies and Institutional Change in the Austrian and German Training Systems. Socio-Economic Review, 18(1), pp. 103-124. Durazzi, N. and Benassi, C. (2020), Going Up-skill: Exploring the Transformation of the German Skill Formation System. German Politics, 20(3), pp. 319-338 [invited article for the special issue Imbalance: Germany’s Political Economy After the Social Democratic Century]. Durazzi, N. (2019), The Political Economy of High Skills: Higher Education in Knowledge-based Labour Markets. Journal of European Public Policy, 26(12), pp. 1799-1817 [winner of the 2018 best paper prize by the Council for European Studies’ Research Network on Political Economy and Welfare]. Durazzi, N., Fleckenstein, T. and Lee, S.C. (2018), Social Solidarity for All? Trade Union Strategies, Labour Market Dualisation and the Welfare State in Italy and South Korea. Politics & Society, 46(2), pp. 205-233. Durazzi, N. (2017), Inclusive Unions in a Dualised Labour Market? 
The Challenge of Organising Labour Market Policy and Social Protection for Labour Market Outsiders. Social Policy & Administration, 51(2), pp. 265-285.

Research interests: welfare reform politics; institutions and institutional change; trade unions; education policy; skill formation.

Topics interested in supervising

I am a political economist interested in comparative social and public policy. My research focuses in particular on the policies underpinning advanced capitalist countries' transition into knowledge-based post-industrial societies, with a specific interest in education, skills and labour market policies. I would be interested in supervising PhD projects in social and public policy. I would particularly welcome enquiries about supervision of projects that focus on the politics and political economy of education and skills, labour market policy and social investment policies in comparative perspective. If you are interested in being supervised by Niccolo Durazzi, please see the links below (opening in new windows) for more information:
import Foundation
import Vapor
import SwiftSoup
import SwiftMarkdown

struct ViewBlogPost: Encodable {
    var blogID: Int?
    var title: String
    var contents: String
    var author: Int
    var created: Date
    var lastEdited: Date?
    var slugUrl: String
    var published: Bool
    var longSnippet: String
    var createdDateLong: String
    var createdDateNumeric: String
    var lastEditedDateNumeric: String?
    var lastEditedDateLong: String?
    var authorName: String
    var authorUsername: String
    var postImage: String?
    var postImageAlt: String?
    var description: String
    var tags: [ViewBlogTag]
}

struct ViewBlogPostWithoutTags: Encodable {
    var blogID: Int?
    var title: String
    var contents: String
    var author: Int
    var created: Date
    var lastEdited: Date?
    var slugUrl: String
    var published: Bool
    var longSnippet: String
    var createdDateLong: String
    var createdDateNumeric: String
    var lastEditedDateNumeric: String?
    var lastEditedDateLong: String?
    var authorName: String
    var authorUsername: String
    var postImage: String?
    var postImageAlt: String?
    var description: String
}

extension BlogPost {
    func toViewPostWithoutTags(authorName: String, authorUsername: String,
                               longFormatter: LongPostDateFormatter,
                               numericFormatter: NumericPostDateFormatter) throws -> ViewBlogPostWithoutTags {
        // Format the optional last-edited date in both styles, if present.
        let lastEditedNumeric: String?
        let lastEditedDateLong: String?
        if let lastEdited = self.lastEdited {
            lastEditedNumeric = numericFormatter.formatter.string(from: lastEdited)
            lastEditedDateLong = longFormatter.formatter.string(from: lastEdited)
        } else {
            lastEditedNumeric = nil
            lastEditedDateLong = nil
        }

        // Pull the first image (and its alt text) out of the rendered markdown.
        let postImage: String?
        let postImageAlt: String?
        let image = try SwiftSoup.parse(markdownToHTML(self.contents)).select("img").first()
        if let imageFound = image {
            postImage = try imageFound.attr("src")
            do {
                let imageAlt = try imageFound.attr("alt")
                if imageAlt != "" {
                    postImageAlt = imageAlt
                } else {
                    postImageAlt = nil
                }
            } catch {
                postImageAlt = nil
            }
        } else {
            postImage = nil
            postImageAlt = nil
        }

        return try ViewBlogPostWithoutTags(
            blogID: self.blogID, title: self.title, contents: self.contents,
            author: self.author, created: self.created, lastEdited: self.lastEdited,
            slugUrl: self.slugUrl, published: self.published,
            longSnippet: self.longSnippet(),
            createdDateLong: longFormatter.formatter.string(from: created),
            createdDateNumeric: numericFormatter.formatter.string(from: created),
            lastEditedDateNumeric: lastEditedNumeric,
            lastEditedDateLong: lastEditedDateLong,
            authorName: authorName, authorUsername: authorUsername,
            postImage: postImage, postImageAlt: postImageAlt,
            description: self.description())
    }

    func toViewPost(authorName: String, authorUsername: String,
                    longFormatter: LongPostDateFormatter,
                    numericFormatter: NumericPostDateFormatter,
                    tags: [BlogTag]) throws -> ViewBlogPost {
        let viewPost = try self.toViewPostWithoutTags(authorName: authorName,
                                                      authorUsername: authorUsername,
                                                      longFormatter: longFormatter,
                                                      numericFormatter: numericFormatter)
        let viewTags = try tags.map { try $0.toViewBlogTag() }
        return ViewBlogPost(
            blogID: viewPost.blogID, title: viewPost.title, contents: viewPost.contents,
            author: viewPost.author, created: viewPost.created, lastEdited: viewPost.lastEdited,
            slugUrl: viewPost.slugUrl, published: viewPost.published,
            longSnippet: viewPost.longSnippet, createdDateLong: viewPost.createdDateLong,
            createdDateNumeric: viewPost.createdDateNumeric,
            lastEditedDateNumeric: viewPost.lastEditedDateNumeric,
            lastEditedDateLong: viewPost.lastEditedDateLong,
            authorName: viewPost.authorName, authorUsername: viewPost.authorUsername,
            postImage: viewPost.postImage, postImageAlt: viewPost.postImageAlt,
            description: viewPost.description, tags: viewTags)
    }
}

extension Array where Element: BlogPost {
    func convertToViewBlogPosts(authors: [BlogUser], tagsForPosts: [Int: [BlogTag]],
                                on container: Container) throws -> [ViewBlogPost] {
        let longDateFormatter = try container.make(LongPostDateFormatter.self)
        let numericDateFormatter = try container.make(NumericPostDateFormatter.self)
        let viewPosts = try self.map { post -> ViewBlogPost in
            guard let blogID = post.blogID else {
                throw SteamPressError(identifier: "ViewBlogPost", "Post has no ID set")
            }
            return try post.toViewPost(authorName: authors.getAuthorName(id: post.author),
                                       authorUsername: authors.getAuthorUsername(id: post.author),
                                       longFormatter: longDateFormatter,
                                       numericFormatter: numericDateFormatter,
                                       tags: tagsForPosts[blogID] ?? [])
        }
        return viewPosts
    }

    func convertToViewBlogPostsWithoutTags(authors: [BlogUser],
                                           on container: Container) throws -> [ViewBlogPostWithoutTags] {
        let longDateFormatter = try container.make(LongPostDateFormatter.self)
        let numericDateFormatter = try container.make(NumericPostDateFormatter.self)
        let viewPosts = try self.map { post -> ViewBlogPostWithoutTags in
            return try post.toViewPostWithoutTags(authorName: authors.getAuthorName(id: post.author),
                                                  authorUsername: authors.getAuthorUsername(id: post.author),
                                                  longFormatter: longDateFormatter,
                                                  numericFormatter: numericDateFormatter)
        }
        return viewPosts
    }
}
This month marks thirteen years of pfSense software releases! It’s amazing to reflect on how the project and community have grown and evolved over the years. Looking back on the journey, Netgate is proud of its involvement and contributions. A few interesting factoids: - pfSense® software was forked from m0n0wall in 2004, and first released in October 2006. - 6 more releases occurred between 2006 and 2012: 1.2 and 1.2.1 in 2008; 1.2.2 and 1.2.3 in 2009; 2.0 and 2.0.1 in 2011 - Since 2012 Netgate has been both the underwriter and steward of the pfSense project. In 2012, the project had an installed base of nearly 100,000 instances, but was significantly challenged by reliability, supportability, and scalability issues. - From 2012 to present, Netgate has contributed over 60% of the 39,892 code commits through Release 2.4.4-p3. Notable contributions include a completely rewritten and modernized GUI; GUI internationalization support for 18 languages; a completely rewritten package manager; a fully redesigned build system and refactored build processes for improved reliability; and strong IPsec support via IKEv2 and AES-GCM crypto - all of which substantially improved reliability, supportability, and scalability. 
- Netgate has provided a rewritten, scalable Automated Configuration Backup (ACB) - first as a pfSense package, but now built into core software - enabling instant, secure offsite backups of firewall configurations with no user intervention - 65 security advisories have been issued by Netgate - each describing problem, impact, and resolution - a testament to the company’s ethos that security is a right, not a privilege - Netgate has contributed 43 pfSense releases - each a compiled and fully tested binary accompanied by a release package, installer, documentation, and distribution - pfSense software has had well over a million consumer, home-lab, SMB, enterprise, educational institution, government agency, and service provider installations - across literally every continent on the planet The above is a synopsis of what the Netgate team has contributed. There are, of course, many unsung heroes who have generously contributed along the way - a few are called out below, though in some cases, we only know a pseudonym for them: - Bill Marquette - Phil Davis - Seth Mos - Bill Meeks - Warren Baker - Colin Fleming - Peter Berbec - Denny Page - Sjon Hortensius - Charlie Marshall - Darren Embry Beyond these, we also wish to acknowledge the greater FreeBSD community for their work on both the FreeBSD base system upon which pfSense software is built, as well as the excellent ports collection that is used for our add-on packages. Finally, one of our highly-valued users and a close friend of the project, Bill Bradford (@mrbill), passed away a few months ago. We’d like you to know Bill worked selflessly to continue assisting the project, even while battling cancer. We are forever grateful for his contributions, humor, generosity, and presence here on earth. Mahalo ā nui, e kuʻu hoaaloha, a hui hou, Bill.
[WEB SECURITY] Repository of site URL structures? chris at casabasecurity.com Tue Jun 21 14:36:47 EDT 2011 > There have been multiple situations where I've needed example of ! and ; > as URL delimeters (which I've seen before but lack urls for), or @ within > a URL (not in the context of user at domain.com auth). Or urls using comma's > such as http://site/foo?12,12,12 . > I am just looking for a central repository that I can point people to. > > https://github.com/cweb/iri-tests/blob/master/tests.xml > > Webkit also has a testing suite at > > http://trac.webkit.org/browser/trunk/LayoutTests/fast/url/ Note: I'm in > > process of incorporating all of these tests into my test.xml above. > Cool this is helpful thanks. Those test cases are included in both of these repositories. Webkit's is at least stable but spread out across as arrays in a bunch of js files. The test suite itself is only concerned with the DOM parsing. It's nice and portable so you can easily run it in any browser. My tests.xml will be changing a lot over the next few weeks to include as many of Webkit's tests as I can plus others. Contributions are welcome! Some goals of mine are to include all of the tests in one XML file with a unique id per test, plus an expected result. The reason you see a domain name like iris.test.ing is because I use a custom DNS zone in my test > > Everyone is definitely not following the RFC guidelines consistently. I > > built a test harness that correlates the DOM parsing of these URIs with > > HTTP request and the DNS queries. The differences are dramatic in some > > cases. > So how come we haven't seen more advisories/bugs from you? Surely there > are tons to be found :) I expect the same :) But there's a lot of work to do still. Right now it seems like interop bugs mostly, and the exploit scenarios seem more distributed, like depending how apps or security WAFs/filters/what-have-you handle the strings. 
Consider the following test cases; can you think of any 'general-purpose'

Test Case: http://0028.iris.test.ing;g

The DOM parsing is different in each of FF, IE, and Opera - while both Safari and Chrome error. FF drops the ";g", Opera uses it in the path, and IE uses it in the hostname...

Scheme   Hostname                Path    Browser
http:    0028.iris.test.ing      /       Firefox/4.0.1
http:    0028.iris.test.ing      /;g     Opera/9.80
http:    0028.iris.test.ing;g            MSIE 7.0

But the raw HTTP request is interesting because Firefox does it differently than its DOM parsing. Neither Chrome, Safari, nor IE even make an HTTP request.

Test Case: http://0029.iris.test.ing;./g

In this slightly different case Firefox has changed its handling of the ";" trailing the hostname, and treats it instead as part of the path.

Scheme   Hostname                Path     Browser
http:    0029.iris.test.ing      /;./g    Firefox/4.0.1
http:    0029.iris.test.ing      /;./g    Opera/9.80
http:    0029.iris.test.ing;.    g        MSIE 7.0

Similar results as above for the HTTP request. There are many more edge cases. Check out the DOM parsing results of the following test case:

foo%7Cbar/ MSIE 7.0

But the more interesting thing here is that the raw HTTP request doesn't match for Safari:

/foo%7Cbar/ MSIE 7.0

In this case Safari's DOM 'path' property is different than the raw HTTP request 'path' it generates to fetch the resource.
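The same divergence is easy to probe outside a browser. For instance, Python's urllib.parse (which follows the RFC-style generic syntax rather than any browser's behavior) makes yet another choice for the trailing ";g", keeping it inside the authority component:

```python
from urllib.parse import urlsplit

# One of the ambiguous test cases from the thread: where does ";g" belong?
parts = urlsplit("http://0028.iris.test.ing;g")
print(parts.scheme, repr(parts.netloc), repr(parts.path))
# urlsplit only treats "/", "?", "#" as authority terminators,
# so ";g" stays in netloc -- like MSIE 7.0 in the table above.

# The comma-delimited query from the thread's http://site/foo?12,12,12 example
parts2 = urlsplit("http://site/foo?12,12,12")
print(repr(parts2.path), repr(parts2.query))
```

This is exactly the kind of cross-parser disagreement the test repository is meant to catalogue: a WAF or filter built on one parser's notion of "hostname" can be bypassed by a client using another's.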
What can you achieve with the Haskell language that you can't achieve with a more commonly used language like Python or Java? I guess I am after the specific benefits of Haskell in addition to the general benefits of functional programming. Are there any real-world applications that warrant its use? Is it purely for the theoretical types and mathematicians who are impressed by its purity, or are there concrete applications that make it outperform Python, Java or other languages? Does it have applications in, say, optimisation? I've searched here for a similar question but couldn't find a direct match apart from requests to add Haskell to the list of languages taught.

3/19/2019 4:14:04 AM - Sonic

Was still asleep, sorry :P One thing Haskell has going for it is that all the code has no side effects ("purity"). You cannot simply print things to the console from anywhere, for example, without explicitly declaring that your function does I/O. This might sound restrictive, but it works in practice, and it means the compiler can do optimizations it couldn't otherwise (memoization!). Also, this means you can trivially run any Haskell code in multiple threads at once without having to worry about deadlocks etc. No side effects = no problems. Also, Haskell is lazy: it only computes what is necessary. You will commonly write down infinitely long lists in Haskell, but if you never request more than 20 items from one, they'll never be computed. There's also stuff like stream fusion, but I'm already running out of space. Lots of concepts from Haskell have found their way into other languages, most prominently monads (JS Promises, for example), and languages on average have become a lot more functional lately! I have found that it's harder to write super performant Haskell, though; it's easy to shoot yourself in the foot without noticing, performance-wise. But you can certainly do it if you're willing to lose a bit of the elegance and terseness of the language.
(You have to force non-lazy evaluation, rewrite stuff already in the standard lib, etc.) That being said, most code performs fine without going too crazy about optimization.

Aside from the benefits of functional programming (safety, correctness, etc.), there is one thing Haskell has which sets it apart from languages like OCaml: the compiler is absolutely incredible. If you write good code, you can get half the performance of a C program. Half! That's incredible! And that's with one of the highest-level languages available, one that also has a garbage collector. For reference, Go, a fast garbage-collected language, is about 1/10th the speed of C, and Python is about 1/100th. But aside from that, no, it's not used much outside academia.

Schindlabua's answer was excellent. Vlad's answer was also very good. Maybe to add more on the real-world application part, I'll try to offer some idea of breadth. We have a local Haskell Meetup that showcases their software. I haven't attended in a while, but I remember one member was working on some sort of music composition software. There are a handful of "Haskell shops" in the US. Since they are both our (the company that I work for) competition and our colleagues, I won't mention them by name, but we get some clients from them every now and again. From that, we gauge that there is a very wide variety of industries that use Haskell in their software. Actually, Facebook uses Haskell for some of their software (you can look it up). We primarily use Haskell for building domain-specific languages and fast trading robots. The more Haskell is used in paid projects, the better the tooling will become, which may encourage further adoption.

I started learning Clojure recently, which is also a functional language, based on Lisp, with Java interoperability. It is awesome in so many ways; it opens up new worlds, new ways of thinking. There are no variables, only immutable data and powerful transformations.
You start to appreciate recursion and realize that almost everything can be expressed with filter, map and reduce. :) Of course I exaggerate, but my Clojure journey has just started and I enjoy it immensely. Also, I want to learn Haskell too. I believe that functional programming requires a mindset for tackling problems which you cannot acquire from fashionable languages like Java or JS. And once you learn it, you can apply it elsewhere and it will make you more productive.

Safety of function correctness and a very good compiler.

what is the answer to the double quote
Nearly headless VMs

Using utmctl

I've been using UTM since I switched to Apple silicon Macs, but I recently discovered its CLI tool, utmctl. It's completely changed how I manage UTM and VMs on my Macs.

How to install

utmctl is included when you install UTM (via the App Store or the package download available on their website). Once installed, you should have the utmctl command in a terminal:

ryan@venusaur ~ $ utmctl
OVERVIEW: CLI tool for controlling UTM virtual machines.

USAGE: utmctl <subcommand>

OPTIONS:
  -h, --help              Show help information.

SUBCOMMANDS:
  list                    Enumerate all registered virtual machines.
  status                  Query the status of a virtual machine.
  start                   Start a virtual machine or resume a suspended virtual machine.
  suspend                 Suspend running a virtual machine to memory.
  stop                    Shuts down a running virtual machine.
  attach                  Redirect the serial input/output to this terminal.
  file                    Guest agent file operations.
  exec                    Execute an application on the guest.
  ip-address              List all IP addresses associated with network interfaces on the guest.
  clone                   Clone an existing virtual machine.

  See 'utmctl help <subcommand>' for detailed help.

utmctl's options are all rather straightforward. I most often use utmctl start VMNAME and utmctl ip-address VMNAME. However, some of the other options, such as

A quick example - Running (nearly) 'headless' VMs

The main improvement that utmctl has provided me is the ability to run my VMs "headless" (mostly), which I am accustomed to doing on Linux. To do this:

- Install and configure a VM in UTM, and ensure that ssh is enabled so it will automatically start on boot. Also, verify that you can properly ssh into the VM.

  # Enable ssh to start. Likely this command on most Linux systems:
  sudo systemctl enable sshd

- Remove the virtual display in the VM's settings. This will prevent the VM window from opening while it is running, as there is no GUI for the VM to run.

- Use utmctl to list or check the status of the VM(s).
ryan@venusaur ~ $ utmctl list
UUID                                  Status   Name
D77A861B-61DD-483A-BA09-D647C77EB77A  stopped  nixos
EADB0899-8B20-48B9-BD23-B179E3C258B3  stopped  fedora

-- or --

ryan@venusaur ~ $ utmctl status fedora
stopped

- Use utmctl to start the VM:

ryan@venusaur ~ $ utmctl start fedora
ryan@venusaur ~ $ utmctl status fedora
started

- Use utmctl to get the IP address from the running VM (it might take a minute until the VM is fully started for this to work):

ryan@venusaur ~ $ utmctl ip-address fedora
192.168.64.33
fda6:d2b:ac2e:c75a:f87:fc14:d8fc:ef8d
fe80::6b82:d89b:3f27:9c3d

- ssh into the VM using the IP address:

ryan@venusaur ~ $ ssh email@example.com
Last login: Fri Jul 28 05:43:17 2023
[ryan@fedora ~]$ # I'm in the VM!

- Although I usually shut down the VM from inside it (which is recommended), you can alternatively shut it down using utmctl:

ryan@venusaur ~ $ utmctl status fedora
started
ryan@venusaur ~ $ utmctl stop fedora
ryan@venusaur ~ $ utmctl status fedora
stopped

After removing the display device, running these steps may still leave the UTM application open in the Dock, but the VM window should not pop up. It's why I consider this to be a "mostly headless" process.

I love a simple but useful CLI tool, and utmctl has been just that. I am glad that the team added it, and that it continues to get updates. For me, it immensely improves my experience using UTM. In fact, while writing this post I finally went and bought the App Store version of UTM to help support the developers. If you also love UTM, I encourage you to do the same.
I have the time series plot shown above. Is it possible to know which model I should use solely by looking at this plot?

In order to make a good model selection you should always do some statistical tests and evaluate the accuracy of your model with a training and a test set (or via forward chaining, because cross-validation is not possible in time series).

1. Basics of the time series

According to your plot you have a univariate time series with ~13 years of data (from 2007 to 2019). However, I do not know the frequency of your data. Do you have monthly or weekly or even daily observations? ARIMA and SARIMA are often good models for monthly data. Nonetheless, in time series with weekly, daily and hourly seasonalities you face distinct problems: there might be multiple seasonalities and uneven numbers of seasons per year (a year has ~52.1429 weeks). In this case you should rather stick to a TBATS model.

2. Stationarity

A time series is (strongly) stationary if it has a constant mean and its higher moments are also constant over time. A time series is weakly stationary if it has constant mean and constant variance over time. In order to simplify, I will discuss whether the data underlying your plot is weakly stationary. While ARIMA can be applied to non-stationary time series, it cannot capture all types of missing stationarity.

2.1 Constant mean (no trend)

Your data apparently has no underlying trend. That makes model selection easier.

2.2 Constant variance

The variance of your data is increasing over time. As far as I know, increasing variance cannot be captured with a standard ARIMA model.

2.3 Unit root and structural change

Until 2017 the variance of your data is steadily increasing, but then the variance suddenly gets very small. This can be either due to noise (and a lack of data after 2017) or due to a structural change. If it is just noise, an ARIMA/SARIMA model can be appropriate, but ARIMA and SARIMA cannot deal with structural breaks.
3. Seasonality

You obviously have a certain type of seasonality. Once per year you have a very high number and the subsequent number is far below the average. Therefore I would stick to a model which takes seasonalities into account, e.g. SARIMA.

The first step in time series modelling is visual inspection. The eye can see things a model can't necessarily see (but be aware: it may also see things that are not there). Hence, what can our eye definitely see? The mean does not seem to change. The process seems to be zero-mean, hence there is no unit root. We can also see that the process is too erratic to be Gaussian. It is a time series driven by (mildly) fat-tailed noise (like a Student's t with 3 degrees of freedom). And the correlations seem to be compatible with some ARMA models. It could be an ARMA(2,2) driven by t-noise. But we can't definitely know that just by visual inspection. Note that even though the eye can reject classes of models and speculate, it cannot estimate and run diagnostics. You can attempt to fit what you think are the most likely models (and avoid fitting unlikely ones), but statistical estimation and diagnostics are what would allow you to get to a model closer (in certain respects at least) to the data generating process.

The mean clearly shifts at the end of 2009 to a lower mean. The model error variance increases at about the same time, suggesting the need for weighted estimation; see "Negative values in time series forecast and high fluctuations in input data" and "How to improve this time series model?". Seasonal pulses or plain pulses seem to emerge in 2011; one would have to actually have the data to diagnose which. The underlying ARIMA model appears to be autoregressive, showing a degree of persistence. There may also be a change in parameters over time, detectable by the Chow test for constancy of ALL parameters in the model (https://en.wikipedia.org/wiki/Chow_test), not just the mean as is implemented in the free R program. Signed by HAWKEYE ...
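The weak-stationarity checks discussed in the answers (constant mean, constant variance) can be sketched in a few lines of Python. The series below is synthetic, with deliberately growing variance to stand in for the plot being discussed; real model selection should use proper tests (ADF, KPSS) and fitting from a statistics package such as statsmodels.

```python
import random
import statistics

random.seed(0)
# Synthetic stand-in for the plotted series: zero mean, variance
# growing over time (156 monthly points is roughly 13 years of data).
series = [random.gauss(0, 1 + t / 50) for t in range(156)]

def halves_summary(xs):
    """Compare mean and variance of the first and second half of a series."""
    mid = len(xs) // 2
    first, second = xs[:mid], xs[mid:]
    return {
        "mean_first": statistics.fmean(first),
        "mean_second": statistics.fmean(second),
        "var_first": statistics.pvariance(first),
        "var_second": statistics.pvariance(second),
    }

s = halves_summary(series)
# Both means stay near zero (no trend), but the variance grows, so the
# series fails the constant-variance condition of weak stationarity and
# a plain ARIMA fit is questionable -- exactly the point made in 2.2.
print(s)
```

This split-halves comparison is only a crude diagnostic, but it makes the "eyeball" reasoning from the answer reproducible.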
Time delay after printing lines with sed or awk in a large file

I have a large file (1 GB) and I need to extract a few lines of it using the record number. I wrote my script with sed and, as it took too much time, I decided to investigate it. It turns out that, when I run something like

sed -n '15689,15696p' filename

the print is quick, but there is a time delay after it, and this is making my script really slow. Doing the same task with awk, the delay is smaller, but it's still there! The command line I used for awk was:

awk 'NR>=15689 && NR<=15696' filename

I tried to print just one line (sed -n '15689p' filename) and the same problem appears! I'm wondering if anyone has seen this before and knows how to get rid of this delay. It seems to me this is a big problem, because the delay occurs after the printing task! I already searched in this and in other forums and I haven't seen a question with this issue. Can someone help me? Thanks

If you tell awk to exit after printing the line you want, does that help? awk (and sed) don't know that they are done unless you tell them to finish/exit. If you don't, they still need to loop over the rest of the file. I think this may reduce the time:

awk 'NR==15689{print; exit}'

Avoid using sed -n '15689,15696p', as sed will go through the entire file. The fastest way I know is this (tail -8 because the range 15689-15696 is 8 lines):

head -15696 filename | tail -8

I benchmarked it, and it runs way faster:

$ seq 1 100000000 > file
$ time (head -50000000 file | tail -10) > /dev/null
real 0m0.694s
user 0m0.830s
sys 0m0.333s
$ time (sed -n '49999991,50000000p' file) > /dev/null
real 0m6.018s
user 0m5.863s
sys 0m0.160s
$ time (sed -n '50000000q;49999991,50000000p' file) > /dev/null
real 0m3.197s
user 0m3.153s
sys 0m0.043s
$ time (awk 'NR>=49999991 && NR<=50000000' file) > /dev/null
real 0m12.665s
user 0m12.543s
sys 0m0.123s
$ time (awk 'NR>=49999991 && NR<=50000000{print} NR==50000001{exit}' file)
real 0m9.104s
user 0m9.010s
sys 0m0.100s

Perfect!
I loved this solution, but the number of lines is not always the same for me and, perhaps, recording the record numbers will take more time than using awk. But I'll try both and see what fits best for me! Thank you very much!!!

Just adding one more comment... why does sed go through the entire file if it has already completed the task I asked it to? This really makes no sense at all... =/

It's because it scans right to the end of the file. Try this to quit after printing:

sed -ne '15690q;15689p' file

Or with awk:

awk 'NR>=15689 && NR<=15696{print} NR==15697{exit}' filename

Just for kicks, I ran @RichardHum's timings and mine are totally the opposite on OS X Mavericks with an SSD drive:

#!/bin/bash -xv
seq 1 100000000 > file
time (head -50000000 file | tail -10) > /dev/null
time (sed -n '50000000q;49999991,50000000p' file) > /dev/null
time (awk 'NR>=49999991 && NR<=50000000{print} NR==50000001{exit}' file)
time (head -50000000 file | tail -10) > /dev/null

and I got:

time (head -50000000 file | tail -10) > /dev/null
real 0m29.565s
user 0m35.711s
sys 0m0.733s
time (sed -n '50000000q;49999991,50000000p' file) > /dev/null
real 0m13.313s
user 0m13.162s
sys 0m0.150s
time (awk 'NR>=49999991 && NR<=50000000{print} NR==50000001{exit}' file)
real 0m7.433s
user 0m7.293s
sys 0m0.139s
time (head -50000000 file | tail -10) > /dev/null
real 0m29.560s
user 0m35.697s
sys 0m0.742s

I even ran the head+tail solution at the end in case it had not had the benefit of caching the first time, but it is definitely miles slower! Thanks!

The first code only worked for one line for me, so it's not good for me. But the second works really well! =)

Are any of your head, tail, sed, and awk the GNU versions or the ones that are shipped by Apple? I think I used GNU awk and the rest were Apple-supplied.
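The early-exit trick from the answers above can be wrapped in a small reusable function, so the line range is parameterized instead of hard-coded. `extract_lines` is a made-up helper name, not a standard tool:

```shell
# Print lines START..END of FILE and stop reading the file as soon as
# END has been passed (the bare pattern NR>=s && NR<=e prints; the
# NR>e{exit} rule is what avoids scanning to end-of-file).
extract_lines() {
  file="$1"; start="$2"; end="$3"
  awk -v s="$start" -v e="$end" 'NR>=s && NR<=e; NR>e{exit}' "$file"
}

# Demo on a small file:
seq 1 100 > /tmp/extract_demo.txt
extract_lines /tmp/extract_demo.txt 5 8   # prints 5 6 7 8, one per line
```

Passing the bounds with `awk -v` keeps the quoting simple and avoids rebuilding the awk program string for every range.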
February 24, 2014

I'm part of a hardware research group at Telefónica Digital called the "Physical Internet Lab". Three years ago we started a small group under the Emerging Technologies area of the company focusing on the Internet of Things. The commitment of the group was (and is), in ambitious terms, "to democratize the Internet of Things", opening it to as many makers, developers and users as possible. Our goal has been not entirely altruistic: Telefónica, as a network operator, has a lot of value to add in the Internet of Things economy. On a day-to-day basis we build prototypes and products, usually connected objects or components like the Thinking Things building blocks.

Setting up the lab three years ago was no easy task. We wanted to work at the crossroads of the Internet, the Things and the People. But our development skills were almost 100% software related. In the process we built a team skilled on all three sides. And we figured out how to do agile hardware.

We've come full circle. Telefónica I+D (the Telefónica Digital development branch) was created 25 years ago to produce hardware innovations such as X.25 and ATM switches. We did that in the classical engineering fashion: writing long and rigid lists of requirements, splitting the work across solution providers, integrating and then testing following a waterfall schema. Over time Telefónica I+D adapted quickly to technology changes, and by the mid-nineties we were developing mostly software. First we followed the same engineering process; then we moved towards more iterative methods. In the last 10 years we have adapted fully to agile methodologies.

As we were building the laboratory we found ourselves getting back to hardware. But the company now could not understand a slow-moving unit. The lab had to be agile. So we had to bring agile methodologies to hardware development.

The first difficulties came with the corporate facilities.
Hardware work demands physical proximity, and we could not afford to have a distributed team depending on collaboration tools on the Internet. At the same time, soldering fumes or drilling noises were not welcome in our modern, bright, open spaces. So the team had to move to a closed office in an old building in Madrid city center.

Moving to the city center was a boon: in minutes we could reach many shops and services, buying anything from hammers to plastic boxes. Visitors now found it easier to come to our central, garage-like office. This was great for our open approach, as we wanted to help and interact with other companies and organizations.

Purchasing tools was another problem. The corporate procedures were tuned for large-scale purchases such as server farms or external services. Buying a handful of resistors for 10 euros could take several weeks, creating bottlenecks in our work. Fortunately, the purchasing department was very understanding. We worked together to redesign the process. Now we can buy any component or tool in a single day while still working by the book.

Hardware work involves multiple teams across several companies with extremely specialized profiles. When setting up the lab we opted for a small and autonomous team, able to build a hardware prototype with no external dependencies. A small team allows us to work closely integrated, in the same location, continuously coordinating our work. A small team also means that budgets are smaller, which is well suited to experimenting, failing, learning and adapting.

Basic agile methodologies such as Scrum expect some degree of overlap between the specializations of team members, so that different people can execute the same tasks, naturally balancing the workload. But hardware work is different. It demands a lot of specialization. In our case most of the tasks can be executed by only one team member. As a result, the Scrum methods and tools have to be modified to reflect this reality.
Our internal workflow follows many steps. The first step is the industrial designer, a role which is somewhat of a novelty on the Telefónica Digital payroll. Carlos (that's his name) starts his work at the CAD station designing the physical product: plastic pieces, metal straps, cloth, magnets. Then he builds the design using the currently available 3D prototyping tools such as the laser cutter, the CNC tool (i.e. a computer-controlled drill) and a variety of 3D printers. These tools give much flavor to the lab. In some cases we start from an existing object that we hack so that we can explain a new concept.

Carlos designs and builds at the same time, which is a bit outside his job profile. Software developers are multi-taskers, too – they design and type, while software architects can also code. In the hardware industry this is somewhat unusual, and typical engineers expect someone else to physically build what they have created. In the lab we follow the software philosophy. It is leaner, and it gives the designer a real feel for the construction of the piece or circuit. This approach demands some tolerance and patience from engineers who have to get their hands dirty.

The same philosophy applies to the next step in the workflow: the electronics engineering part. The electronics engineer first designs new circuits, then prototypes them. We even design and build the PCBs to check that everything fits in place.

The agile doctrine underlines the importance of early user testing. Early use provides rapid feedback, focusing on the most important characteristics of the product and showing what isn't relevant for customers. To shorten the time-to-test we use 3D printing and prototyping technologies. In electronics engineering we make massive use of Open Hardware. Open Hardware gives us access to lots of ready-to-use designs that we can employ in product testing. In a sense, Open Hardware behaves now like Linux and Open Software in the mid-nineties.
It allows us to focus on the real technical or design challenge rather than reinventing the wheel for every test.

Electronics and physical design teams work side by side, so they can verify in real time how components fit in the same object. Our objects become more than simple plastic boxes, as they are tightly coupled with the internal electronics.

Electronics engineers also work with the firmware developers. The firmware developers write the code for the embedded microprocessors. They also have to deal with connectivity issues and power management. In our Physical Internet Lab, electronics and firmware engineers work side by side. In most situations, knowing what the firmware will do simplifies hardware design. Similarly, software developers can ask for fine-grained changes in the hardware designs nearly in real time.

On the other side of firmware sits backend development. In our typical systems architecture, distributed devices communicate with a backend service in the cloud. We push as much intelligence as possible to the backend service, so our designs can evolve without touching the deployed hardware or executing firmware updates. We like to think that the backend gives every object nearly infinite computing power and knowledge, as it can interact with any other Internet service. Again, backend and firmware developers work side by side. This tight collaboration resolves integration problems before they appear, and encourages electronics and firmware developers to take issues to the more powerful (and more agile) backend platforms.

The final technical step is the frontend development, usually based on web and native apps. Again we do a lot of work locally in the lab, well integrated across the team. The frontend is also tested in complete end-to-end scenarios. Automatic testing tools execute scripts that run against the firmware and the frontend. And of course, there is a Quality Assurance side.
We are extending continuous integration, test-driven development and automatic testing to the embedded firmware. At the same time we have to handle more hardware-specific tasks such as sensor calibration and assuring robustness and strength.

The web/application interface and the physical design are the two endpoints of the "development chain" of our group. They form the two interfaces exposed to the final user. At the final part of our workflow, the physical interaction designer works with both the web/app design and the physical design. The physical interaction designer is responsible for the design of the connected object as a whole. He takes care of building a single object with a coherent interaction model in the physical world and on the Internet. Without the physical interaction designer we would have to design the physical object and the application or web interface separately. The result would be a split-personality product, usually an amalgamation of data stuck on top of a square box. The physical interaction designer combines the capabilities of the physical object and the Internet interface in a coherent manner.

Physical interaction design, bringing together the Internet and physical objects, is a completely new field. There are only a handful of specialized schools in the world, and we are also working with UX designers with a strong industrial design background. Everyday physical objects usually have long histories and designs optimized through centuries of use. We still have a lot to learn about how to take the Internet beyond the smartphone/tablet/PC and onto this physical object world. Customers will not adopt Internet of Things devices if they are a step behind the design standards they have become accustomed to in software interfaces. Agility plays a role here, once again. By developing and prototyping quickly we can try interaction designs with users, test our assumptions and build a sizeable body of knowledge around user interaction with connected objects.
Of course we have to work with external providers, especially when dealing with complex technologies or industrialization. For development we often use online services for PCB manufacturing or 3D printing. They are extremely easy to use, robust and fast, and offer a direct web interface instead of long negotiations with a salesperson. For the final manufacturing we interact with real, serious manufacturers. Agile, as a software development doctrine, has no solutions for this task. But Agile can be seen as a spin-off of the Lean philosophy, which was created to deal specifically with manufacturing issues. One of the main lessons from the Lean methods is that service providers have to be tightly integrated into the business process. We have found this is very important for us as well. The lab has spent considerable effort building trust relationships with service providers and manufacturers, integrating their teams with the lab. Schedules and plans are shared under a philosophy of openness. We have even established real-time communication, so their teams get continuous feedback from the engineers in the lab.

We still have a long way to go to create a truly Agile hardware lab. Physical work is sometimes slower than software development. At other times (especially when prototyping on Open Hardware designs) it is blindingly fast and has to pause and wait for software components. These speed differences keep the group working on different "user stories" at the same time. External dependencies are many, and the lab will never be, in that sense, completely autonomous. But we can still find faster service providers and build leaner, more integrated workflows with them. Regarding Quality Assurance, we have to handle the physical device characterization correctly and fit the expensive and slow certifications into the product workflow. The bright side is that Agile methodologies both provide and require continuous improvement.
Every sprint or work cycle forces us to learn and adapt our methodology and organization, looking for a better process. Perhaps in a couple of years we’ll have a completely different process in a completely different lab, and it will be all right.
Encode a symbol in Python

I have a symbol as output and it appears as '' when I print it. So I tried x.encode('utf-8') to get the symbol back; instead I get . I looked at many examples but nothing provides a solution for this. How do I fix this?

What encoding does your terminal / console use?

@MartijnPieters How do I know that?

What OS are you using? What terminal or console program are you using? And what does print repr(x) give you? The exact codepoint would be helpful.

@MartijnPieters OS - Windows 7, and I am using Python IDLE, version 2.7

So you are printing this in IDLE? What does import sys; print sys.stdout.encoding print?

@MartijnPieters repr(x) gives <_sre.SRE_Match object at 0x02D42368>

That is not a string, that is a regular expression match object. That is not the same object as what you are talking about in your question.

@MartijnPieters import sys; print sys.stdout.encoding prints cp1252

@MartijnPieters Well, you're right – my mistake, I referred to a different variable. It gives u'\uf0fc'

That is a private-use codepoint, not an assigned character: http://www.fileformat.info/info/unicode/char/f0fc/index.htm

Let us continue this discussion in chat.

Your console or terminal font doesn't support that codepoint; it is printing just fine otherwise. However, your terminal is not configured to print UTF-8, so printing the UTF-8 bytes results in garbage instead. You need to change the font used for your terminal or console program to show that specific codepoint. However, if the codepoint you are printing is U+F0FC, then no font will print it other than as a placeholder glyph. That is a private-use codepoint, and no common-use font will be able to display it. In this case, you'll have to replace that codepoint with something else to represent whatever it stood for in the original data. In chat you mentioned it was a PowerPoint checkmark.
Your output is limited to the Windows-1252 codepage, so you need to pick a character within that standard, take the Unicode codepoint for that character and replace the U+F0FC codepoints with that value. If you were to use the • character (hex 95 in the 1252 codepage, Unicode point U+2022), for example, then you would replace the private codepoint with:

someunicodestring.replace(u'\uf0fc', u'\u2022')

Is there a solution for this that I can test? Because this is very critical for me, at least!
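A minimal sketch of the suggested replacement, written in Python 3 syntax (the thread used Python 2, but the idea is the same); the sample string is invented for illustration:

```python
# Map the private-use codepoint U+F0FC (a PowerPoint checkmark glyph
# from a symbol font) to the bullet U+2022, which cp1252 can encode.
text = u"Done \uf0fc"
fixed = text.replace(u"\uf0fc", u"\u2022")

print(fixed)                   # -> Done •
print(fixed.encode("cp1252"))  # -> b'Done \x95' (encodes cleanly now)
```

Encoding the original string to cp1252 would raise a UnicodeEncodeError, since U+F0FC has no mapping in that codepage; the replacement is what makes the output printable on a cp1252 console.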